From cca5852b787dcf3b78e316fcbcbfafff933322ad Mon Sep 17 00:00:00 2001
From: qingliu
Date: Tue, 21 Aug 2018 14:34:56 +0800
Subject: [PATCH] fix: fine_tune_batch_norm does not match train_batch_size

Set fine_tune_batch_norm to false when train_batch_size is 4, so that
training does not run out of GPU memory on machines with limited
resources. The rationale is documented in train.py:

> # Set to True if one wants to fine-tune the batch norm parameters in DeepLabv3.
> # Set to False and use small batch size to save GPU memory.
> flags.DEFINE_boolean('fine_tune_batch_norm', False,
>                      'Fine tune the batch norm parameters or not.')
---
 research/deeplab/local_test.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/research/deeplab/local_test.sh b/research/deeplab/local_test.sh
index 65c827af475..0e4ed9b93dd 100644
--- a/research/deeplab/local_test.sh
+++ b/research/deeplab/local_test.sh
@@ -86,7 +86,7 @@ python "${WORK_DIR}"/train.py \
   --train_crop_size=513 \
   --train_batch_size=4 \
   --training_number_of_steps="${NUM_ITERATIONS}" \
-  --fine_tune_batch_norm=true \
+  --fine_tune_batch_norm=false \
   --tf_initial_checkpoint="${INIT_FOLDER}/deeplabv3_pascal_train_aug/model.ckpt" \
   --train_logdir="${TRAIN_LOGDIR}" \
   --dataset_dir="${PASCAL_DATASET}"
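
Note for readers with enough GPU memory: batch norm can still be fine-tuned
by raising the batch size instead of disabling the flag. A minimal sketch of
that alternative invocation, assuming the larger batch fits on your hardware
(the value 16 is illustrative, not part of this patch; only the two marked
flags differ from the script above):

  # Hypothetical alternative: keep fine_tune_batch_norm=true, but only with
  # a batch size large enough for stable batch norm statistics.
  python "${WORK_DIR}"/train.py \
    --train_crop_size=513 \
    --train_batch_size=16 \
    --training_number_of_steps="${NUM_ITERATIONS}" \
    --fine_tune_batch_norm=true \
    --tf_initial_checkpoint="${INIT_FOLDER}/deeplabv3_pascal_train_aug/model.ckpt" \
    --train_logdir="${TRAIN_LOGDIR}" \
    --dataset_dir="${PASCAL_DATASET}"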