
fix: fine_tune_batch_norm does not match train_batch_size #5148

Closed · wants to merge 1 commit

Conversation

shuiqingliu

We should set `fine_tune_batch_norm` to `False` when `train_batch_size` is 4, to avoid OOM errors on the limited GPU resources at hand. The details can be found in train.py:

```python
# Set to True if one wants to fine-tune the batch norm parameters in DeepLabv3.
# Set to False and use small batch size to save GPU memory.
flags.DEFINE_boolean('fine_tune_batch_norm', False,
                     'Fine tune the batch norm parameters or not.')
```
