
fix: fine_tune_batch_norm does not match the train_batch_size #5148

Closed
wants to merge 1 commit

Commits on Aug 21, 2018

  1. fix: fine_tune_batch_norm does not match the train_batch_size

     We should set fine_tune_batch_norm to False when train_batch_size is 4, to avoid running out of GPU memory (OOM) on limited hardware.
     The details can be found in train.py:

         # Set to True if one wants to fine-tune the batch norm parameters in DeepLabv3.
         # Set to False and use small batch size to save GPU memory.
         flags.DEFINE_boolean('fine_tune_batch_norm', False,
                              'Fine tune the batch norm parameters or not.')
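
     As a rough illustration (not the PR's diff), the constraint the title describes could also be made explicit with a runtime check. The sketch below assumes the TF 1.x flags API that deeplab's train.py used in 2018; the flag defaults, the helper name check_batch_norm_config, and the batch-size threshold of 12 are illustrative assumptions:

         import tensorflow as tf  # TF 1.x, as used by deeplab's train.py at the time

         flags = tf.app.flags
         FLAGS = flags.FLAGS

         # Flags mirroring the ones quoted from train.py above (defaults illustrative).
         flags.DEFINE_integer('train_batch_size', 4,
                              'The number of images in each batch during training.')
         flags.DEFINE_boolean('fine_tune_batch_norm', False,
                              'Fine tune the batch norm parameters or not.')


         def check_batch_norm_config():
           # Hypothetical helper: fine-tuning batch norm statistics is only
           # reliable with reasonably large batches, and it also raises memory
           # use, so warn when the two flags conflict. The threshold of 12 is
           # an assumption, not a value from this PR.
           if FLAGS.fine_tune_batch_norm and FLAGS.train_batch_size < 12:
             tf.logging.warning(
                 'fine_tune_batch_norm=True with train_batch_size=%d may OOM '
                 'or yield poor statistics; consider '
                 '--fine_tune_batch_norm=false for small batches.',
                 FLAGS.train_batch_size)

     In practice, the documentation fix amounts to the same advice: pass --fine_tune_batch_norm=false whenever the batch size is too small to support fine-tuning the batch norm parameters.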
    shuiqingliu authored Aug 21, 2018
    Commit cca5852