
Add anchor_generator, box_matcher and non_max_supression #1849

Merged: 3 commits merged into keras-team:master on Sep 20, 2024

Conversation

sineeli (Collaborator) commented on Sep 19, 2024

Add the following layers for RetinaNet (a rough usage sketch follows the list):

  1. Non Max Suppression
  2. Anchor Generator
  3. Box Matcher
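
For orientation, here is a rough usage sketch of the three layers. The module paths match the files added in this PR, but the constructor arguments shown are assumptions based on the equivalent KerasCV layers this port derives from, not a verbatim copy of this PR's final API.

```python
from keras_hub.src.models.retinanet.anchor_generator import AnchorGenerator
from keras_hub.src.models.retinanet.box_matcher import BoxMatcher
from keras_hub.src.models.retinanet.non_max_supression import NonMaxSuppression

# All argument names and values below are illustrative assumptions.
anchor_generator = AnchorGenerator(
    bounding_box_format="xyxy",    # assumed box format string
    min_level=3,                   # FPN levels P3..P7, standard for RetinaNet
    max_level=7,
    num_scales=3,
    aspect_ratios=[0.5, 1.0, 2.0],
    anchor_size=4.0,
)
box_matcher = BoxMatcher(
    # IoU below 0.4 -> negative, between 0.4 and 0.5 -> ignored, above 0.5 -> positive
    thresholds=[0.4, 0.5],
    match_values=[-1, -2, 1],
)
nms = NonMaxSuppression(
    bounding_box_format="xyxy",
    from_logits=True,
    iou_threshold=0.5,
    confidence_threshold=0.05,
    max_detections=100,
)
```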

sineeli (Collaborator, Author) commented on Sep 19, 2024

Closed the previous PR and made changes per the new keras-hub rename.

mattdangerw (Member) left a comment


Looks good! Minor comments. Also, a heads up: @fchollet is quickly adding bounding box support to core Keras, so soon we will be able to move our implementation there, though some porting might be required to the final API Francois cooked up.

Review threads (all resolved; code since updated):
- keras_hub/src/models/retinanet/anchor_generator.py (3 comments)
- keras_hub/src/models/retinanet/box_matcher.py (2 comments)
- keras_hub/src/models/retinanet/non_max_supression.py (3 comments)
sineeli (Collaborator, Author) commented on Sep 19, 2024

@mattdangerw wherever necessary, should general ops that we use in layers be cast to self.compute_dtype? Is that correct, and is it mandatory to convert?

mattdangerw (Member) replied:

> @mattdangerw wherever necessary, should general ops that we use in layers be cast to self.compute_dtype? Is that correct, and is it mandatory to convert?

Not sure I totally get the question, but in general, layers should make sure they are doing computation with the compute dtype. This will often happen automatically: variables and inputs will automatically be cast to the compute dtype in call. But when, say, making a new array of floats inside call for whatever reason, that should usually be done with dtype=self.compute_dtype.
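
To make that concrete, here is a minimal sketch (not code from this PR; the layer itself is hypothetical) of creating a new array inside call with the layer's compute dtype:

```python
import keras
from keras import ops

class ScaleByHalf(keras.layers.Layer):
    """Toy layer illustrating the compute-dtype guidance above."""

    def call(self, inputs):
        # `inputs` are already cast to self.compute_dtype automatically.
        # A new array created inside call should be given that dtype
        # explicitly so mixed-precision policies (e.g. "mixed_float16")
        # are respected.
        scale = ops.convert_to_tensor(0.5, dtype=self.compute_dtype)
        return inputs * scale
```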

sineeli (Collaborator, Author) commented on Sep 20, 2024

> Not sure I totally get the question, but in general, layers should make sure they are doing computation with the compute dtype. This will often happen automatically: variables and inputs will automatically be cast to the compute dtype in call. But when, say, making a new array of floats inside call for whatever reason, that should usually be done with dtype=self.compute_dtype.

Thanks @mattdangerw for the clarification. I wanted to know about the case where we declare new arrays inside a layer's call; it's clear now.

mattdangerw merged commit 9676061 into keras-team:master on Sep 20, 2024
7 checks passed