How can I move and run the classification model out of YOLOV5 codebase? #8790
Comments
@AI-Passionner this is a good question. In general, for any PyTorch model you'll need to have the model's modules available in the workspace prior to loading. PyTorch Hub accomplishes this in the background by downloading and caching the repo in a separate directory. To point PyTorch Hub to a specific branch of a repo, i.e. the classifier branch of the YOLOv5 repo, you can try this, though the classifier should be merged into master as part of the upcoming v6.2 release in the next few weeks:

```python
model = torch.hub.load('ultralytics/yolov5:classifier', 'custom', path='path/to/model.pt')  # local model
```
@glenn-jocher Thank you. Let me try. I bet this is what I am looking for. Really appreciate your help.
The classification model was trained, but loading it still threw an error message similar to the one above.
@AI-Passionner got it. It looks like we need to add some classifier-specific code to hubconf.py to handle these models differently, which makes sense. I'll add a TODO for this.
@glenn-jocher Thank you.
@AI-Passionner I think this should work now for newly trained models of type models.yolo.ClassificationModel (but not for older models of type models.yolo.Model). I'll run some tests to verify today.
@glenn-jocher Thank you. I will run training again and give it a try.
@AI-Passionner I've confirmed PyTorch Hub loading works in a test just now on Google Colab. I uploaded a recent ImageNet-trained YOLOv5m-cls model (about 75% top-1 accuracy after 90 epochs) and then requested it like this. This does not need a local git clone of YOLOv5:

```python
import torch

model = torch.hub.load('ultralytics/yolov5:classifier', 'custom', 'yolov5m-cls.pt')  # or use a local cls model
print(type(model))
```

EDIT: Once we release v6.2 with built-in classification support, you will no longer need to point torch.hub.load() to the classifier branch.
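As a side note on the `'ultralytics/yolov5:classifier'` string above: PyTorch Hub accepts an `owner/repo:ref` spec, where the part after the colon selects the branch or tag. A minimal sketch of that split logic, for illustration only (the helper name `parse_hub_spec` is my own, not part of torch.hub):

```python
def parse_hub_spec(spec: str, default_ref: str = "master"):
    """Split an 'owner/repo:ref' string like the one passed to torch.hub.load().

    Hypothetical helper for illustration; torch.hub performs an equivalent
    split internally before downloading and caching the repo.
    """
    repo, _, ref = spec.partition(":")   # ref is "" when no colon is present
    owner, name = repo.split("/", 1)
    return owner, name, ref or default_ref

print(parse_hub_spec("ultralytics/yolov5:classifier"))  # ('ultralytics', 'yolov5', 'classifier')
print(parse_hub_spec("ultralytics/yolov5"))             # ('ultralytics', 'yolov5', 'master')
```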
* Fix TensorRT --dynamic excess outputs bug (potential fix for #8790)
* Cleanup
* Update common.py
* Update common.py
* New fix
Great. Really appreciate your effort. YOLOv5 is the best fit for our object detection project so far, in terms of both accuracy and speed.
I am getting the same issue trying to run inference with classification models through Hub.
@UygarUsta can you please submit a minimum reproducible example?
@UygarUsta works for me.
Add PyTorch Hub loading of official and custom trained classification models to CI checks. May help resolve #8790 (comment) Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
@UygarUsta added a CI check for this use case and can now reproduce in https://github.com/ultralytics/yolov5/runs/7905622621?check_suite_focus=true, I'll investigate.
* Add PyTorch Hub classification CI checks: add PyTorch Hub loading of official and custom trained classification models to CI checks. May help resolve #8790 (comment). Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
* Update hubconf.py. Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
@UygarUsta good news 😃! Your original issue may now be fixed ✅ in PR #9027. To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
Unfortunately, another error arises. I also tried my luck with the ultralytics cache. As always, thank you for your hard work.
It's not working:

```
ValueError: Cannot find classifier in https://github.com/ultralytics/yolov5. If it's a commit from a forked repo, please call hub.load() with forked repo directly.
```
Understood, @aiakash. The error may be due to the
Search before asking
Question
It seems that loading the classification model can't be done outside of the YOLOv5-Classifier code base. When loading the model with

```python
model = torch.load('path/to/best.pt', map_location=torch.device('cpu'))['model'].float()
```

it throws an error like this:

```
module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'models'
```
Once the folders `models` and `utils` are copied to a new project directory, the loading works. I am asking if there is a way I can load the classification model, use it as a stand-alone, and, for example, convert it to TensorFlow like the object detection codebase.
As far as I know, the YOLOv5 object detection models don't have this problem.
Thanks.
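For reference, copying `models` and `utils` is equivalent to making a local YOLOv5 clone importable before torch.load runs; a common workaround is to put the clone on sys.path instead of copying folders. A minimal sketch (the path below is a placeholder for your own clone location):

```python
import sys
from pathlib import Path

# Placeholder path to a local YOLOv5 clone; adjust to your setup.
YOLOV5_DIR = Path("path/to/yolov5")

# Putting the repo on sys.path lets pickle resolve the checkpoint's
# "models.*" and "utils.*" classes without duplicating the folders.
sys.path.insert(0, str(YOLOV5_DIR))
```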
Additional
No response