Inference with TensorRT on torch.hub doesn't work properly #7822
Comments
@kzyadaking I'm not able to reproduce any issues using your commands in Colab.
@glenn-jocher Just figured out that it works when it's converted as FP32, but it doesn't work when it's FP16.
@kzyadaking All YOLOv5 TensorRT exports are fixed at FP16.
@glenn-jocher Does that mean you don't really need --half when exporting? Also, if you export using trtexec (the official TensorRT binary) with FP16 enabled, it still won't work (detect.py still works; the failure is only from torch.hub). The same goes for export.py: if you pass --half, it won't work with torch.hub, even though --half shouldn't be needed if exports are fixed at FP16 like you said. So there must be a bug, I guess.
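(For context, the trtexec route mentioned above would look roughly like the sketch below; the weight and file names are placeholders, and the flags shown are standard export.py and trtexec options.)

```python
import subprocess

# Illustrative sketch of the trtexec export path described above (file names are placeholders):
# 1) export the model to ONNX with YOLOv5's export.py
subprocess.run(["python", "export.py", "--weights", "yolov5s.pt", "--include", "onnx"], check=True)
# 2) build an FP16 TensorRT engine with the official trtexec binary
subprocess.run(["trtexec", "--onnx=yolov5s.onnx", "--saveEngine=yolov5s.engine", "--fp16"], check=True)
```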
@kzyadaking TRT exports are fixed at FP16. TensorRT inference with detect.py and PyTorch Hub both work correctly. If you have evidence to the contrary, please submit a bug report with exact code to reproduce the problem.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs. Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@kzyadaking I can confirm that I am facing the same issue. @glenn-jocher To replicate it, please export the engine of the yolov5x6.pt model with --half enabled using export.py, and then use the generated engine file for inference with torch.hub. You won't get any detections.
@mic2112 Thanks for letting us know about the problem. I am able to reproduce it with this code. Added a TODO to investigate: TRT --half export with PyTorch Hub inference bug.
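(The exact reproduction code was not preserved in this thread. A minimal sketch of the steps described above, with placeholder file and image paths, might look like this; before the fix further below, the FP16 engine silently returned empty results.)

```python
# Reproduction sketch (placeholder paths):
# 1) export an FP16 TensorRT engine:
#    python export.py --weights yolov5x6.pt --include engine --half --device 0
import torch

# 2) load the exported engine through PyTorch Hub
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5x6.engine")

# 3) run inference; with an FP16 engine this produced no detections before the fix
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```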
@mic2112 Until this is resolved, please export TRT models at FP32 precision (the default), which does work correctly with PyTorch Hub.
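(For reference, the FP32 workaround is the same export with --half omitted; a sketch, with the weights name as a placeholder:)

```python
import subprocess

# FP32 workaround sketch: omit --half, since FP32 is the default export precision
subprocess.run(
    ["python", "export.py", "--weights", "yolov5x6.pt", "--include", "engine", "--device", "0"],
    check=True,
)
```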
@mic2112 @kzyadaking Good news 😃! Your original issue may now be fixed ✅ in PR #8435. This PR adds automatic casting to FP16 in the DetectMultiBackend class. I tested TRT FP16 models with PyTorch Hub and they work correctly in my test. To receive this update, pull the latest YOLOv5 code (e.g. git pull, or reload the PyTorch Hub cache with force_reload=True).
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
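(The PR is described above as adding automatic casting to FP16 in DetectMultiBackend. Conceptually, the guard amounts to something like the simplified sketch below, not the verbatim diff, where the fp16 flag reflects the backend's precision.)

```python
import torch

# Simplified sketch of the idea behind the fix (not the verbatim PR diff):
# cast the input tensor to FP16 when the backend, e.g. a TensorRT engine
# exported with --half, expects half-precision inputs.
def autocast_input(im: torch.Tensor, fp16: bool) -> torch.Tensor:
    if fp16 and im.dtype != torch.float16:
        im = im.half()  # match the engine's expected input dtype
    return im
```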
@glenn-jocher Thank you so much, this was proving to be a showstopper for us. Thanks a bunch for fixing it so fast. You rock!
* TRT `--half` fix: autocast images to FP16. Resolves bug raised in ultralytics/yolov5#7822.
* Update common.py
@mic2112 you're very welcome! 😊 We're delighted to be of assistance. However, the real accolades go to the YOLO community, as well as the diligent folks on the Ultralytics team who helped to resolve the issue. If you encounter any other challenges, please don't hesitate to reach out. We're always here to help!
Search before asking
YOLOv5 Component
PyTorch Hub
Bug
So after converting .pt to .engine, you can run inference with detect.py without any problem, but when using torch.hub it seems that it's not able to find any objects.
Environment
- YOLOv5 v6.1
- Windows 10
- CUDA 11.3, cuDNN 8.2.1, TensorRT 8.2.5.1, PyTorch 1.11.0
Minimal Reproducible Example
Additional
No response
Are you willing to submit a PR?