
Inference with TensorRT on torch.hub doesn't work properly #7822

Closed
1 of 2 tasks
kzyadaking opened this issue May 15, 2022 · 12 comments · Fixed by #8435
Labels
bug Something isn't working

Comments

@kzyadaking

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

PyTorch Hub

Bug

export.py --weights yolov5s.pt --include engine --device 0 --half --workspace 8
detect.py --weights yolov5s.engine --source bus.jpg (bus.jpg from data/images)

#result - detect.py 
image 1/2 C:\yolov5\data\images\bus.jpg: 640x640 4 persons, 1 bus, Done. (0.002s)
Speed: 0.5ms pre-process, 2.5ms inference, 3.5ms NMS per image at shape (1, 3, 640, 640)

So after converting .pt to .engine you can run inference with detect.py without any problem, but when using torch.hub it seems that it's not able to find any objects.

model = torch.hub.load('./', 'custom', path='yolov5s.engine', source='local')
img = 'bus.jpg'

results = model(img).print()

#result - torch.hub 
Speed: 13.0ms pre-process, 3.0ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640)
image 1/1: 1080x810 (no detections)

Environment

- YOLOv5 v6.1
- Windows 10
- CUDA 11.3, cuDNN 8.2.1, TensorRT 8.2.5.1, PyTorch 1.11.0

Minimal Reproducible Example

export.py --weights yolov5s.pt --include engine --device 0 --half --workspace 8 

model = torch.hub.load('./', 'custom', path='yolov5s.engine', source='local')
img = 'bus.jpg'

results = model(img).print()

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@kzyadaking kzyadaking added the bug Something isn't working label May 15, 2022
@glenn-jocher
Member

@kzyadaking I'm not able to reproduce any issues using your commands in Colab:
https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb?hl=en

[Screenshot: 2022-05-16 at 00 27 48]

@kzyadaking
Author

kzyadaking commented May 16, 2022

@glenn-jocher just figured out that it works when it's converted as FP32 but doesn't work when it's FP16.
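One way to picture why an FP16 engine could silently fail while an FP32 one works (an illustrative NumPy sketch, not YOLOv5 code): if FP32 input bytes reach an input binding built for FP16 without an explicit cast, every 4-byte float is read as two unrelated 2-byte floats, so the network sees noise rather than the image.

```python
import numpy as np

# Normalized pixel values, as the preprocessing step produces them (FP32).
x32 = np.array([0.5, 0.25, 1.0], dtype=np.float32)

# Correct handling: explicitly cast to FP16 before feeding an FP16 engine.
x16 = x32.astype(np.float16)  # values preserved (within FP16 precision)

# Faulty handling: the FP32 buffer is read as FP16 without casting,
# so each 4-byte float is reinterpreted as two unrelated 2-byte floats.
garbled = np.frombuffer(x32.tobytes(), dtype=np.float16)

print(x16)      # values survive the cast
print(garbled)  # twice as many elements, none of them the original values
```

This matches the symptom in the report: inference still runs without error, but the outputs are meaningless, so NMS returns no detections.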

@glenn-jocher
Member

@kzyadaking all YOLOv5 TensorRT exports are fixed at FP16.

@kzyadaking
Author

kzyadaking commented May 18, 2022

@glenn-jocher does that mean you don't really need --half when exporting? Also, if you export using trtexec (the official TensorRT binary) with FP16 enabled, it still won't work (detect.py still works; only torch.hub fails). The same goes for export.py: if you use --half it won't work with torch.hub, even though you don't really need --half since it's fixed at FP16 like you said. So there must be a bug, I guess.

@glenn-jocher
Member

@kzyadaking TRT exports are fixed at FP16. TensorRT inference with detect.py and PyTorch Hub both work correctly. If you have evidence to the contrary please submit a bug report with exact code to reproduce the problem.

@github-actions
Contributor

github-actions bot commented Jun 18, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@mic2112

mic2112 commented Jun 30, 2022

@kzyadaking I can confirm that I am facing the same issue.

@glenn-jocher To replicate it, please export the engine for the yolov5x6.pt model with --half enabled using export.py, then use the generated engine file for inference with torch.hub. You won't get any detections.
You can run the notebook below to replicate the issue:
(https://colab.research.google.com/drive/1YP2qiDBI4qXgZ-Iypq_-F2TCOx7IWKdY?usp=sharing)

> @glenn-jocher does that mean you don't really need --half when exporting? Also, if you export using trtexec (the official TensorRT binary) with FP16 enabled, it still won't work (detect.py still works; only torch.hub fails). The same goes for export.py: if you use --half it won't work with torch.hub, even though you don't really need --half since it's fixed at FP16 like you said. So there must be a bug, I guess.

@glenn-jocher
Member

@mic2112 thanks for letting us know about the problem. I am able to reproduce with this code. Added a TODO to investigate.

TODO: TRT --half export with PyTorch Hub inference bug

[Screenshot: 2022-06-30 at 23 09 46]

@glenn-jocher
Member

@mic2112 until this is resolved please export TRT models at FP32 precision (the default), which does work correctly with PyTorch Hub.

glenn-jocher added a commit that referenced this issue Jul 1, 2022
@glenn-jocher glenn-jocher linked a pull request Jul 1, 2022 that will close this issue
@glenn-jocher glenn-jocher removed the TODO label Jul 1, 2022
glenn-jocher added a commit that referenced this issue Jul 1, 2022
* TRT `--half` fix autocast images to FP16

Resolves bug raised in #7822

* Update common.py
@glenn-jocher
Member

@mic2112 @kzyadaking good news 😃! Your original issue may now be fixed ✅ in PR #8435. This PR adds automatic casting to FP16 in the DetectMultiBackend class. I tested TRT FP16 models with PyTorch Hub and they work correctly in my test:

[Screenshot: 2022-07-01 at 3 40 35 PM]
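The change in #8435 boils down to casting the incoming image to the precision the backend reports before running inference (in YOLOv5 this is done on torch tensors inside DetectMultiBackend). A minimal NumPy sketch of that idea, with a hypothetical helper name:

```python
import numpy as np

def cast_to_model_precision(im: np.ndarray, fp16: bool) -> np.ndarray:
    """Hypothetical helper mirroring the PR's idea: make the input dtype
    match the precision the engine was built with before inference."""
    target = np.float16 if fp16 else np.float32
    return im if im.dtype == target else im.astype(target)

# A letterboxed 640x640 image batch in FP32, as preprocessing produces it.
im = np.zeros((1, 3, 640, 640), dtype=np.float32)

print(cast_to_model_precision(im, fp16=True).dtype)   # float16
print(cast_to_model_precision(im, fp16=False).dtype)  # float32
```

With this in place, callers no longer need to know whether the engine was exported with --half; the backend adapts the input automatically.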

To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – view the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@mic2112

mic2112 commented Jul 1, 2022

@glenn-jocher Thank you so much, this was proving to be a showstopper for us. Thanks a bunch for fixing it so fast. You rock!

merouaneamqor pushed a commit to merouaneamqor/yolov5-improved that referenced this issue Jul 1, 2022
Shivvrat pushed a commit to Shivvrat/epic-yolov5 that referenced this issue Jul 12, 2022
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this issue Sep 8, 2022
@glenn-jocher
Member

@mic2112 you're very welcome! 😊 We're delighted to be of assistance. However, the real accolades go to the YOLO community, as well as the diligent folks on the Ultralytics team who helped to resolve the issue. If you encounter any other challenges, please don't hesitate to reach out. We're always here to help!
