-
For standalone inference in third-party projects, or for repos importing your model into the Python workspace, PyTorch Hub is the recommended method. See the YOLOv5 PyTorch Hub tutorial here, specifically the section on loading custom models.

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt')
model = model.autoshape()  # for PIL/cv2/np inputs and NMS
```

Then once the model is loaded:

```python
from PIL import Image

# Images
img1 = Image.open('zidane.jpg')
img2 = Image.open('bus.jpg')
imgs = [img1, img2]  # batched list of images

# Inference
results = model(imgs, size=640)  # includes NMS
results.print()
```
-
@gpierard results.save() accepts no arguments. See the tutorial for proper usage, or simply view the source (lines 233 to 235 in 49abc72).
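For example (a minimal sketch; the image filename is a placeholder):

```python
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model(Image.open('zidane.jpg'), size=640)
results.save()  # no arguments; annotated images are written to disk
```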
-
@gpierard results.show() will show images with predictions. See the hub tutorial for details.
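For example (a minimal sketch along the same lines):

```python
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model(Image.open('bus.jpg'), size=640)
results.show()  # display annotated images with boxes and labels
```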
-
If you spot a bug, please submit a bug report using the bug report template, with code to reproduce. Thank you!
-
@gpierard YOLOv5 models require RGB inputs. The tutorial shows proper usage with cv2 inputs; I suggest you review the tutorial, or even the hub webpage.
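For example, reversing OpenCV's BGR channel order before inference (a minimal sketch):

```python
import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
img = cv2.imread('bus.jpg')[:, :, ::-1]  # OpenCV loads BGR; reverse to RGB
results = model(img, size=640)
results.print()
```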
-
Oh that’s odd. Can you raise a bug report with an example image and code to reproduce your issue?

On Fri, Dec 18, 2020 at 1:47 PM gpierard wrote:
> Yes, I saw. Thought this might be useful anyway because the proposed method `img2 = cv2.imread('bus.jpg')[:, :, ::-1]  # OpenCV image (BGR to RGB)` does not work with my particular use case.
-
Sure, see here. Cheers
-
Thanks! Will check it out.
On Fri, Dec 18, 2020 at 5:23 PM gpierard wrote:
> Sure, see here <#1735 (comment)>. Cheers
-
How can I edit the cv2 window when opening the webcam? I need to resize the --source 0 (webcam) window.
-
Try:
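A minimal sketch using OpenCV's resizable-window API (this assumes the display window name matches the source string detect.py uses, e.g. '0' for --source 0; verify against your detect.py):

```python
import cv2

# Create a resizable window before the first cv2.imshow() call for this name;
# WINDOW_NORMAL allows both manual and programmatic resizing.
cv2.namedWindow('0', cv2.WINDOW_NORMAL)
cv2.resizeWindow('0', 1280, 720)  # width, height in pixels
```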
-
@glenn-jocher I want to access the results to check whether there is a detection of a given class (let's say fire). If my model has detected fire, I want to execute some function. How can I do this?
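One way is to read the class names out of the detections tensor (a minimal sketch; the custom weights, the 'fire' class, and trigger_alarm() are hypothetical placeholders):

```python
import torch
from PIL import Image

def trigger_alarm():  # hypothetical user-defined action
    print('Fire detected!')

model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='best.pt')  # hypothetical weights
model = model.autoshape()  # for PIL/cv2/np inputs and NMS

results = model(Image.open('frame.jpg'))
names = [results.names[int(cls)] for cls in results.xyxy[0][:, -1]]  # class name per detection
if 'fire' in names:
    trigger_alarm()
```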
-
I am using the following in order to grab screenshots from an open application in real time. How can I run detect.py on `screen` or `scr` without saving to disk and running `os.system('python detect.py ...')` for every frame? Thanks. (PS: more details in my SO question.)
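Rather than shelling out to detect.py per frame, one option is to load the model once via PyTorch Hub (as in the first reply above) and run inference on each capture in memory; a minimal sketch, assuming PIL's ImageGrab for the capture and pretrained yolov5s weights:

```python
import torch
from PIL import ImageGrab  # mss is a faster alternative for screen capture

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

while True:
    screen = ImageGrab.grab()          # full-screen PIL Image (RGB)
    results = model(screen, size=640)  # in-memory inference; no disk I/O, no subprocess
    results.print()
```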