Translated from: docs.ultralytics.com/quick-start…
Quick Start
Requirements
You need Python >= 3.8 and pip to follow this guide.
The remaining requirements are listed in `./requirements.txt`.
If you have multiple versions of Python installed, make sure you are using the correct one.
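If you're unsure which interpreter your shell resolves to, a quick check from inside Python can confirm it meets the minimum version. This is a small illustrative sketch, not part of the YOLOv5 codebase:

```python
import sys

def meets_requirement(version=sys.version_info, minimum=(3, 8)):
    """Return True if the interpreter version satisfies the minimum."""
    return tuple(version[:2]) >= minimum

print(sys.version)          # which interpreter is actually running
print(meets_requirement())  # True on Python >= 3.8
```

Running this with each `python`/`python3` binary on your PATH shows which one is safe to use for the steps below.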
Installation
- Clone the repository
$ git clone https://github.com/ultralytics/yolov5.git
- Enter the repository root directory
$ cd yolov5
- Install the required packages from the root directory of the cloned repository
$ pip install -r requirements.txt
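To sanity-check that the installation succeeded, you can probe for the installed packages from Python. A minimal sketch (the module names listed are an illustrative subset of what `requirements.txt` pulls in, not the full list):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# A few modules installed via requirements.txt (illustrative subset)
print(missing_packages(['torch', 'cv2', 'numpy']))  # [] when everything is installed
```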
Environments
For quick and easy setup, YOLOv5 comes preinstalled with all dependencies in the following environments
*including CUDA/CUDNN, Python and PyTorch
- Google Colab and Kaggle notebooks with free GPU
- Google Cloud Deep Learning VM. See GCP Quickstart Guide
- Amazon Deep Learning AMI. See AWS Quickstart Guide
- Docker Image. See Docker Quickstart Guide
Inference - Detecting Objects
From the Cloned Repository
To start detecting objects with the latest YOLO model, run this command from the repository root. Results are saved to `runs/detect`.
$ python detect.py --source OPTION
Replace OPTION with one of the following to choose what to detect from:
- Webcam : (OPTION = 0) For live object detection from your connected webcam
- Image : (OPTION = filename.jpg) Creates a copy of the image with an object detection overlay
- Video : (OPTION = filename.mp4) Creates a copy of the video with an object detection overlay
- Directory : (OPTION = directory_name/) Creates a copy of every file with an object detection overlay
- Glob file type : (OPTION = directory_name/*.jpg) Creates a copy of every matching file with an object detection overlay
- RTSP stream : (OPTION = rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa) For live object detection from a stream
- RTMP stream : (OPTION = rtmp://192.168.1.105/live/test) For live object detection from a stream
- HTTP stream : (OPTION = http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8) For live object detection from a stream
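The OPTION string alone tells detect.py what kind of input it is dealing with. The rough classification can be sketched as below; this is illustrative only, not YOLOv5's actual source-parsing code:

```python
def source_kind(source):
    """Roughly classify a detect.py --source string (illustrative sketch)."""
    if source.isdigit():
        return 'webcam'     # numeric index, e.g. '0'
    if source.startswith(('rtsp://', 'rtmp://', 'http://', 'https://')):
        return 'stream'     # live network stream
    if '*' in source:
        return 'glob'       # wildcard pattern, e.g. 'dir/*.jpg'
    if source.endswith('/'):
        return 'directory'  # every file in the directory
    return 'file'           # single image or video

print(source_kind('0'))                     # webcam
print(source_kind('directory_name/*.jpg'))  # glob
```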
The following file formats are currently supported:
- Images: bmp, jpg, jpeg, png, tif, tiff, dng, webp, mpo
- Videos: mov, avi, mp4, mpg, mpeg, m4v, wmv, mkv
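When pointing detect.py at a directory, only files with these extensions are processed. A small sketch of such a filter, built from the two lists above (the helper name is hypothetical, not a YOLOv5 function):

```python
# Supported extensions, taken from the lists above
IMG_FORMATS = {'bmp', 'jpg', 'jpeg', 'png', 'tif', 'tiff', 'dng', 'webp', 'mpo'}
VID_FORMATS = {'mov', 'avi', 'mp4', 'mpg', 'mpeg', 'm4v', 'wmv', 'mkv'}

def is_supported(filename):
    """True if the file extension is in either supported-format list."""
    ext = filename.rsplit('.', 1)[-1].lower()
    return ext in IMG_FORMATS | VID_FORMATS

print(is_supported('zidane.jpg'))  # True
print(is_supported('notes.txt'))   # False
```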
From PyTorch Hub
Inference can be run directly from PyTorch Hub without cloning the repository. The necessary files are downloaded to your temporary directory.
Below is an example script using the latest YOLOv5s model and the repository's sample images.
import torch

# Load the latest YOLOv5s model from PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Batch of sample images from the repository
base_url = 'https://github.com/ultralytics/yolov5/raw/master/data/images/'
imgs = [base_url + f for f in ('zidane.jpg', 'bus.jpg')]

# Run inference
results = model(imgs)
results.print()  # or .show(), .save()