How to Load a YOLOv8 Model with TensorRT

Ali Mustofa
11 min read · Feb 21, 2023

YOLOv8 is the latest version (v8) of the YOLO (You Only Look Once) family of object detectors. YOLO is a real-time, single-stage detector that performs object detection in a single forward pass of the network, making it fast and efficient. YOLOv8 improves on previous YOLO models with higher accuracy and faster inference speed.

TensorRT is a high-performance deep learning inference library developed by NVIDIA. It optimizes trained neural networks for production deployment on NVIDIA GPUs: it takes models created with popular frameworks such as TensorFlow, PyTorch, or Caffe and converts them into highly optimized inference engines for real-time inference on a range of NVIDIA hardware platforms.
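As a minimal sketch of what deploying such an engine looks like from Python (the function name and file path are illustrative, not from this article; actually running it requires the `tensorrt` package and an NVIDIA GPU):

```python
def load_engine(path):
    """Deserialize a serialized TensorRT engine file into a usable engine.

    Sketch only: the `tensorrt` package and a CUDA-capable GPU are
    required at runtime, which is why the import is done lazily here.
    """
    import tensorrt as trt  # only needed on a machine with TensorRT installed
    logger = trt.Logger(trt.Logger.WARNING)   # log warnings and errors only
    runtime = trt.Runtime(logger)             # runtime used to deserialize engines
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

# Usage (on a GPU machine, with an engine built beforehand):
# engine = load_engine("yolov8s.engine")
```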

  1. How to export the model to TensorRT?
git clone https://github.com/triple-Mu/YOLOv8-TensorRT.git
cd YOLOv8-TensorRT

# Install requirements
pip3 install -r requirements.txt

Clone the repository (git clone https://github.com/triple-Mu/YOLOv8-TensorRT.git) and then install the requirements.

# Download the YOLOv8s weights
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt
# Export to ONNX
python3 export-det.py \
--weights yolov8s.pt \…
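The export command above produces an ONNX file, which still has to be built into a serialized TensorRT engine. One common way to do this is with `trtexec`, which ships with the TensorRT distribution (a sketch, assuming the export wrote `yolov8s.onnx` and that `trtexec` is on the PATH):

```shell
# Build a serialized TensorRT engine from the exported ONNX model.
# --fp16 enables half-precision kernels where the GPU supports them;
# drop it to build a full-precision (FP32) engine instead.
trtexec --onnx=yolov8s.onnx \
        --saveEngine=yolov8s.engine \
        --fp16
```

The resulting `.engine` file is specific to the GPU and TensorRT version it was built with, so engines are normally rebuilt on the target machine rather than copied between machines.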
