
YOLOv12: Attention-Centric Real-Time Object Detectors

February 18, 2025
Authors: Yunjie Tian, Qixiang Ye, David Doermann
cs.AI

Abstract

Enhancing the network architecture of the YOLO framework has long been essential, but efforts have focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capability. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms. YOLOv12 surpasses all popular real-time object detectors in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming the advanced YOLOv10-N / YOLOv11-N by 2.1% / 1.2% mAP at a comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve on DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the computation and 45% of the parameters. More comparisons are shown in Figure 1.
