
YOLACT++: Better Real-time Instance Segmentation (1912.06218v2)

Published 3 Dec 2019 in cs.CV, cs.LG, and eess.IV

Abstract: We present a simple, fully-convolutional model for real-time (>30 fps) instance segmentation that achieves competitive results on MS COCO evaluated on a single Titan Xp, which is significantly faster than any previous state-of-the-art approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation variant manner, despite being fully-convolutional. We also propose Fast NMS, a drop-in 12 ms faster replacement for standard NMS that only has a marginal performance penalty. Finally, by incorporating deformable convolutions into the backbone network, optimizing the prediction head with better anchor scales and aspect ratios, and adding a novel fast mask re-scoring branch, our YOLACT++ model can achieve 34.1 mAP on MS COCO at 33.5 fps, which is fairly close to the state-of-the-art approaches while still running at real-time.

Authors (4)
  1. Daniel Bolya (14 papers)
  2. Chong Zhou (12 papers)
  3. Fanyi Xiao (25 papers)
  4. Yong Jae Lee (88 papers)
Citations (372)

Summary

An Overview of Efficient Real-Time Instance Segmentation with YOLACT++

The paper presents YOLACT++, a fully-convolutional approach to real-time instance segmentation that achieves strong results on the challenging MS COCO dataset. With a primary focus on speed, the method balances segmentation quality against inference cost, operating above 30 frames per second (fps) on a single GPU.

Key Contributions

YOLACT++ seeks to bridge a notable gap in computer vision by offering a one-stage model that maintains competitive accuracy while significantly improving inference speed. The method breaks instance segmentation into two parallel subtasks: generating a set of prototype masks and predicting per-instance mask coefficients. The final instance masks are produced by linearly combining the prototypes with their respective coefficients. Notably, this linear-combination strategy avoids ROI repooling entirely, streamlining computation.
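The linear-combination step described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed shapes (k prototypes over an h×w grid, n detected instances), not the authors' implementation, which also crops masks to predicted boxes:

```python
import numpy as np

def assemble_masks(prototypes, coefficients):
    """Combine k prototype masks with per-instance coefficients.

    prototypes:   (h, w, k) array produced over the whole image.
    coefficients: (n, k) array, one k-vector per detected instance.
    Returns:      (n, h, w) array of instance masks in [0, 1].
    """
    # Linear combination: each instance mask is a weighted sum of the
    # prototypes, passed through a sigmoid nonlinearity.
    lin = np.einsum("hwk,nk->nhw", prototypes, coefficients)
    return 1.0 / (1.0 + np.exp(-lin))

# Toy example: 2 prototypes, 1 instance whose coefficients select the
# first prototype and suppress the second.
protos = np.stack([np.ones((4, 4)), np.zeros((4, 4))], axis=-1)  # (4, 4, 2)
coeffs = np.array([[5.0, -5.0]])                                 # (1, 2)
masks = assemble_masks(protos, coeffs)
```

Because the combination is a single matrix multiply over full-resolution prototypes, no per-instance feature repooling is needed, which is what the paper credits for the high mask quality.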

Technical Advancements

  1. Efficient Prototype and Coefficient Generation: The method employs an FCN to generate a set of prototype masks for the entire image, while a separate branch predicts a vector of mask coefficients per anchor. This decoupling of spatially aware segmentation tasks allows the network to produce high-quality masks without relying on traditional repooling.
  2. Fast Non-Maximum Suppression (NMS): YOLACT++ introduces Fast NMS, which replaces the sequential suppression loop of standard NMS with parallel matrix operations. This drop-in replacement is roughly 12 milliseconds faster than standard NMS, at the cost of only a marginal drop in accuracy.
  3. Deformable Convolution and Optimized Heads: Incorporating deformable convolutions within the backbone enhances the network's ability to adapt to varied instance scales and orientations. Additionally, refining anchor designs for the detection heads improves recall rates and recognition accuracy.
  4. Fast Mask Re-scoring Network: By introducing an additional mask scoring branch, YOLACT++ better aligns class confidence scores with mask quality, thus optimizing the ranking of predictions based on their likelihood of correctness.
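The Fast NMS idea in point 2 can be sketched with plain NumPy. This is a simplified, hedged version under the assumption that boxes arrive sorted by descending score; the authors' implementation additionally batches over classes:

```python
import numpy as np

def box_iou(boxes):
    """Pairwise IoU for an (n, 4) array of (x1, y1, x2, y2) boxes."""
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def fast_nms(boxes, iou_thresh=0.5):
    """Fast NMS: `boxes` must be sorted by descending score.

    Every box is compared against all higher-scoring boxes at once.
    Already-suppressed boxes may still suppress others, which is the
    source of the small accuracy penalty noted in the paper.
    """
    iou = np.triu(box_iou(boxes), k=1)   # keep only higher-scoring pairs
    max_iou = iou.max(axis=0)            # worst overlap with a better box
    return max_iou <= iou_thresh         # boolean keep mask

boxes = np.array([[0.0, 0.0, 10.0, 10.0],     # highest score
                  [1.0, 1.0, 10.0, 10.0],     # heavy overlap -> suppressed
                  [20.0, 20.0, 30.0, 30.0]])  # disjoint -> kept
keep = fast_nms(boxes)
```

The whole procedure is a single IoU matrix, an upper-triangular mask, and a column-wise max, so it parallelizes trivially on a GPU, unlike the inherently sequential standard NMS loop.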

Experiments and Results

Experiments on the COCO dataset show that YOLACT++ achieves 34.1 mAP with a ResNet-50 backbone while operating at 33.5 fps, a clear improvement in the speed-accuracy trade-off over existing models such as Mask R-CNN, which is more accurate but far slower. Qualitative results also exhibit superior mask quality and temporal stability across video frames, both attributed to the absence of feature repooling.

Implications and Future Directions

The advancement of YOLACT++ has both practical and theoretical implications. Practically, the model offers a deployable solution for real-time applications such as autonomous driving and interactive editing, where latency can critically impair utility. Theoretically, YOLACT++ demonstrates that explicit localization steps such as repooling can be replaced by a learned linear combination of prototypes, which may inspire subsequent research in vision.

Future work could further improve prototype learning or explore combinations with other lightweight computational techniques. Closing the gap between mask and box mAP remains an open challenge, pointing to research avenues in refining spatial and semantic understanding within neural network designs, for example through enhanced feature learning or more sophisticated architectures.

In summary, YOLACT++ provides a compelling approach to real-time instance segmentation, effectively balancing high-quality segmentation with rapid inference speeds, significantly contributing to advancements in the domain of computer vision.
