MMViT: Multiscale Multiview Vision Transformers (2305.00104v1)

Published 28 Apr 2023 in cs.CV, eess.AS, and eess.IV

Abstract: We present Multiscale Multiview Vision Transformers (MMViT), which introduces multiscale feature maps and multiview encodings to transformer models. Our model encodes different views of the input signal and builds several channel-resolution feature stages to process the multiple views of the input at different resolutions in parallel. At each scale stage, we use a cross-attention block to fuse information across different views. This enables the MMViT model to acquire complex high-dimensional representations of the input at different resolutions. The proposed model can serve as a backbone model in multiple domains. We demonstrate the effectiveness of MMViT on audio and image classification tasks, achieving state-of-the-art results.
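The core mechanism described above — queries from one view attending to keys and values from another view at each scale stage — can be illustrated with a minimal, framework-free sketch. This is not the authors' implementation; it is a single-head, unprojected cross-attention between two hypothetical views `view_a` and `view_b`, with all dimensions chosen for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_view, key_value_view, d_k):
    # tokens of one view attend over tokens of the other view,
    # so the output mixes information across views
    scores = query_view @ key_value_view.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ key_value_view

rng = np.random.default_rng(0)
d = 8                                        # embedding dim (illustrative)
view_a = rng.standard_normal((4, d))         # 4 tokens from view A
view_b = rng.standard_normal((6, d))         # 6 tokens from view B

# view A's tokens, enriched with information from view B
fused_a = cross_attention(view_a, view_b, d)
print(fused_a.shape)                         # (4, 8)
```

In MMViT this fusion happens at every channel-resolution stage, so each view's representation is repeatedly refined with information from the other views as resolution decreases and channel depth grows.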

Authors (11)
  1. Yuchen Liu (156 papers)
  2. Natasha Ong (1 paper)
  3. Kaiyan Peng (6 papers)
  4. Bo Xiong (84 papers)
  5. Qifan Wang (129 papers)
  6. Rui Hou (56 papers)
  7. Madian Khabsa (38 papers)
  8. Kaiyue Yang (2 papers)
  9. David Liu (32 papers)
  10. Donald S. Williamson (12 papers)
  11. Hanchao Yu (17 papers)
Citations (4)