Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use (2403.02626v2)

Published 5 Mar 2024 in cs.CV and cs.LG

Abstract: From content moderation to wildlife conservation, the number of applications that require models to recognize nuanced or subjective visual concepts is growing. Traditionally, developing classifiers for such concepts requires substantial manual effort, measured in hours, days, or even months, to identify and annotate the data needed for training. Even with recently proposed Agile Modeling techniques, which enable rapid bootstrapping of image classifiers, users still spend 30 minutes or more of monotonous, repetitive data labeling just to train a single classifier. Drawing on Fiske's Cognitive Miser theory, we propose a new framework that alleviates manual effort by replacing human labeling with natural language interactions, reducing the total effort required to define a concept by an order of magnitude: from labeling 2,000 images to labeling only 100 plus some natural language interactions. Our framework leverages recent advances in foundation models, both large language models (LLMs) and vision-language models (VLMs), to carve out the concept space through conversation and by automatically labeling training data points. Most importantly, our framework eliminates the need for crowd-sourced annotations. Moreover, it ultimately produces lightweight classification models that are deployable in cost-sensitive scenarios. Across 15 subjective concepts and 2 public image classification datasets, our trained models outperform traditional Agile Modeling as well as state-of-the-art zero-shot classification models like ALIGN, CLIP, CuPL, and large visual question-answering models like PaLI-X.
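The pipeline the abstract describes can be sketched in miniature: a foundation model acts as an automatic annotator over an unlabeled image pool, and its labels are then distilled into a small, cheap-to-deploy classifier. The sketch below is illustrative only and makes loud assumptions: `vlm_auto_label` is a hypothetical stand-in for the paper's LLM/VLM tool-use (here a simple threshold rule), and image embeddings are simulated random vectors rather than real CLIP/ALIGN features.

```python
import math
import random

def vlm_auto_label(image_features, concept_threshold=0.5):
    # Hypothetical stand-in for the VLM tool call. In the real framework,
    # an LLM refines the concept through conversation and invokes vision
    # tools (VQA, captioning) to decide whether each image is a positive.
    return 1 if image_features[0] > concept_threshold else 0

def train_lightweight_classifier(X, y, epochs=200, lr=0.1):
    # Single-neuron logistic classifier: the cheap, deployable model
    # the framework distills the foundation-model labels into.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid activation
            g = p - yi                            # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Simulated image embeddings (in practice: frozen VLM features).
random.seed(0)
images = [[random.random(), random.random()] for _ in range(100)]

# Step 1: the VLM tool labels the pool automatically (no crowd workers).
labels = [vlm_auto_label(x) for x in images]

# Step 2: distill into a lightweight classifier for deployment.
w, b = train_lightweight_classifier(images, labels)
accuracy = sum(predict(w, b, x) == y
               for x, y in zip(images, labels)) / len(images)
```

On this toy, linearly separable labeling rule the distilled classifier recovers the concept boundary almost exactly; the paper's contribution is producing the labels themselves without the human effort, not the distillation step.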

References (62)
  1. Falcon-40B: an open large language model with state-of-the-art performance. 2023.
  2. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
  3. Language models are few-shot learners. NeurIPS, 33:1877–1901, 2020.
  4. Towards end-to-end embodied decision making with multi-modal large language model: Explorations with gpt4-vision and beyond. In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023.
  5. Pali-x: On scaling up a multilingual vision and language model. arXiv preprint arXiv:2305.18565, 2023.
  6. Pali-3 vision language models: Smaller, faster, stronger. arXiv preprint arXiv:2310.09199, 2023.
  7. Pali: A jointly-scaled multilingual language-image model. In ICLR, 2022.
  8. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
  9. Smart content recognition from images using a mixture of convolutional neural networks. In IT Convergence and Security 2017: Volume 1, pages 11–18. Springer, 2017.
  10. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
  11. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186, 2019.
  12. Social cognition. Mcgraw-Hill Book Company, 1991.
  13. Gottlob Frege et al. Begriffsschrift, a formula language, modeled upon that of arithmetic, for pure thought. From Frege to Gödel: A source book in mathematical logic, 1931:1–82, 1879.
  14. Bias and fairness in large language models: A survey. arXiv preprint arXiv:2309.00770, 2023.
  15. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, pages 6904–6913, 2017.
  16. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
  17. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  18. Tool documentation enables zero-shot tool-usage with large language models. arXiv preprint arXiv:2308.00675, 2023.
  19. Avis: Autonomous visual information seeking with large language models. arXiv preprint arXiv:2306.08129, 2023.
  20. Reveal: Retrieval-augmented visual-language pre-training with multi-source multimodal knowledge memory. In CVPR, pages 23369–23379, 2023.
  21. Action genome: Actions as compositions of spatio-temporal scene graphs. In CVPR, pages 10236–10247, 2020.
  22. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, pages 4904–4916, 2021.
  23. Graph-rise: Graph-regularized image semantic embedding. arXiv preprint arXiv:1902.10814, 2019.
  24. Understanding and predicting image memorability at a large scale. ICCV, pages 2390–2398, 2015.
  25. The hateful memes challenge: Detecting hate speech in multimodal memes. NeurIPS, 33:2611–2624, 2020.
  26. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32–73, 2017.
  27. The open images dataset v4. ICCV, 128:1956–1981, 2020.
  28. Lmeye: An interactive perception network for large language models. arXiv preprint arXiv:2305.03701, 2023.
  29. Microsoft coco: Common objects in context. In ECCV, pages 740–755, 2014.
  30. Decoupled weight decay regularization. In ICLR, 2018.
  31. Visual relationship detection with language priors. In ECCV, pages 852–869, 2016.
  32. Hierarchical question-image co-attention for visual question answering. NeurIPS, 29, 2016.
  33. Chameleon: Plug-and-play compositional reasoning with large language models. NeurIPS, 36, 2024.
  34. Crepe: Can vision-language foundation models reason compositionally? In CVPR, pages 10910–10921, 2023.
  35. George A Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2):81, 1956.
  36. OpenAI. Gpt-4 technical report, 2023.
  37. OpenAI. Gpt-4v(ision) system card. https://cdn.openai.com/papers/GPTV_System_Card.pdf, 2023. Accessed: 2023-11-15.
  38. Training language models to follow instructions with human feedback. NeurIPS, 35:27730–27744, 2022.
  39. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023.
  40. What does a platypus look like? generating customized prompts for zero-shot image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15691–15701, 2023.
  41. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.
  42. Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763, 2021.
  43. Language models are unsupervised multitask learners. OpenAI blog, 1:9, 2019.
  44. Automated detection of substance use-related social media posts based on image and text analysis. In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), pages 772–779. IEEE, 2017.
  45. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
  46. Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS, 35:25278–25294, 2022.
  47. Burr Settles. Active learning literature survey. 2009.
  48. How much can clip benefit vision-and-language tasks? In ICLR, 2021.
  49. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face. NeurIPS, 36, 2024.
  50. Agile modeling: Image classification with domain experts in the loop. ICCV, 2023.
  51. Vipergpt: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023.
  52. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, 2019.
  53. Multi-modal dual-tower architectures for entity retrieval from image and text. In CVPRW, 2023.
  54. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  55. Non-intrusive adaptation: Input-centric parameter-efficient fine-tuning for versatile multimodal modeling. arXiv preprint arXiv:2310.12100, 2023.
  56. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 35:24824–24837, 2022.
  57. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.
  58. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.
  59. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023.
  60. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
  61. Idealgpt: Iteratively decomposing vision and language reasoning via large language models. arXiv preprint arXiv:2305.14985, 2023.
  62. Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917, 2022.
Authors (13)
  1. Imad Eddine Toubal
  2. Aditya Avinash
  3. Neil Gordon Alldrin
  4. Jan Dlabal
  5. Wenlei Zhou
  6. Enming Luo
  7. Otilia Stretcu
  8. Hao Xiong
  9. Chun-Ta Lu
  10. Howard Zhou
  11. Ranjay Krishna
  12. Ariel Fuxman
  13. Tom Duerig
Citations (5)