TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen (2404.12407v1)

Published 16 Apr 2024 in cs.CV and cs.LG

Abstract: The era of pre-trained models has ushered in a wealth of new insights for the machine learning community. Among the myriad questions that arise, one of paramount importance is: 'Do pre-trained models possess comprehensive knowledge?' This paper seeks to address this crucial inquiry. In line with our objective, we have made publicly available a novel dataset comprising images from TV series released after 2021. This dataset holds significant potential for use in various research areas, including the evaluation of incremental learning, novel class discovery, and long-tailed learning, among others. Project page: https://tv-100.github.io/
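
The paper's central probe, whether pre-trained CLIP recognizes TV series released after its training data was collected, is in essence a zero-shot classification test. The sketch below shows what such an evaluation could look like using the Hugging Face transformers CLIP API; it is an illustration rather than the authors' released code, and the checkpoint, class names, and image path are placeholder assumptions (the actual TV100 class list is on the project page).

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Any pre-trained CLIP checkpoint works; ViT-B/16 is used here as an example.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
    model.eval()

    # Hypothetical class names and frame path; substitute the actual TV100 classes.
    class_names = ["The Bear", "Severance", "House of the Dragon"]
    prompts = [f"a still frame from the TV series {name}" for name in class_names]
    image = Image.open("tv100_frame.jpg")

    with torch.no_grad():
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        outputs = model(**inputs)
        # logits_per_image holds image-text similarity scores scaled by
        # CLIP's learned temperature; softmax turns them into class probabilities.
        probs = outputs.logits_per_image.softmax(dim=-1)

    print("Predicted series:", class_names[probs.argmax(dim=-1).item()])

If CLIP had genuinely seen these series during pre-training, zero-shot accuracy on such frames would be high; the paper's premise is that it is not, which is what makes TV100 useful for evaluating incremental learning and novel class discovery.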

Authors (4)
  1. Da-Wei Zhou (24 papers)
  2. Zhi-Hong Qi (3 papers)
  3. Han-Jia Ye (74 papers)
  4. De-Chuan Zhan (90 papers)
Citations (1)
