TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen (2404.12407v1)
Published 16 Apr 2024 in cs.CV and cs.LG
Abstract: The era of pre-trained models has ushered in a wealth of new insights for the machine learning community. Among the myriad questions that arise, one of paramount importance is: 'Do pre-trained models possess comprehensive knowledge?' This paper seeks to address this question. To that end, we have made publicly available a novel dataset comprising images from TV series released after 2021. This dataset holds significant potential for research areas including the evaluation of incremental learning, novel class discovery, and long-tailed learning, among others. Project page: https://tv-100.github.io/
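The question the paper poses suggests a simple zero-shot probe: ask pre-trained CLIP to classify frames from post-2021 TV series and see whether its accuracy collapses on classes it cannot have seen. Below is a minimal sketch of such a probe, not the authors' code: it uses the public OpenAI CLIP checkpoint via Hugging Face `transformers`, and the class names, prompt template, and image path are illustrative placeholders.

```python
# Minimal zero-shot CLIP probe (sketch, not the TV100 authors' pipeline).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Hypothetical class names; TV100 itself covers 100 post-2021 series.
class_names = ["Series A", "Series B", "Series C"]
prompts = [f"a photo from the TV series {name}" for name in class_names]

image = Image.open("frame.jpg")  # placeholder path to one dataset image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_classes)
probs = logits.softmax(dim=-1)

print(class_names[probs.argmax().item()], probs.max().item())
```

If zero-shot accuracy on such post-2021 classes sits near chance while accuracy on older, well-covered classes stays high, that pattern would support the paper's premise that pre-trained CLIP's knowledge is not comprehensive.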
Authors: Da-Wei Zhou, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan