AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos? (2307.16368v3)
Abstract: Can we better anticipate an actor's future actions (e.g., mix eggs) by knowing what commonly happens after their current action (e.g., crack eggs)? What if we also know the actor's longer-term goal (e.g., making egg fried rice)? The long-term action anticipation (LTA) task aims to predict an actor's future behavior from video observations in the form of verb and noun sequences, and it is crucial for human-machine interaction. We propose to formulate the LTA task from two perspectives: a bottom-up approach that predicts the next actions autoregressively by modeling temporal dynamics, and a top-down approach that infers the actor's goal and plans the procedure needed to accomplish it. We hypothesize that large language models (LLMs), which have been pretrained on procedural text data (e.g., recipes, how-tos), can help LTA from both perspectives: they can provide prior knowledge about likely next actions, and they can infer the goal from the observed part of a procedure. To leverage LLMs, we propose a two-stage framework, AntGPT. It first recognizes the actions already performed in the observed videos, and then asks an LLM either to predict the future actions via conditioned generation, or to infer the goal and plan the whole procedure via chain-of-thought prompting. Empirical results on the Ego4D LTA v1 and v2 benchmarks, EPIC-Kitchens-55, and EGTEA GAZE+ demonstrate the effectiveness of the proposed approach. AntGPT achieves state-of-the-art performance on all of the above benchmarks, and qualitative analysis shows that it can successfully infer the goal and perform goal-conditioned "counterfactual" prediction. Code and models will be released at https://brown-palm.github.io/AntGPT
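To make the two-stage idea concrete, the sketch below illustrates how recognized (verb, noun) actions from stage one could be turned into either a bottom-up continuation prompt or a top-down goal-then-plan prompt for an LLM in stage two. This is a minimal illustration, not the paper's actual implementation: the prompt wording, the `Action` format, and the generic `llm` callable are assumptions made here for clarity.

```python
# Minimal sketch of the two-stage prompting idea described in the abstract.
# Prompt wording and the `llm` callable are illustrative assumptions,
# not AntGPT's actual prompts or model interface.
from typing import Callable, List, Tuple

Action = Tuple[str, str]  # (verb, noun) pairs, as in the Ego4D LTA label format


def bottom_up_prompt(observed: List[Action], num_future: int) -> str:
    """Ask the LLM to continue the action sequence autoregressively."""
    history = ", ".join(f"{v} {n}" for v, n in observed)
    return (
        f"Observed actions: {history}.\n"
        f"Predict the next {num_future} actions as 'verb noun' pairs, one per line."
    )


def top_down_prompt(observed: List[Action], num_future: int) -> str:
    """Ask the LLM to first infer the goal, then plan the remaining steps
    (a chain-of-thought style prompt)."""
    history = ", ".join(f"{v} {n}" for v, n in observed)
    return (
        f"Observed actions: {history}.\n"
        "First, state the most likely goal of the actor.\n"
        f"Then list the next {num_future} actions needed to reach that goal, "
        "as 'verb noun' pairs, one per line."
    )


def anticipate(llm: Callable[[str], str],
               observed: List[Action],
               num_future: int,
               top_down: bool = False) -> List[str]:
    """Stage 2: query the LLM with the actions recognized in stage 1."""
    prompt = (top_down_prompt if top_down else bottom_up_prompt)(observed, num_future)
    reply = llm(prompt)
    # Keep non-empty lines as candidate 'verb noun' predictions.
    return [line.strip() for line in reply.splitlines() if line.strip()][:num_future]


if __name__ == "__main__":
    # Stubbed LLM for demonstration; replace with a real completion API.
    fake_llm = lambda prompt: "crack eggs\nmix eggs\nadd rice\nstir rice"
    observed = [("wash", "rice"), ("crack", "eggs")]
    print(anticipate(fake_llm, observed, num_future=4, top_down=True))
```

In the top-down variant, replacing the inferred goal with a different one would yield the goal-conditioned "counterfactual" predictions mentioned in the abstract.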
Authors: Qi Zhao, Shijie Wang, Ce Zhang, Changcheng Fu, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, Chen Sun