
Palm: Predicting Actions through Language Models @ Ego4D Long-Term Action Anticipation Challenge 2023 (2306.16545v1)

Published 28 Jun 2023 in cs.CV

Abstract: We present Palm, a solution to the Long-Term Action Anticipation (LTA) task utilizing vision-language models and LLMs. Given an input video with annotated action periods, the LTA task aims to predict possible future actions. We hypothesize that an optimal solution should capture the interdependency between past and future actions, and be able to infer future actions based on the structure and dependency encoded in the past actions. LLMs have demonstrated remarkable commonsense-based reasoning ability. Inspired by that, Palm chains an image captioning model and an LLM. It predicts future actions based on frame descriptions and action labels extracted from the input videos. Our method outperforms other participants in the Ego4D LTA challenge and achieves the best performance in terms of action prediction. Our code is available at https://github.com/DanDoge/Palm
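The pipeline the abstract describes — frame captions plus past action labels assembled into a prompt for an LLM, whose output is parsed back into future actions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt format, function names, and 'verb noun' output convention are assumptions, and the actual captioning and LLM calls are elided.

```python
# Hedged sketch of a two-stage LTA pipeline in the style described by Palm:
# (1) an image-captioning model describes sampled frames (call elided here),
# (2) an LLM is prompted with the captions plus past action labels and asked
#     to continue the action sequence. All names/formats are illustrative.

def build_lta_prompt(frame_captions, past_actions, n_future=20):
    """Compose an LLM prompt from frame descriptions and observed action labels."""
    caption_block = "\n".join(f"- {c}" for c in frame_captions)
    action_block = ", ".join(f"{verb} {noun}" for verb, noun in past_actions)
    return (
        "Frame descriptions:\n"
        f"{caption_block}\n"
        f"Observed actions: {action_block}\n"
        f"Predict the next {n_future} actions as 'verb noun' pairs, comma-separated:"
    )

def parse_future_actions(llm_output):
    """Parse a comma-separated 'verb noun' completion into (verb, noun) tuples."""
    actions = []
    for item in llm_output.split(","):
        parts = item.strip().split()
        if len(parts) >= 2:
            actions.append((parts[0], " ".join(parts[1:])))
    return actions
```

In this sketch the LLM is used purely as a sequence continuer: the structure of past actions is carried entirely by the prompt, matching the paper's hypothesis that future actions can be inferred from the dependency encoded in past ones.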

Authors (4)
  1. Daoji Huang
  2. Otmar Hilliges
  3. Luc Van Gool
  4. Xi Wang

