
MoodSmith: Enabling Mood-Consistent Multimedia for AI-Generated Advocacy Campaigns (2403.12356v1)

Published 19 Mar 2024 in cs.HC

Abstract: Emotion is vital to information and message processing, playing a key role in attitude formation. Consequently, creating a mood that evokes an emotional response is essential to any compelling piece of outreach communication. Many nonprofits and charities, despite having established messages, face challenges in creating advocacy campaign videos for social media. Producing such videos requires significant creative and cognitive effort to ensure that they achieve the desired mood across multiple dimensions: script, visuals, and audio. We introduce MoodSmith, an AI-powered system that helps users explore mood possibilities for their message and create advocacy campaigns that are mood-consistent across dimensions. To achieve this, MoodSmith uses emotive language and plotlines for scripts, artistic style and color palette for visuals, and positivity and energy for audio. Our studies show that MoodSmith can effectively achieve a variety of moods, and that the produced videos are consistent across media dimensions.
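The abstract describes a mood being propagated across three media dimensions: emotive language and plotlines for the script, artistic style and color palette for the visuals, and positivity and energy for the audio. A minimal, hypothetical sketch of that idea is shown below; it is not the authors' implementation. It assumes a mood is a point on a valence/energy plane (as in circumplex models of affect), and all names (`MoodSpec`, `compose_campaign`, the parameter values) are illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class MoodSpec:
    """A target mood as a point on a valence/energy plane (illustrative)."""
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    energy: float   # 0.0 (calm) .. 1.0 (energetic)

def script_params(mood: MoodSpec) -> dict:
    """Pick emotive language and plotline shape for the script dimension."""
    tone = "uplifting" if mood.valence > 0 else "somber"
    plot = "rising-action" if mood.energy > 0.5 else "reflective"
    return {"tone": tone, "plotline": plot}

def visual_params(mood: MoodSpec) -> dict:
    """Pick an artistic style and color palette for the visual dimension."""
    palette = "warm-bright" if mood.valence > 0 else "cool-muted"
    style = "dynamic-illustration" if mood.energy > 0.5 else "soft-watercolor"
    return {"palette": palette, "style": style}

def audio_params(mood: MoodSpec) -> dict:
    """Audio is parameterized directly by positivity and energy."""
    return {"positivity": mood.valence, "energy": mood.energy}

def compose_campaign(mood: MoodSpec) -> dict:
    """Derive all per-dimension parameters from one mood, so the
    resulting script, visuals, and audio stay mood-consistent."""
    return {
        "script": script_params(mood),
        "visuals": visual_params(mood),
        "audio": audio_params(mood),
    }

hopeful = MoodSpec(valence=0.8, energy=0.7)
print(compose_campaign(hopeful))
```

Deriving every dimension's parameters from a single `MoodSpec`, rather than letting users set each dimension independently, is what keeps the campaign consistent in this sketch.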
