Bring Your Own Character: A Holistic Solution for Automatic Facial Animation Generation of Customized Characters (2402.13724v1)
Abstract: Animating virtual characters has long been a fundamental research problem in virtual reality (VR). Facial animations play a crucial role because they effectively convey the emotions and attitudes of virtual humans. However, creating such facial animations is challenging: current methods often require expensive motion capture devices or significant time and effort from human animators to tune animation parameters. In this paper, we propose a holistic solution for automatically animating virtual human faces. First, a deep learning model is trained to retarget facial expressions from input face images to virtual human faces by estimating blendshape coefficients; this approach offers the flexibility to generate animations for characters with different appearances and blendshape topologies. Second, a practical toolkit is developed in Unity 3D, making it compatible with the most popular VR applications. The toolkit accepts both images and videos as input to animate target virtual human faces, and it lets users manipulate the animation results. Furthermore, inspired by human-in-the-loop (HITL) principles, we leverage user feedback to further improve the model and toolkit, increasing customization to suit user preferences. The complete solution, for which we will release the code publicly, has the potential to accelerate the generation of facial animations for VR applications.
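The abstract's core mechanism, driving a character face from predicted blendshape coefficients, follows the standard linear blendshape model: the animated mesh is the neutral face plus a weighted sum of per-expression vertex offsets. The sketch below illustrates that formula only; it is not the paper's actual network or toolkit, and all names (`apply_blendshapes`, the toy mesh) are hypothetical.

```python
import numpy as np

def apply_blendshapes(neutral, deltas, coeffs):
    """Blend a neutral mesh with weighted blendshape offsets.

    neutral: (V, 3) vertex positions of the neutral face
    deltas:  (K, V, 3) per-blendshape offsets from the neutral face
    coeffs:  (K,) coefficients in [0, 1], as a retargeting model
             such as the one described in the abstract would predict
    """
    coeffs = np.clip(np.asarray(coeffs, dtype=float), 0.0, 1.0)
    # result = neutral + sum_k coeffs[k] * deltas[k]
    return neutral + np.tensordot(coeffs, deltas, axes=1)

# Toy example: a 1-vertex "mesh" with 2 blendshapes.
neutral = np.zeros((1, 3))
deltas = np.array([[[1.0, 0.0, 0.0]],   # blendshape 0 moves the vertex along x
                   [[0.0, 2.0, 0.0]]])  # blendshape 1 moves it along y
animated = apply_blendshapes(neutral, deltas, [0.5, 0.25])
print(animated)  # → [[0.5 0.5 0. ]]
```

Because the model outputs only the coefficient vector, the same predictions can drive any character whose rig exposes a compatible set of blendshapes, which is what makes the approach appearance-agnostic.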