Meta-Objects: Interactive and Multisensory Virtual Objects Learned from the Real World for Use in Augmented Reality (2404.17179v3)

Published 26 Apr 2024 in cs.HC and cs.ET

Abstract: We introduce the concept of a meta-object, a next-generation virtual object that inherits the form, properties, and functions of its real-world counterpart, enabling seamless synchronization, interaction, and sharing between the physical and virtual worlds. While many of today's virtual objects provide some sensory feedback and dynamic behavior, meta-objects fully integrate interactive and multisensory features within a structured data framework to enable real-time immersive experiences in a post-metaverse intelligent simulation platform. Three key components underpin the use of meta-objects in the post-metaverse: property-embedded modeling for physical and action realism, adaptive multisensory feedback tailored to user interactions, and a scene graph-based intelligent simulation platform for scalable and efficient ecosystem integration. By leveraging meta-objects through wearable AR/VR devices, the post-metaverse facilitates seamless interactions that transcend spatial and temporal barriers, paving the way for a transformative reality-virtuality convergence.
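The three components named in the abstract suggest a layered data model: a scene-graph node that bundles learned geometry, embedded physical properties, and per-modality feedback handlers. The sketch below is purely illustrative and not from the paper; every class, field, and event name is a hypothetical assumption about how such a structure could be organized.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a meta-object as a scene-graph node.
# All names and fields are illustrative assumptions, not the paper's API.

@dataclass
class PhysicalProperties:
    # Property-embedded modeling: physics attributes carried by the object.
    mass_kg: float = 1.0
    friction: float = 0.5
    stiffness: float = 1.0

@dataclass
class MetaObject:
    name: str
    mesh_uri: str  # learned form, e.g. a mesh reconstructed from the real object
    properties: PhysicalProperties = field(default_factory=PhysicalProperties)
    # Adaptive multisensory feedback: one handler per modality (haptic, audio, ...).
    feedback: Dict[str, Callable[[str], None]] = field(default_factory=dict)
    children: List["MetaObject"] = field(default_factory=list)  # scene-graph edges

    def on_interaction(self, event: str) -> None:
        """Dispatch a user interaction to every registered modality,
        then propagate it down the scene graph."""
        for modality, handler in self.feedback.items():
            handler(event)
        for child in self.children:
            child.on_interaction(event)

# Usage: a mug that answers a 'grasp' event with haptic and audio feedback.
mug = MetaObject(name="mug", mesh_uri="assets/mug.glb")
mug.feedback["haptic"] = lambda e: print(f"haptic pulse for {e}")
mug.feedback["audio"] = lambda e: print(f"play contact sound for {e}")
mug.on_interaction("grasp")
```

Keeping physical properties and feedback handlers on the same node as the geometry is one way to read the paper's "structured data framework": the scene graph then serves both rendering and the simulation platform without a separate lookup layer.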
