Study of Subjective and Objective Quality Assessment of Mobile Cloud Gaming Videos (2305.17260v1)

Published 26 May 2023 in cs.CV and cs.MM

Abstract: We present the outcomes of a recent large-scale subjective study of Mobile Cloud Gaming Video Quality Assessment (MCG-VQA) on a diverse set of gaming videos. Rapid advancements in cloud services, faster video encoding technologies, and increased access to high-speed, low-latency wireless internet have all contributed to the exponential growth of the Mobile Cloud Gaming industry. Consequently, the development of methods to assess the quality of real-time video feeds delivered to end-users of cloud gaming platforms has become increasingly important. However, due to the lack of a large-scale public Mobile Cloud Gaming Video dataset containing a diverse set of distorted videos with corresponding subjective scores, there has been limited work on the development of MCG-VQA models. To accelerate progress toward these goals, we created a new dataset, named the LIVE-Meta Mobile Cloud Gaming (LIVE-Meta-MCG) video quality database, composed of 600 landscape and portrait gaming videos, on which we collected 14,400 subjective quality ratings in an in-lab subjective study. Additionally, to demonstrate the usefulness of the new resource, we benchmarked multiple state-of-the-art VQA algorithms on the database. The new database will be made publicly available on our website: https://live.ece.utexas.edu/research/LIVE-Meta-Mobile-Cloud-Gaming/index.html
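
The abstract describes collecting 14,400 subjective ratings over 600 videos (24 ratings per video on average) and benchmarking VQA algorithms against them. Below is a minimal sketch of how such a benchmark is conventionally scored: per-video ratings are aggregated into Mean Opinion Scores (MOS), and a model's predictions are evaluated by Spearman (SROCC) and Pearson (PLCC) correlation against the MOS. The rating matrix and prediction vector here are synthetic placeholders, not the paper's actual data or any specific model from the study.

```python
# Sketch of a standard subjective-VQA benchmarking workflow.
# All data below is synthetic and illustrative only.
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)

# Hypothetical ratings: 600 videos x 24 subjects (14,400 ratings / 600
# videos = 24 per video), on a continuous 0-100 quality scale.
ratings = rng.uniform(0, 100, size=(600, 24))

# Mean Opinion Score per video. (Real studies often also z-score ratings
# per subject and reject outlier raters first; omitted here for brevity.)
mos = ratings.mean(axis=1)

# Stand-in for a VQA model's per-video quality predictions: the true MOS
# plus noise, so the correlations below come out high but imperfect.
predictions = mos + rng.normal(0, 5, size=mos.shape)

# Standard benchmark statistics: SROCC measures prediction monotonicity,
# PLCC measures linear agreement with the subjective scores.
srocc, _ = spearmanr(mos, predictions)
plcc, _ = pearsonr(mos, predictions)
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```

In published comparisons, PLCC is typically computed after fitting a monotonic logistic mapping from predictions to MOS; the raw Pearson correlation above is the simplest variant.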
