fMRI Exploration of Visual Quality Assessment (2404.18162v1)

Published 28 Apr 2024 in cs.MM and q-bio.NC

Abstract: Despite significant strides in visual quality assessment, the neural mechanisms underlying visual quality perception remain insufficiently explored. This study employed fMRI to examine brain activity during image quality assessment and identify differences in human processing of images with varying quality. Fourteen healthy participants underwent tasks assessing both image quality and content classification while undergoing functional MRI scans. The collected behavioral data was statistically analyzed, and univariate and functional connectivity analyses were conducted on the imaging data. The findings revealed that quality assessment is a more complex task than content classification, involving enhanced activation in high-level cognitive brain regions for fine-grained visual analysis. Moreover, the research showed the brain's adaptability to different visual inputs, adopting different strategies depending on the input's quality. In response to high-quality images, the brain primarily uses specialized visual areas for precise analysis, whereas with low-quality images, it recruits additional resources including higher-order visual cortices and related cognitive and attentional networks to decode and recognize complex, ambiguous signals effectively. This study pioneers the intersection of neuroscience and image quality research, providing empirical evidence through fMRI linking image quality to neural processing. It contributes novel insights into the human visual system's response to diverse image qualities, thereby paving the way for advancements in objective image quality assessment algorithms.


Summary

  • The paper finds that assessing image quality recruits higher-order visual and cognitive brain regions beyond those engaged by simple content classification.
  • It employs fMRI combined with behavioral analysis to demonstrate that high- and low-quality images trigger distinct neural processing strategies.
  • These insights inform the development of advanced image quality algorithms and user-centric visual design improvements.

Understanding Brain Responses to Image Quality Through fMRI

The Study's Approach and Significance

Researchers used functional magnetic resonance imaging (fMRI) to examine how the brain processes and responds to images of varying quality. This paper stands out because it bridges visual neuroscience and image quality assessment, employing fMRI to probe the neural mechanisms behind evaluating image quality and classifying content. It marks a pioneering attempt to empirically link image quality with specific patterns of brain activity, combining behavioral data with univariate and functional connectivity analyses of the imaging data.
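The univariate analysis mentioned above typically rests on a general linear model (GLM): each voxel's BOLD time series is regressed onto task regressors, and a contrast between conditions tests for differential activation. Below is a minimal illustrative sketch with simulated data; the regressors, effect sizes, and random seed are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-voxel BOLD series (200 volumes) and a design matrix
# with an intercept plus two task regressors: quality assessment and
# content classification (random event onsets, purely illustrative).
n_vols = 200
quality_reg = (rng.random(n_vols) < 0.3).astype(float)
content_reg = (rng.random(n_vols) < 0.3).astype(float)
X = np.column_stack([np.ones(n_vols), quality_reg, content_reg])

# Simulate a voxel that responds more strongly during quality assessment.
beta_true = np.array([100.0, 2.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 1.0, n_vols)

# Ordinary least-squares fit, then a [quality - classification] contrast.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([0.0, 1.0, -1.0])
effect = contrast @ beta_hat  # close to the simulated 2.0 - 0.5 difference
print(f"quality - classification effect: {effect:.2f}")
```

In a real analysis the regressors would be convolved with a hemodynamic response function and the contrast would be tested voxel-wise across the brain; the sketch only shows the core estimation step.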

Key Findings from the Study

The research uncovers several intriguing aspects of how our brains process images of different qualities:

  • Enhanced Brain Activity for Quality Assessment: Compared with tasks in which subjects classify content, assessing image quality activates additional higher-order visual and cognitive brain regions, including areas known for fine-grained visual processing and those involved in cognitive control and decision-making. This suggests that evaluating image quality is a more demanding cognitive task than previously appreciated.
  • Distinct Processing Strategies for Different Image Qualities: High-quality images primarily activate specialized visual areas that perform detailed, precise analysis. In contrast, low-quality images prompt the brain to recruit additional cognitive and attentional resources, indicating a more complex, resource-intensive process for interpreting degraded or ambiguous input.
  • Neural Adaptation to Image Quality: Activation shifts dynamically across brain regions as image quality changes, suggesting that the brain adopts different processing strategies depending on the fidelity of the visual input.
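The functional connectivity analyses behind findings like these are commonly computed as Pearson correlations between ROI-averaged time series. The toy sketch below uses simulated data; the ROI names and the coupling between them are hypothetical stand-ins, not the paper's actual results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ROI time series (volumes x regions). The region labels are
# illustrative placeholders for areas of the visual and cognitive networks.
roi_names = ["V1", "LOC", "fusiform", "dlPFC"]
n_vols, n_rois = 200, len(roi_names)

# Inject a shared signal into V1 and LOC to mimic two coupled visual areas.
shared = rng.normal(size=n_vols)
ts = rng.normal(size=(n_vols, n_rois))
ts[:, 0] += 2.0 * shared  # V1
ts[:, 1] += 2.0 * shared  # LOC

# Functional connectivity: Pearson correlation between every pair of ROIs.
fc = np.corrcoef(ts, rowvar=False)
print(np.round(fc, 2))  # strong V1-LOC entry, near-zero elsewhere
```

Comparing such connectivity matrices between high- and low-quality image conditions is one standard way to quantify the kind of network-level reorganization the paper reports.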

Practical and Theoretical Implications

The findings from this research have significant implications for both theoretical understanding and practical applications:

  1. Advancement in Image Quality Algorithms: Insights into which brain areas are activated by different image qualities can inform the development of more advanced image quality assessment algorithms. This could improve how machines emulate human-like image quality evaluations.
  2. Enhanced User Experience Designs: Understanding brain responses to image quality can help designers create more effective visual materials that are easier to process and more aligned with human visual perception, potentially enhancing user experience.
  3. Deeper Insight into Visual Processing: The paper provides a deeper understanding of the neural basis of visual quality perception, adding a rich layer of knowledge to visual neuroscience.
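As a point of reference for how objective quality algorithms quantify degradation, the classic full-reference PSNR metric can be sketched in a few lines. This is a generic textbook metric, not the method proposed or used in the paper.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative example: a random 'reference' image and a noisy copy of it.
rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 10.0, size=ref.shape), 0, 255)
print(f"PSNR of noisy image: {psnr(ref, noisy):.1f} dB")
```

Neural evidence of the kind this paper provides is most relevant to perceptual metrics that go beyond pixel-wise error, but PSNR shows the baseline such metrics aim to improve on.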

Looking Forward: Speculations on Future Developments

Moving forward, the paper could inspire further research in several directions. Larger and more diverse fMRI datasets might be developed, incorporating a broader array of image qualities and more complex task designs. This could facilitate more detailed machine learning analyses and potentially lead to the development of real-time, brain-based evaluation systems for image quality assessment.

Moreover, as brain-imaging technology and image quality assessment algorithms evolve, a convergence of AI and neuroscience could yield systems that mimic human perceptual and cognitive processes ever more closely.

Conclusion

By leveraging fMRI, the paper unveils the nuanced ways our brains engage with visual stimuli of varying quality, highlighting a complex interplay between specialized visual processing and broader cognitive strategies. This has profound implications not only for neuroscientific theory but also for practical applications in technology and design, potentially guiding the next generation of imaging technologies and cognitive models in AI.