Feature representations useful for predicting image memorability (2303.07679v2)

Published 14 Mar 2023 in cs.CV, cs.LG, and eess.IV

Abstract: Prediction of image memorability has attracted interest in various fields. Consequently, the prediction accuracy of convolutional neural network (CNN) models has been approaching the empirical upper bound estimated from human consistency. However, identifying which feature representations embedded in CNN models are responsible for the high memorability prediction accuracy remains an open question. To tackle this problem, we sought to identify memorability-related feature representations in CNN models using brain similarity. Specifically, memorability prediction accuracy and brain similarity were examined across 16,860 layers in 64 CNN models pretrained for object recognition. This comprehensive analysis revealed a clear tendency: layers with high memorability prediction accuracy also had high brain similarity with the inferior temporal (IT) cortex, the highest stage of the ventral visual pathway. Furthermore, fine-tuning the 64 CNN models for memorability prediction revealed that brain similarity with the IT cortex at the penultimate layer correlated positively with the models' memorability prediction accuracy. This analysis also showed that the best fine-tuned model achieved accuracy comparable to state-of-the-art CNN models developed for memorability prediction. Overall, the results indicate that the success of CNN models in predicting memorability relies on acquiring feature representations similar to those of the IT cortex. This study advances our understanding of feature representations and their use in predicting image memorability.
