Toward automatic comparison of visualization techniques: Application to graph visualization (1910.09477v2)

Published 21 Oct 2019 in cs.HC and cs.LG

Abstract: Many end-user evaluations of data visualization techniques have been run during the last decades. Their results are cornerstones to build efficient visualization systems. However, designing such an evaluation is always complex and time-consuming and may end in a lack of statistical evidence and reproducibility. We believe that modern and efficient computer vision techniques, such as deep convolutional neural networks (CNNs), may help visualization researchers to build and/or adjust their evaluation hypotheses. The basis of our idea is to train machine learning models on several visualization techniques to solve a specific task. Our assumption is that it is possible to compare the efficiency of visualization techniques based on the performance of their corresponding models. As current machine learning models are not able to strictly reflect human capabilities, including their imperfections, such results should be interpreted with caution. However, we think that using machine learning-based pre-evaluation, as a pre-process of standard user evaluations, should help researchers to perform a more exhaustive study of their design space. Thus, it should improve their final user evaluation by providing it with better test cases. In this paper, we present the results of two experiments we have conducted to assess how correlated the performance of users and computer vision techniques can be. That study compares two mainstream graph visualization techniques: node-link (NL) and adjacency-matrix (MD) diagrams. Using two well-known deep convolutional neural networks, we partially reproduced user evaluations from Ghoniem et al. and from Okoe et al. These experiments showed that some user evaluation results can be reproduced automatically.
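The core pre-evaluation idea, as described in the abstract, is to train the same CNN on images rendered with each visualization technique and use the resulting model accuracies as a proxy signal for comparing the techniques. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's actual setup: the directory layout (`data/node_link`, `data/adjacency_matrix`), the task framing as image classification, and all hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch: fine-tune one pretrained CNN per visualization technique
# on rendered diagrams (e.g. "are the two highlighted nodes connected?") and
# compare validation accuracies as an automatic pre-evaluation signal.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def train_and_evaluate(image_dir, epochs=5):
    # Assumed layout: image_dir/train/<label>/*.png and image_dir/val/<label>/*.png
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder(f"{image_dir}/train", transform=tf)
    val_set = datasets.ImageFolder(f"{image_dir}/val", transform=tf)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    # Pretrained backbone with a new classification head for the task labels.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

    # Validation accuracy is the per-technique performance we compare.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total


# Compare the two graph visualization techniques via their models' accuracies.
for technique in ["node_link", "adjacency_matrix"]:  # hypothetical directories
    acc = train_and_evaluate(f"data/{technique}")
    print(f"{technique}: validation accuracy = {acc:.3f}")
```

In this framing, a large accuracy gap between the two renderings would flag a hypothesis worth testing in a standard user study, in line with the paper's proposal to use such models as a pre-process rather than a replacement for user evaluations.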

Citations (23)
