
AutoLegend: A User Feedback-Driven Adaptive Legend Generator for Visualizations (2407.16331v1)

Published 23 Jul 2024 in cs.HC

Abstract: We propose AutoLegend, a system that generates interactive visualization legends using online learning with user feedback. AutoLegend accurately extracts symbols and channels from visualizations and then generates high-quality legends. AutoLegend enables two-way interactions between legends and visualizations, including highlighting, filtering, data retrieval, and retargeting. After analyzing visualization legends from IEEE VIS papers over the past 20 years, we summarized the design space and evaluation metrics for legend design in visualizations, particularly charts. The generation process consists of three interrelated components: a legend search agent, a feedback model, and an adversarial loss model. The search agent determines suitable legend solutions by exploring the design space and receives guidance from the feedback model through scalar scores. The feedback model is continuously updated by the adversarial loss model based on user input. A user study revealed that AutoLegend can learn users' preferences through legend editing.
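The three-component loop described in the abstract — a search agent exploring the design space, a feedback model emitting scalar scores, and an update driven by user edits — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the design space, the linear scoring model, and the edit-based update rule are all simplifying assumptions made here for clarity (the paper's feedback model is trained via an adversarial loss, approximated below by rewarding user-edited choices and penalizing rejected ones).

```python
import random

# Hypothetical, simplified design space: each legend candidate is a dict
# of categorical choices. The attribute names and values are illustrative.
DESIGN_SPACE = {
    "position": ["top", "bottom", "left", "right"],
    "orientation": ["horizontal", "vertical"],
    "symbol_size": ["small", "medium", "large"],
}

class FeedbackModel:
    """Scores legend candidates; continuously updated from user edits."""
    def __init__(self):
        # One weight per (attribute, value) pair, initially neutral.
        self.weights = {(a, v): 0.0
                        for a, vs in DESIGN_SPACE.items() for v in vs}

    def score(self, candidate):
        # Scalar score that guides the search agent.
        return sum(self.weights[(a, v)] for a, v in candidate.items())

    def update(self, shown, edited, lr=0.1):
        # Edit-driven signal standing in for the adversarial loss:
        # reward the user's edited choices, penalize the rejected ones.
        for attr in DESIGN_SPACE:
            if shown[attr] != edited[attr]:
                self.weights[(attr, shown[attr])] -= lr
                self.weights[(attr, edited[attr])] += lr

def search_agent(model, n_samples=200, rng=random):
    """Explore the design space; return the highest-scoring candidate."""
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        cand = {a: rng.choice(vs) for a, vs in DESIGN_SPACE.items()}
        s = model.score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best
```

In use, each round would show the user the agent's best candidate, record any edits, and call `update`, so subsequent searches drift toward the user's preferred legend style — the online-learning behavior the user study evaluates.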

