FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches (2103.14539v4)

Published 26 Mar 2021 in cs.LG, cs.HC, and stat.ML

Abstract: The machine learning (ML) life cycle involves a series of iterative steps, from the effective gathering and preparation of data, including complex feature engineering processes, to the presentation and improvement of results, with various algorithms to choose from at every step. Feature engineering in particular can be highly beneficial for ML, leading to improvements such as boosted predictive performance, lower computational cost, reduced noise, and greater transparency behind the decisions taken during training. Yet while several visual analytics tools exist to monitor and control the different stages of the ML life cycle (especially those related to data and algorithms), support for feature engineering remains inadequate. In this paper, we present FeatureEnVi, a visual analytics system specifically designed to assist with the feature engineering process. Our proposed system helps users to choose the most important features, to transform the original features into powerful alternatives, and to experiment with different feature generation combinations. Additionally, data space slicing allows users to explore the impact of features on both local and global scales. FeatureEnVi utilizes multiple automatic feature selection techniques; furthermore, it visually guides users with statistical evidence about the influence of each feature (or subset of features). The final outcome is the extraction of heavily engineered features, evaluated with multiple validation metrics. The usefulness and applicability of FeatureEnVi are demonstrated with two use cases and a case study. We also report feedback from interviews with two ML experts and a visualization researcher who assessed the effectiveness of our system.
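
To make the workflow concrete, below is a minimal sketch of the kind of pipeline the abstract describes: rank features with an automatic selection technique, add them stepwise while cross-validated performance improves, generate transformed feature combinations, and compare a feature's influence globally versus within a data slice. This is an illustrative approximation, not FeatureEnVi's implementation; the dataset, ranking criterion, classifier, and degree-2 feature generation are all assumptions chosen for brevity.

```python
# A minimal sketch (not FeatureEnVi itself) of stepwise feature selection,
# feature generation, and data-space slicing, using scikit-learn.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures

X, y = load_wine(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 1. Rank features with one automatic selection technique (a univariate
#    mutual-information filter); a real tool would combine several.
global_mi = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(global_mi)[::-1]

# 2. Stepwise forward selection: keep a feature only if it improves
#    cross-validated accuracy.
selected, best = [], 0.0
for idx in ranking:
    score = cross_val_score(clf, X[:, selected + [idx]], y, cv=5).mean()
    if score > best:
        selected, best = selected + [idx], score

# 3. Semi-automatic feature generation: derive degree-2 combinations
#    (squares and pairwise products) of the selected features and
#    re-evaluate the engineered feature set.
X_gen = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X[:, selected])
gen_score = cross_val_score(clf, X_gen, y, cv=5).mean()

# 4. Local vs. global impact: slice the data space (here, above the
#    median of the top feature) and recompute the relevance statistic
#    inside the slice.
mask = X[:, selected[0]] > np.median(X[:, selected[0]])
local_mi = mutual_info_classif(X[mask], y[mask], random_state=0)

print(f"selected: {selected}, CV accuracy: {best:.3f}")
print(f"with generated features: {gen_score:.3f}")
print(f"top feature's MI, global vs. slice: "
      f"{global_mi[selected[0]]:.3f} vs. {local_mi[selected[0]]:.3f}")
```

In the system itself, each of these steps is driven interactively through visualizations rather than by a single automated loop: the statistical evidence (feature importances, per-slice influence) is what the views surface so users can accept, transform, or combine features themselves.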
