Adaptoring: Adapter Generation to Provide an Alternative API for a Library (2401.07053v1)

Published 13 Jan 2024 in cs.SE

Abstract: Third-party libraries are a cornerstone of fast application development. To enable efficient use, libraries must provide a well-designed API. An obscure API instead slows down the learning process and can lead to erroneous use. The usual approach to improve the API of a library is to edit its code directly, either keeping the old API but deprecating it (temporarily increasing the API size) or dropping it (introducing breaking changes). If maintainers are unwilling to make such changes, others need to create a hard fork, which they can refactor. But then it is difficult to incorporate changes to the original library, such as bug fixes or performance improvements. In this paper, we instead explore the use of the adapter pattern to provide a new API as a new library that calls the original library internally. This allows the new library to leverage all implementation changes to the original library, at no additional cost. We call this approach adaptoring. To make the approach practical, we identify API transformations for which adapter code can be generated automatically, and investigate which transformations can be inferred automatically, based on the documentation and usage patterns of the original library. For cases where automated inference is not possible, we present a tool that lets developers manually specify API transformations. Finally, we consider the issue of migrating the generated adapters if the original library introduces breaking changes. We implemented our approach for Python, demonstrating its effectiveness to quickly provide an alternative API even for large libraries.
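To make the idea concrete, below is a minimal, hypothetical sketch of what an adapter-based alternative API could look like in Python (the language the paper targets). The wrapper function and its name (train_decision_tree) are illustrative assumptions, not the paper's generated code; only the scikit-learn calls are real. The point is that the new API delegates to the original library internally, so it automatically inherits bug fixes and performance improvements made there.

```python
# Hypothetical adapter ("adaptoring") sketch: a new library exposes an
# alternative, flatter API and calls the original library internally.
# Only the scikit-learn API used here is real; the adapter name is made up.

from sklearn.tree import DecisionTreeClassifier


def train_decision_tree(features, labels, max_depth=None):
    """Alternative API that adapts scikit-learn's construct-then-fit workflow
    into a single call, without modifying or forking the original library."""
    model = DecisionTreeClassifier(max_depth=max_depth)  # original API
    model.fit(features, labels)                           # original API
    return model


# Usage of the adapted API:
# model = train_decision_tree(X_train, y_train, max_depth=3)
# predictions = model.predict(X_test)
```

Because the adapter only calls the original library, regenerating or migrating it when the original API changes is a much smaller task than maintaining a hard fork.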

Authors (2)
  1. Lars Reimann (5 papers)
  2. Günter Kniesel-Wünsche (5 papers)
