Computational toolkit for predicting thickness of 2D materials using machine learning and autogenerated dataset by large language model
Abstract: The thickness of 2D materials not only plays a crucial role in determining the performance of nanoelectronic and optoelectronic devices but also introduces complexities in predicting volume-dependent properties such as energy storage capacity, due to the intrinsic vacuum within these materials. Although a plethora of experimental techniques, including but not limited to optical contrast, Raman spectroscopy, nonlinear optical spectroscopy, near-field optical imaging, and hyperspectral imaging, facilitate the measurement of 2D material thickness, comprehensive data for many materials remain elusive. Over the last decade, the exponential proliferation of 2D materials and their heterostructures has outstripped the capabilities of conventional experimental and computational approaches. In this evolving landscape, machine learning (ML) has emerged as an indispensable tool, offering novel avenues to augment these traditional methodologies. Addressing this critical gap, we introduce THICK2D (Thickness Hierarchy Inference and Calculation Kit for 2D Materials). This Python-based computational framework harnesses an autogenerated thickness database, developed using large language models (LLMs), and advanced ML algorithms to facilitate the rapid and scalable estimation of material thickness, relying solely on crystallographic data. To demonstrate the utility and robustness of THICK2D, we successfully employed the toolkit to predict the thickness of more than 8,000 2D materials sourced from two extensive 2D material databases. THICK2D is disseminated as an open-source utility, accessible on GitHub at https://github.com/gmp007/THICK2D and archived on Zenodo at https://doi.org/10.5281/zenodo.11216648.
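The abstract's core workflow, regressing thickness against crystallographic descriptors, can be sketched as follows. This is an illustrative mock-up, not THICK2D's actual API: the features (lattice constants, mean atomic number, sublayer count) and the synthetic labels are assumptions chosen only to make the pipeline runnable end to end.

```python
# Illustrative sketch (not THICK2D's actual API): predict 2D material
# thickness from simple crystallographic descriptors with a tree-ensemble
# regressor, mirroring the workflow described in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-material features: in-plane lattice constants a and b (Å),
# mean atomic number, and number of atomic sublayers in the unit cell.
n = 200
X = np.column_stack([
    rng.uniform(2.4, 4.5, n),          # a (Å)
    rng.uniform(2.4, 4.5, n),          # b (Å)
    rng.uniform(5.0, 80.0, n),         # mean atomic number Z
    rng.integers(1, 4, n).astype(float),  # atomic sublayers
])
# Synthetic thickness labels (Å): grow with Z and sublayer count, plus noise.
y = 1.5 + 0.02 * X[:, 2] + 1.8 * X[:, 3] + rng.normal(0.0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out set: {model.score(X_te, y_te):.2f}")
```

In practice, the toolkit would derive such descriptors from real crystallographic files and train on the LLM-curated thickness database rather than synthetic labels; the gradient-boosting choice here is simply one common regressor for small tabular materials datasets.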