
Computational toolkit for predicting thickness of 2D materials using machine learning and autogenerated dataset by large language model

Published 24 May 2024 in cond-mat.mtrl-sci and cond-mat.str-el | arXiv:2405.15131v1

Abstract: The thickness of 2D materials not only plays a crucial role in determining the performance of nanoelectronic and optoelectronic devices but also introduces complexities in predicting volume-dependent properties such as energy storage capacity, due to the intrinsic vacuum within these materials. Although a plethora of experimental techniques, including but not limited to optical contrast, Raman spectroscopy, nonlinear optical spectroscopy, near-field optical imaging, and hyperspectral imaging, facilitate the measurement of 2D material thickness, comprehensive data for many materials remains elusive. Over the last decade, the exponential proliferation of 2D materials and their heterostructures has outstripped the capabilities of conventional experimental and computational approaches. In this evolving landscape, machine learning (ML) has emerged as an indispensable tool, offering novel avenues to augment these traditional methodologies. Addressing this critical gap, we introduce THICK2D - Thickness Hierarchy Inference and Calculation Kit for 2D Materials. This Python-based computational framework harnesses an autogenerated thickness database, developed using large language models (LLMs), and advanced ML algorithms to facilitate the rapid and scalable estimation of material thickness, relying solely on crystallographic data. To demonstrate the utility and robustness of THICK2D, we successfully employed the toolkit to predict the thickness of more than 8,000 2D materials sourced from two extensive 2D material databases. THICK2D is disseminated as an open-source utility, accessible on GitHub at https://github.com/gmp007/THICK2D and archived on Zenodo at https://doi.org/10.5281/zenodo.11216648.
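The abstract outlines the core workflow: descriptors derived from crystallographic data are used to train an ML regressor on a thickness database autogenerated with LLMs, and the trained model then estimates the thickness of unseen 2D materials. The sketch below illustrates that general idea only; the descriptor set, the synthetic training data, and the gradient-boosting model are illustrative assumptions and do not reproduce the actual THICK2D implementation or its API.

```python
# Minimal sketch of the workflow described in the abstract (assumptions noted):
# 1) represent each 2D material by simple crystallographic descriptors,
# 2) train an ML regressor on a thickness dataset,
# 3) predict thickness for held-out materials.
# The descriptors, synthetic data, and model choice are illustrative, not THICK2D's.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical crystallographic descriptors per material:
# [a, b, c lattice constants (Angstrom), cell volume (Angstrom^3), atomic layer count]
X = rng.uniform(low=[2.0, 2.0, 10.0, 50.0, 1.0],
                high=[6.0, 6.0, 40.0, 600.0, 5.0],
                size=(500, 5))

# Synthetic "thickness" target (Angstrom), standing in for the
# LLM-autogenerated thickness database used to train the real toolkit.
y = 0.3 * X[:, 2] + 1.5 * X[:, 4] + rng.normal(scale=0.2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a regression model on the thickness data.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out materials.
pred = model.predict(X_test)
print(f"MAE on held-out set: {mean_absolute_error(y_test, pred):.3f} Angstrom")
```

In practice, such descriptors would be computed from structure files (for example CIF or POSCAR inputs parsed with a library such as pymatgen or ASE), and the training targets would come from the LLM-curated thickness database described in the paper rather than synthetic values.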

