
Learning with Delayed Rewards -- A case study on inverse defect design in 2D materials (2106.10557v1)

Published 19 Jun 2021 in cond-mat.mtrl-sci

Abstract: Defect dynamics in materials are of central importance to a broad range of technologies, from catalysis to energy storage systems to microelectronics. Material functionality depends strongly on the nature and organization of defects, and their arrangements often involve intermediate or transient states that present a high barrier for transformation. The lack of knowledge of these intermediate states and the presence of this energy barrier pose a serious challenge for inverse defect design, especially for gradient-based approaches. Here, we present a reinforcement learning approach based on Monte Carlo Tree Search (MCTS) with delayed rewards that allows for efficient search of the defect configurational space and identifies optimal defect arrangements in low-dimensional materials. Using a representative case of 2D MoS2, we demonstrate that the use of delayed rewards allows us to efficiently sample the defect configurational space and overcome the energy barrier for a wide range of defect concentrations (from 1.5% to 8% S vacancies): the system evolves from an initial random distribution of S vacancies to one with extended S line defects, consistent with previous experimental studies. Detailed analysis in the feature space allows us to identify the optimal pathways for this defect transformation and arrangement. Comparison with other global optimization schemes, such as genetic algorithms, suggests that MCTS with delayed rewards requires fewer evaluations and arrives at higher-quality solutions. The implications of the various sampled defect configurations for the 2H to 1T phase transition in MoS2 are discussed. Overall, we introduce a reinforcement learning (RL) strategy employing delayed rewards that can accelerate the inverse design of defects in materials for achieving targeted functionality.
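
The abstract's central idea, crediting an entire sequence of defect moves only after the episode ends rather than scoring each move greedily, can be illustrated with a minimal MCTS sketch. The toy square lattice, the vacancy-hop move set, and the `surrogate_score` adjacency objective below are illustrative placeholders of my own, not the MoS2 energy model, defect concentrations, or implementation used in the paper.

```python
# Minimal sketch of Monte Carlo Tree Search with delayed rewards for
# rearranging vacancies on a toy 2D lattice. The lattice, move set, and
# surrogate objective are placeholders, NOT the paper's MoS2 evaluator.
import math
import random

LATTICE = 16          # toy square lattice (LATTICE x LATTICE sites)
N_VACANCIES = 12      # ~4.7% vacancy concentration on this toy lattice
ROLLOUT_DEPTH = 20    # moves per episode before the delayed reward is assigned
C_UCT = 1.4           # exploration constant in the UCT formula

def neighbors(site):
    """4-connected neighbours with periodic boundaries (toy geometry)."""
    x, y = site
    return [((x + 1) % LATTICE, y), ((x - 1) % LATTICE, y),
            (x, (y + 1) % LATTICE), (x, (y - 1) % LATTICE)]

def surrogate_score(vacancies):
    """Placeholder objective: counts vacancy-vacancy adjacencies, loosely
    mimicking the tendency toward extended line defects. A real study would
    call an energy model (e.g. DFT or a force field) here instead."""
    vac = set(vacancies)
    return sum(1 for s in vac for n in neighbors(s) if n in vac) / 2.0

def legal_moves(vacancies):
    """A move hops one vacancy to an adjacent non-vacant site."""
    vac = set(vacancies)
    return [(s, n) for s in vac for n in neighbors(s) if n not in vac]

def apply_move(vacancies, move):
    src, dst = move
    new = set(vacancies)
    new.remove(src)
    new.add(dst)
    return frozenset(new)

class Node:
    def __init__(self, state, parent=None):
        self.state = state          # frozenset of vacancy sites
        self.parent = parent
        self.children = {}          # move -> Node
        self.visits = 0
        self.value = 0.0            # accumulated delayed reward

    def uct_select(self):
        return max(self.children.values(),
                   key=lambda c: c.value / (c.visits + 1e-9)
                   + C_UCT * math.sqrt(math.log(self.visits + 1) / (c.visits + 1e-9)))

def mcts(root_state, n_iterations=2000):
    root = Node(root_state)
    best_state, best_score = root_state, surrogate_score(root_state)
    for _ in range(n_iterations):
        # 1) Selection: descend through fully expanded nodes by UCT.
        node = root
        while node.children and all(m in node.children for m in legal_moves(node.state)):
            node = node.uct_select()
        # 2) Expansion: add one untried move as a new child.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            move = random.choice(untried)
            child = Node(apply_move(node.state, move), parent=node)
            node.children[move] = child
            node = child
        # 3) Rollout: random moves; the reward is computed only at the end (delayed).
        state = node.state
        for _ in range(ROLLOUT_DEPTH):
            state = apply_move(state, random.choice(legal_moves(state)))
        reward = surrogate_score(state)
        if reward > best_score:
            best_state, best_score = state, reward
        # 4) Backpropagation: credit the single delayed reward to the whole path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best_state, best_score

if __name__ == "__main__":
    random.seed(0)
    sites = [(x, y) for x in range(LATTICE) for y in range(LATTICE)]
    start = frozenset(random.sample(sites, N_VACANCIES))
    final, score = mcts(start)
    print(f"initial score: {surrogate_score(start):.1f}, final score: {score:.1f}")
```

Swapping `surrogate_score` for a force-field or first-principles evaluator, and the hop moves for physically motivated vacancy rearrangements, would bring this sketch closer to the workflow the abstract describes; the delayed-reward structure (one evaluation per episode, backpropagated over the whole move sequence) is the part the paper emphasizes.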
