Revealing the structure-property relationships of copper alloys with FAGC (2404.09515v2)

Published 15 Apr 2024 in cs.CV

Abstract: Understanding how the structure of materials affects their properties is a cornerstone of materials science and engineering. However, traditional methods have struggled to accurately describe the quantitative structure-property relationships for complex structures. In our study, we bridge this gap by leveraging machine learning to analyze images of materials' microstructures, thus offering a novel way to understand and predict the properties of materials based on their microstructures. We introduce a method known as FAGC (Feature Augmentation on Geodesic Curves), specifically demonstrated for Cu-Cr-Zr alloys. This approach utilizes machine learning to examine the shapes within images of the alloys' microstructures and predict their mechanical and electronic properties. This generative FAGC approach can effectively expand the relatively small training datasets due to the limited availability of materials images labeled with quantitative properties. The process begins with extracting features from the images using neural networks. These features are then mapped onto the Pre-shape space to construct the Geodesic curves. Along these curves, new features are generated, effectively increasing the dataset. Moreover, we design a pseudo-labeling mechanism for these newly generated features to further enhance the training dataset. Our FAGC method has shown remarkable results, significantly improving the accuracy of predicting the electronic conductivity and hardness of Cu-Cr-Zr alloys, with R-squared values of 0.978 and 0.998, respectively. These outcomes underscore the potential of FAGC to address the challenge of limited image data in materials science, providing a powerful tool for establishing detailed and quantitative relationships between complex microstructures and material properties.
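
The abstract sketches the FAGC pipeline: extract image features with a neural network, project them onto the pre-shape space, connect pairs of projected features by geodesic curves, sample new features along those curves, and assign pseudo-labels to the samples. The snippet below is a minimal illustrative sketch of that idea under standard Kendall pre-shape conventions (centering plus unit-norm scaling, great-circle geodesics on the hypersphere); the function names, the 512-dimensional feature size, and the linear pseudo-labeling rule are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def to_preshape(z: np.ndarray) -> np.ndarray:
    """Map a feature vector onto the pre-shape space: center it
    (remove translation) and scale to unit norm (remove size),
    following the usual Kendall shape-analysis convention."""
    z = z - z.mean()
    return z / np.linalg.norm(z)

def geodesic_features(x: np.ndarray, y: np.ndarray, n_points: int = 10) -> np.ndarray:
    """Generate augmented features at interior points of the
    great-circle geodesic joining two pre-shapes x and y."""
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))  # geodesic distance on the sphere
    ts = np.linspace(0.0, 1.0, n_points + 2)[1:-1]       # interior parameters only
    return np.stack([
        (np.sin((1 - t) * theta) * x + np.sin(t * theta) * y) / np.sin(theta)
        for t in ts
    ])

# Example: augment two CNN feature vectors (e.g., from microstructure images).
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=512), rng.normal(size=512)   # stand-ins for extracted features
x, y = to_preshape(f1), to_preshape(f2)
augmented = geodesic_features(x, y, n_points=8)

# Pseudo-labeling sketch (an assumption, not the paper's exact mechanism):
# interpolate the two source labels with the same geodesic parameter t.
label1, label2 = 120.0, 150.0                          # hypothetical hardness values
ts = np.linspace(0.0, 1.0, 8 + 2)[1:-1]
pseudo_labels = (1 - ts) * label1 + ts * label2
```

In this sketch the augmented features stay on the unit hypersphere by construction, so they remain valid pre-shapes; the paper's actual pseudo-labeling mechanism may differ from the simple linear interpolation shown here.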

Authors (5)
  1. Yuexing Han (10 papers)
  2. Guanxin Wan (2 papers)
  3. Bing Wang (246 papers)
  4. Yi Liu (545 papers)
  5. Tao Han (233 papers)
