Machine learning of microstructure--property relationships in materials leveraging microstructure representation from foundational vision transformers
Machine learning of microstructure--property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on the development of task-specific models for each microstructure--property relationship. We propose utilizing pre-trained foundational vision transformers for the extraction of task-agnostic microstructure features and subsequent lightweight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies on machine learning: (i) elastic modulus of two-phase microstructures based on simulation data; and (ii) Vickers hardness of Ni-base and Co-base superalloys based on experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure--property relationships without the need for expensive task-specific training or fine-tuning of bespoke deep learning models.
Sheila E. Whitman, Marat I. Latypov
Subjects: Computational techniques in materials science; computer technology
Sheila E. Whitman, Marat I. Latypov. Machine learning of microstructure--property relationships in materials leveraging microstructure representation from foundational vision transformers [EB/OL]. (2025-06-26) [2025-07-09]. https://arxiv.org/abs/2501.18637.