Materials Property Prediction with Graph Transformers and Uncertainty-Calibrated Active Learning
DOI:
https://doi.org/10.71465/fapm547

Keywords:
Graph Neural Networks, Transformer Architecture, Active Learning, Uncertainty Quantification, Materials Informatics

Abstract
The acceleration of materials discovery relies heavily on the ability to predict physicochemical properties of crystal structures with high accuracy and computational efficiency. While Density Functional Theory (DFT) provides precise ground-truth data, its cubic scaling with electron count renders it prohibitive for high-throughput screening of vast chemical spaces. Consequently, machine learning surrogates have emerged as a critical alternative. However, conventional Message Passing Neural Networks (MPNNs) often struggle to capture long-range atomic interactions and suffer from over-smoothing in deep architectures. Furthermore, standard active learning frameworks frequently rely on uncalibrated uncertainty estimates, leading to suboptimal sampling strategies and wasted computational resources. This paper introduces a novel framework: the Graph Transformer with Uncertainty-Calibrated Active Learning (GT-UCAL). We propose a geometric graph transformer architecture that integrates structural positional encodings to capture global topology, coupled with an evidential deep learning mechanism to quantify both aleatoric and epistemic uncertainties. Through rigorous experimentation on the Materials Project and JARVIS datasets, we demonstrate that GT-UCAL achieves state-of-the-art predictive performance while reducing the requisite labeled data by approximately 40% compared to random sampling.
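The abstract's evidential deep learning mechanism for separating aleatoric and epistemic uncertainty is commonly realized via a Normal-Inverse-Gamma output head (as in deep evidential regression). The sketch below is illustrative only — the function names, the `select_batch` acquisition rule, and the parameter values are assumptions, not the paper's implementation — but it shows how the two uncertainty estimates are typically derived from the evidential parameters (γ, ν, α, β) and used to rank unlabeled candidates for active learning:

```python
import numpy as np

def evidential_uncertainties(nu, alpha, beta):
    """Uncertainties from Normal-Inverse-Gamma evidential parameters.

    Standard deep-evidential-regression formulas (assumes alpha > 1):
      aleatoric (data noise)       = beta / (alpha - 1)
      epistemic (model ignorance)  = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

def select_batch(epistemic, k):
    """Hypothetical acquisition rule: pick the k candidates with the
    highest epistemic uncertainty for DFT labeling."""
    return np.argsort(epistemic)[::-1][:k]

# Toy candidate pool of 3 structures with per-structure evidential outputs.
nu    = np.array([2.0, 2.0, 4.0])
alpha = np.array([3.0, 3.0, 3.0])
beta  = np.array([4.0, 8.0, 4.0])
aleatoric, epistemic = evidential_uncertainties(nu, alpha, beta)
batch = select_batch(epistemic, k=2)  # indices of the 2 most uncertain candidates
```

A sampler of this shape only helps if the epistemic estimates are calibrated, which is precisely the failure mode of "uncalibrated uncertainty estimates" the abstract attributes to standard active learning frameworks.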
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.