How Interpretable Machine Learning Can Benefit Process Understanding in the Geosciences
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Interpretable Machine Learning (IML) has rapidly advanced in recent years, offering new opportunities to improve our understanding of the complex Earth system. IML goes beyond conventional machine learning by not only making predictions but also seeking to elucidate the reasoning behind those predictions. The combination of predictive power and enhanced transparency makes IML a promising approach for uncovering relationships in data that may be overlooked by traditional analysis. Despite its potential, the broader implications for the field have yet to be fully appreciated. Meanwhile, the rapid proliferation of IML, still in its early stages, has been accompanied by instances of careless application. In response to these challenges, this paper focuses on how IML can effectively and appropriately aid geoscientists in advancing process understanding, an aspect that is often underexplored in more technical discussions of IML. Specifically, we identify pragmatic application scenarios for IML in typical geoscientific studies, such as quantifying relationships in specific contexts, generating hypotheses about potential mechanisms, and evaluating process-based models. Moreover, we present a general and practical workflow for using IML to address specific research questions. We also identify several critical and common pitfalls in the use of IML that can lead to misleading conclusions, and propose corresponding good practices. Our goal is to facilitate a broader, yet more careful and thoughtful integration of IML into Earth science research, positioning it as a valuable data science tool capable of enhancing our current understanding of the Earth system.
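To illustrate the kind of interpretation the abstract refers to, the sketch below applies one common IML technique, permutation feature importance, to a fitted model. This is a minimal, hypothetical example and not the paper's workflow; the synthetic data and the feature names (precipitation, temperature, soil_moisture) are placeholders chosen for illustration.

```python
# Minimal sketch of an IML technique (permutation feature importance).
# Assumptions: synthetic data and hypothetical geoscientific feature names;
# this is NOT the workflow described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: precipitation, temperature, soil moisture
X = rng.normal(size=(n, 3))
# Synthetic target: a nonlinear combination of the first two predictors plus noise
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does shuffling each feature degrade skill?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, imp in zip(["precipitation", "temperature", "soil_moisture"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this toy setting, the third feature should receive near-zero importance, which is the kind of post hoc attribution that, used carefully, can support hypothesis generation and model evaluation as discussed in the abstract.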
Details
| Original language | English |
|---|---|
| Article number | e2024EF004540 |
| Journal | Earth's Future |
| Volume | 12 |
| Issue number | 7 |
| Publication status | Published - Jul 2024 |
| Peer-reviewed | Yes |
Keywords
- big data, interpretability, interpretable machine learning, knowledge discovery, machine learning, XAI