Developing a Benchmarking Framework for a Ready-to-Use RBB Library for Forest Ecology: pyMANGA

Research output: Contribution to conferences › Presentation slides › Contributed › peer-review

Abstract

Computer models are commonly used in forest ecology to simulate how trees adapt to changes, such as those caused by climate change, and how this is ultimately reflected in the response patterns of forest stands. However, models, especially those for specific research questions, are often developed ad hoc, which can make them error-prone and inefficient. To address this issue, we have developed pyMANGA (PYthon Models for AgeNt-based resource Gathering), a platform that collects model descriptions for the growth of tree-like plants in response to environmental conditions as well as to competition and facilitation among neighboring trees. pyMANGA is not only a collection of single descriptions, or reusable building blocks (RBBs), but a ready-to-use model library in which RBBs can be switched on and off, so that a model can be configured to meet different research demands. The development of the platform is based on four design objectives: (i) modularity, (ii) transparency, (iii) automation, and (iv) fostering contribution. Modularity is achieved using object-oriented programming (OOP) paradigms, whereas the other objectives are supported by several GitHub services. However, the flexibility needed for the platform to grow was not fully in place from the beginning. One reason, admittedly, is that we are not trained programmers. Another is that the reusability of individual functions or building blocks was often not recognized during the initial implementation. As a result, the continuous expansion of pyMANGA often requires adapting existing RBBs. To ensure the functionality of the individual RBBs and of the platform as a whole, we have developed a benchmarking framework for pyMANGA. For each RBB, a benchmark following defined design rules must be provided. From a technical perspective, this allows us to test the functionality of new RBBs, i.e., the functioning of their interfaces, and to verify code functionality after platform updates, i.e., to test whether model output is similar before and after an update. Moreover, these benchmarks are a first test of model consistency, i.e., whether the model does what it is supposed to do, and they allow users to compare RBBs with other implementations, e.g., in other programming languages. While sharing models together with (complex) case studies is common practice, we advocate presenting RBBs along with simple, easy-to-replicate benchmarks. Here, we present our benchmarking framework and its integration into an automated testing workflow.
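To illustrate the modularity objective, the following minimal sketch shows how an OOP interface can let growth RBBs be switched on and off through a configuration entry. All names here (GrowthRBB, LogisticGrowth, LinearGrowth, build_growth_module, the config layout) are hypothetical illustrations of the pattern, not pyMANGA's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the RBB pattern; names do not reflect pyMANGA's real API.

class GrowthRBB(ABC):
    """Common interface every growth building block must implement."""

    @abstractmethod
    def progress(self, biomass: float, resources: float, dt: float) -> float:
        """Return the plant's biomass after one time step of length dt."""


class LogisticGrowth(GrowthRBB):
    """Toy RBB: logistic growth scaled by locally available resources."""

    def __init__(self, rate: float = 0.1, capacity: float = 100.0):
        self.rate = rate
        self.capacity = capacity

    def progress(self, biomass, resources, dt):
        growth = self.rate * resources * biomass * (1.0 - biomass / self.capacity)
        return biomass + growth * dt


class LinearGrowth(GrowthRBB):
    """Toy RBB: constant resource-limited increment per time step."""

    def __init__(self, rate: float = 0.5):
        self.rate = rate

    def progress(self, biomass, resources, dt):
        return biomass + self.rate * resources * dt


# A registry lets a configuration file switch RBBs on and off by name.
GROWTH_RBBS = {"Logistic": LogisticGrowth, "Linear": LinearGrowth}

def build_growth_module(config: dict) -> GrowthRBB:
    """Instantiate the growth RBB named in the (hypothetical) config."""
    cls = GROWTH_RBBS[config["growth"]["type"]]
    return cls(**config["growth"].get("parameters", {}))


if __name__ == "__main__":
    config = {"growth": {"type": "Logistic", "parameters": {"rate": 0.2}}}
    module = build_growth_module(config)
    print(module.progress(biomass=1.0, resources=0.8, dt=1.0))
```

The design point is that every RBB satisfies the same abstract interface, so swapping one implementation for another only touches the configuration, never the simulation loop.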
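The regression aspect of the benchmarks, i.e., checking that model output is similar before and after a platform update, can be sketched as a test that compares a fresh run against a stored reference result. The reference file path, tolerance, and toy growth scenario below are assumptions chosen for illustration and are not taken from pyMANGA.

```python
import numpy as np

# Hypothetical regression benchmark: a trusted version of an RBB once
# produced the reference trajectory, and every later platform update
# must reproduce it within a tight tolerance.

REFERENCE_FILE = "benchmarks/logistic_growth_reference.npy"  # assumed path

def run_benchmark(steps: int = 50) -> np.ndarray:
    """Run a fixed, easy-to-replicate growth scenario (toy logistic model)."""
    biomass, trajectory = 1.0, []
    for _ in range(steps):
        # rate = 0.2, resources = 0.8, capacity = 100 are frozen on purpose:
        # a benchmark must pin every input so runs stay comparable.
        biomass += 0.2 * 0.8 * biomass * (1.0 - biomass / 100.0)
        trajectory.append(biomass)
    return np.asarray(trajectory)

def test_output_matches_reference():
    """pytest regression test: fail if an update changes the output."""
    result = run_benchmark()
    reference = np.load(REFERENCE_FILE)
    np.testing.assert_allclose(result, reference, rtol=1e-8)


if __name__ == "__main__":
    # Run once with a trusted version to (re)generate the reference.
    np.save(REFERENCE_FILE, run_benchmark())
```

Run under pytest, any update that changes the benchmark trajectory beyond the tolerance fails the test, flagging either a bug or an intended change whose reference must then be regenerated deliberately.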

Details

Original language: German
Publication status: Published - Sept 2023
Peer-reviewed: Yes

Conference

Title: 9th European Conference on Ecological Modelling
Subtitle: Ecological Modelling for Transformation
Abbreviated title: ECEM 2023
Conference number: 9
Duration: 4–8 September 2023
Degree of recognition: International event
Location: Helmholtz-Zentrum für Umweltforschung – UFZ
City: Leipzig
Country: Germany