A Framework for the Execution of Python Tests in SystemC and Specman Testbenches

Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › Peer-reviewed

Abstract

Modern HW/SW co-design approaches require the parallel development of hardware (HW) and software (SW) in order to meet demanding time-to-market goals. This, in turn, means that the HW and SW development teams work with different platforms and tools, which creates a gap between the two domains and makes reuse difficult. The SW team usually develops and tests its components against a virtual prototype, implemented in SystemC for example. To make this model available early, a higher abstraction level is chosen, which leads to differences compared to the real HW. It is therefore necessary to validate the functionality of the SW components on a more accurate platform as well, e.g. RTL simulation, which is typically used by the HW development team and carried out in Specman testbenches. In this work, we present a solution to close the gap between top-level SystemC and RTL simulations. We propose a framework that allows the same test cases, implemented in Python, to be executed on both simulation platforms without additional effort. The SW development team thus has easy and flexible access to both SystemC and RTL simulation. For this purpose, we have implemented a Python API for SystemC and Specman testbenches. The API facilitates scripted host interaction with the device under test (DUT) as well as simulation control.
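
The record does not reproduce the API itself; the listing below is only a minimal sketch of the idea stated in the abstract, i.e. a single Python test body that reaches the DUT through a platform-agnostic host interface that can be bound to either the SystemC or the Specman backend. All names (HostInterface, write_reg, read_reg, wait_ns) and the register addresses are hypothetical assumptions, not the authors' actual interface.

# Hypothetical sketch only; names and addresses are assumptions, not the paper's API.
from abc import ABC, abstractmethod


class HostInterface(ABC):
    """Abstracts host access to the DUT and simulation control.

    Concrete subclasses would bind these calls either to the SystemC virtual
    prototype or to the Specman/RTL testbench, so the same Python test runs
    on both platforms.
    """

    @abstractmethod
    def write_reg(self, addr: int, value: int) -> None: ...

    @abstractmethod
    def read_reg(self, addr: int) -> int: ...

    @abstractmethod
    def wait_ns(self, delay: int) -> None: ...


def test_status_register(host: HostInterface) -> None:
    # The test body is identical regardless of the underlying simulation platform.
    host.write_reg(0x0000, 0x1)      # enable the block via a control register
    host.wait_ns(100)                # advance simulation time
    status = host.read_reg(0x0004)   # read back a status register
    assert status & 0x1, "block did not report the enabled state"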

Details

Original language: English
Title of host publication: 2022 Design and Verification Conference and Exhibition Europe (DVCon)
Publication status: Published - Dec 2022
Peer-reviewed: Yes

Conference

Title: 2022 Design and Verification Conference and Exhibition Europe
Abbreviated title: DVCon
Duration: 6 - 7 December 2022
Degree of recognition: International event
City: München
Country: Germany

External IDs

ORCID /0000-0003-2571-8441/work/142240555
ORCID /0000-0001-5005-0928/work/142241973