Explainability of Neural Networks for Symbol Detection in Molecular Communication Channels

Research output: Contribution to journal › Research article › Contributed › peer-review

Abstract

Recent molecular communication (MC) research suggests machine learning (ML) models for symbol detection, circumventing the infeasibility of deriving end-to-end channel models. However, ML models are applied as black boxes, lacking proof that the underlying neural networks (NNs) correctly detect incoming symbols. This paper studies approaches to the explainability of NNs for symbol detection in MC channels. Based on MC channel models and real testbed measurements, we generate synthesized data and train an NN model to detect binary transmissions in MC channels. Using the local interpretable model-agnostic explanation (LIME) method and individual conditional expectation (ICE) plots, the findings in this paper demonstrate the analogy between the trained NN and standard peak and slope detectors.
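
The sketch below (not taken from the paper) illustrates the general idea behind one of the two explanation methods named in the abstract: an ICE analysis sweeps a single input sample of a received signal while holding the rest fixed and records how the detector's output changes. The detector model `detector`, the received-signal matrix `X`, and the index `peak_idx` of the swept sample are hypothetical placeholders, assuming a generic NN classifier with a probability-style prediction function.

    import numpy as np

    def ice_curves(predict_fn, X, feature_idx, grid):
        """For each received sequence in X, sweep the sample at `feature_idx`
        over `grid` while holding all other samples fixed, and record the
        detector's output. Returns one ICE curve per sequence."""
        curves = np.empty((X.shape[0], len(grid)))
        for i, x in enumerate(X):
            X_mod = np.tile(x, (len(grid), 1))
            X_mod[:, feature_idx] = grid      # vary only the chosen sample
            curves[i] = predict_fn(X_mod)     # e.g. probability of bit "1"
        return curves

    # Hypothetical usage:
    # grid = np.linspace(X[:, peak_idx].min(), X[:, peak_idx].max(), 50)
    # curves = ice_curves(detector.predict, X, feature_idx=peak_idx, grid=grid)

If all ICE curves rise monotonically with the amplitude of the sample near the expected signal peak, the trained NN behaves analogously to a standard peak detector, which is the kind of correspondence the abstract reports.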

Details

Original language: English
Pages (from-to): 323-328
Number of pages: 6
Journal: IEEE Transactions on Molecular, Biological and Multi-Scale Communications (TMBMC)
Volume: 9
Issue number: 3
Publication status: Published - 1 Sept 2023
Peer-reviewed: Yes

External IDs

ORCID /0000-0001-8469-9573/work/161891070

Keywords

  • Explainable AI, individual conditional expectation, local interpretable model-agnostic explanation, machine learning, molecular communication, neural network, testbed