Human-inspired models for tactile computing

Research output: Contribution to book/Conference proceedings/Anthology/Report › Chapter in book/Anthology/Report › Contributed

Abstract

The development of Tactile Internet with Human-in-the-Loop (TaHiL) applications faces many challenges for a successful interplay between Cyber-Physical Systems (CPSs) and humans. Technical constraints on communication latency, computation time, energy, and failure rates have to be met to ensure a seamless and safe integration. Furthermore, the systems developed should be able to predict human actions to enable timely reactions, support humans in their decisions, and explain the system's behavior to the human appropriately. Currently, there is a lack of models and algorithms that evaluate human goal-directed behavior while fulfilling the constraints required in Tactile Internet (TI) applications.

In this chapter, we sketch first ideas towards automated human-style reasoning using stochastic operational models that mimic human decision-making. For this, we consider recent insights from cognitive neuroscience and formal methods, bridging the gap between the human brain and computation. Besides being useful for predicting human behavior, we expect our models also to be suitable for efficient decision-making by machines. The latter is motivated by the fact that the human brain is remarkably efficient when operating under hard constraints; our operational models for human reasoning could hence be the basis for novel algorithms that meet the requirements of TI applications. We present prerequisites, first steps, and concepts, as well as challenges towards integrated, automated human-style reasoning.

Details

Original language: English
Title of host publication: Tactile Internet
Editors: Frank H.P. Fitzek, Shu-Chen Li, Stefanie Speidel, Thorsten Strufe, Meryem Simsek, Martin Reisslein
Publisher: Academic Press
Chapter: 8
Pages: 169-195
Number of pages: 27
ISBN (electronic): 9780128213438
ISBN (print): 978-0-12-821343-8
Publication status: Published - 1 Jan 2021
Peer-reviewed: No

External IDs

  • Scopus: 85158991234
  • ORCID: /0000-0002-5321-9343/work/142236761

Keywords

  • Human-style reasoning
  • goal-directed control
  • reinforcement learning
  • stochastic operational models