A Memory-oriented Optimization Approach to Reinforcement Learning on FPGA-based Embedded Systems.
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
Reinforcement Learning (RL) is the machine learning method that comes closest to human-like learning. While Deep RL is increasingly popular for complex applications such as AI-based gaming, it carries a high implementation cost in terms of both power and latency. Q-Learning, in contrast, is a much simpler method and is therefore more feasible to implement on resource-constrained embedded systems for control and navigation. However, the optimal policy search in Q-Learning is a compute-intensive and inherently sequential process, and a software-only implementation may not satisfy the latency and throughput constraints of such applications. To this end, we propose a novel accelerator design with multiple design trade-offs for implementing Q-Learning on FPGA-based SoCs. Specifically, we analyze the stages of the Epsilon-Greedy algorithm for RL and propose a novel microarchitecture that reduces latency by optimizing the memory accesses performed in each iteration. We then present multiple designs that provide varying trade-offs between performance, power dissipation, and resource utilization of the accelerator. With the proposed approach, we report considerable improvement in throughput with lower resource utilization over state-of-the-art design implementations.
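The paper itself contains no code, but the per-iteration structure it accelerates, tabular Q-Learning with an Epsilon-Greedy policy, can be sketched as follows. This is a minimal software illustration under assumed conditions (the toy chain environment, all function and parameter names are illustrative, not taken from the paper); each iteration performs a read-modify-write on the Q-table, which is the memory access pattern the proposed microarchitecture optimizes.

```python
import random

def epsilon_greedy_q_learning(n_states=5, n_actions=2, episodes=500,
                              alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-Learning on a toy chain: action 1 moves right, action 0
    moves left; reaching the last state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    # The Q-table is the central data structure; every iteration reads one
    # row, reads the successor row, and writes one entry back.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-Greedy action selection: explore with probability epsilon,
            # otherwise act greedily on the current Q-row.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            # Deterministic chain transition and reward.
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Bellman update: the write to Q[s][a] depends on the reads just
            # made, which is the sequential dependency noted in the abstract.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = epsilon_greedy_q_learning()
```

After training on this toy chain, the greedy policy moves right in every non-terminal state; in hardware, the action selection and the Q-table read-modify-write of each iteration are the stages whose memory accesses the accelerator overlaps.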
Details
| Original language | English |
| --- | --- |
| Title of host publication | GLSVLSI 2021 - Proceedings of the 2021 Great Lakes Symposium on VLSI |
| Pages | 339-346 |
| Number of pages | 8 |
| Publication status | Published - 22 Jun 2021 |
| Peer-reviewed | Yes |
External IDs

| Source | ID |
| --- | --- |
| Scopus | 85109211240 |
Keywords
- energy-efficient computing
- fpga
- hardware accelerators
- high-level synthesis
- memory-centric computing