Job Performance Overview of Apache Flink and Apache Spark Applications
Research output: Contribution to conferences › Poster › Contributed › peer-review
Abstract
Apache Spark and Apache Flink are two Big Data frameworks used for fast data exploration and analysis. Both frameworks report the runtime of program sections and performance metrics, such as the number of bytes read or written, via an integrated dashboard. The performance metrics available in the dashboard lack timing information and are shown only in aggregated form in a separate part of the dashboard. However, performance investigations and optimizations would benefit from an integrated view with detailed performance metric events. We therefore propose a system that samples metrics at runtime and collects information about the program sections after the execution finishes. The performance data is stored in an established format that is independent of the Spark and Flink versions and can be viewed with state-of-the-art performance tools, i.e., Vampir. The overhead depends on the sampling interval and remained below 10% in our experiments.
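The poster abstract does not publish implementation details. As a rough illustration of the kind of runtime metric sampling described above, the following Scala sketch registers a listener via Spark's public SparkListener API and reports per-task run time and bytes read/written as tasks finish. The class and job names are hypothetical, and the conversion of the collected events into a trace format readable by tools such as Vampir is omitted.

```scala
// Illustrative sketch only; not the system presented in the poster.
import org.apache.spark.sql.SparkSession
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Hypothetical collector class; prints one line per finished task.
class TaskMetricsCollector extends SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val m = taskEnd.taskMetrics
    if (m != null) {
      // A real system would buffer these events with timestamps and
      // convert them to an established trace format after the job ends.
      println(
        s"stage=${taskEnd.stageId} task=${taskEnd.taskInfo.taskId} " +
        s"runTimeMs=${m.executorRunTime} " +
        s"bytesRead=${m.inputMetrics.bytesRead} " +
        s"bytesWritten=${m.outputMetrics.bytesWritten}")
    }
  }
}

object MetricsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("metrics-demo").getOrCreate()
    // Register the listener so task-end events are reported at runtime.
    spark.sparkContext.addSparkListener(new TaskMetricsCollector)

    // Small example job so that task-end events are produced.
    spark.range(0, 1000000).selectExpr("sum(id)").collect()

    spark.stop()
  }
}
```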
Details
Original language | English |
---|---|
Publication status | Published - 2019 |
Peer-reviewed | Yes |
Conference
Title | The International Conference for High Performance Computing, Networking, Storage, and Analysis |
---|---|
Abbreviated title | SC 19 |
Duration | 17 - 22 November 2019 |
Website | |
Location | Colorado Convention Center |
City | Denver |
Country | United States of America |
External IDs
ORCID | /0009-0007-5755-1427/work/142250922 |
---|---|