Apache Spark and Apache Flink are two Big Data frameworks used for fast data exploration and analysis. Both frameworks report the runtime of program sections and performance metrics, such as the number of bytes read or written, via an integrated dashboard. The metrics available in the dashboard lack timing information and are shown only in aggregated form in a separate part of the dashboard. However, performance investigations and optimizations would benefit from an integrated view with detailed performance metric events. Thus, we propose a system that samples metrics at runtime and collects information about the program sections after the execution finishes. The performance data is stored in an established format independent of Spark and Flink versions and can be viewed with state-of-the-art performance tools, e.g. Vampir. The overhead depends on the sampling interval and was below 10% in our experiments.
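The core mechanism described above, sampling framework counters at a fixed interval in the background, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `read_metrics` is a hypothetical callable standing in for a Spark/Flink metric source (e.g. bytes read/written), and the sampling interval is the knob that trades overhead against resolution.

```python
import threading
import time


class MetricSampler:
    """Periodically samples a metric source in a background thread.

    A minimal sketch of interval-based metric sampling. The metric
    source is a hypothetical callable returning a dict of counters;
    in the proposed system such counters would come from Spark or
    Flink (e.g. bytes read/written).
    """

    def __init__(self, read_metrics, interval_s=0.05):
        self.read_metrics = read_metrics  # callable returning a dict of counters
        self.interval_s = interval_s      # larger interval -> lower overhead
        self.samples = []                 # list of (timestamp, metrics) pairs
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Record a timestamped snapshot, then wait one interval
        # (or return early if stop() was requested).
        while not self._stop.is_set():
            self.samples.append((time.time(), self.read_metrics()))
            self._stop.wait(self.interval_s)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

After the execution finishes, the collected `(timestamp, metrics)` pairs could be written out in a trace format and visualized alongside the program-section information.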
|Published - 2019
|The International Conference for High Performance Computing, Networking, Storage, and Analysis
|17 - 22 November 2019
|Colorado Convention Center
|United States of America