Configuring Parallelism for Hybrid Layouts Using Multi-Objective Optimization
Research output: Contribution to journal › Research article › Contributed › peer-review
Abstract
Modern organizations typically store their data in a raw format in data lakes. These data are then processed and usually stored under hybrid layouts, because hybrid layouts support projection and selection operations and thus allow (when required) less data to be read from disk. However, distributed processing frameworks (e.g., Hadoop, Spark) do not exploit this well when analytical queries are posed. These frameworks divide the data into multiple partitions and process each partition in a separate task, creating tasks based on the total file size rather than the actual amount of data to be read. This typically leads to launching more tasks than needed, which in turn increases query execution time and wastes significant computing resources. To use resources more efficiently and reduce query execution time, we propose a method that decides the number of tasks based on the data actually being read. To this end, we first propose a cost-based model for estimating the size of the data read in hybrid layouts. Next, we use the estimated reading size in a multi-objective optimization method to decide the number of tasks and the computational resources to be used. We prototyped our solution for Apache Parquet and Spark and found that our estimates are highly correlated (0.96) with real executions. Further, using TPC-H, we show that our recommended configurations are only 5.6% away from the Pareto front and provide a 2.1× speedup compared with default configurations.
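For illustration only, the sketch below shows the general idea described in the abstract: sizing Spark parallelism from an estimated read volume rather than from the total file size. The estimator `estimateBytesRead`, the column fraction, the selectivity, the 128 MiB target partition size, and the input/output paths are hypothetical stand-ins for the paper's cost-based model and multi-objective optimizer, not the authors' actual implementation.

```scala
import org.apache.spark.sql.SparkSession

object ParallelismSketch {
  // Hypothetical stand-in for a cost-based model: estimate the bytes actually
  // read from a Parquet data set given the fraction of columns projected and
  // the selectivity of the pushed-down predicates (both assumed known here).
  def estimateBytesRead(totalBytes: Long,
                        projectedColumnFraction: Double,
                        predicateSelectivity: Double): Long =
    (totalBytes * projectedColumnFraction * predicateSelectivity).toLong

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parallelism-sketch")
      .getOrCreate()

    val totalBytes = 64L * 1024 * 1024 * 1024           // e.g., 64 GiB of Parquet files
    val estimated  = estimateBytesRead(totalBytes, 0.25, 0.10)

    // Derive the number of tasks from the estimated read volume,
    // not from the total file size.
    val targetPartitionBytes = 128L * 1024 * 1024       // assumed 128 MiB per task
    val numTasks = math.max(1, math.ceil(estimated.toDouble / targetPartitionBytes).toInt)

    val df = spark.read.parquet("hdfs:///data/lineitem")   // hypothetical path
      .select("l_orderkey", "l_extendedprice")             // projection served by Parquet
      .where("l_shipdate >= '1995-01-01'")                 // predicate pushed down

    // Apply the derived degree of parallelism to the query.
    df.repartition(numTasks)
      .write.mode("overwrite").parquet("hdfs:///out/q")    // hypothetical path

    spark.stop()
  }
}
```

In the paper itself, the number of tasks and the computational resources are chosen jointly by the multi-objective optimization method; the fixed per-task target above only stands in for that trade-off.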
Details
| | |
|---|---|
| Original language | English |
| Pages (from-to) | 235-247 |
| Number of pages | 13 |
| Journal | Big Data |
| Volume | 8 |
| Issue number | 3 |
| Publication status | Published - 1 Jun 2020 |
| Peer-reviewed | Yes |
External IDs
| | |
|---|---|
| Scopus | 85085960812 |
| PubMed | 32397735 |
| ORCID | /0000-0001-8107-2775/work/142253443 |
Keywords
- big data, hybrid storage layouts, parallelism, Parquet, Spark