Configuring Parallelism for Hybrid Layouts Using Multi-Objective Optimization
Publication: Contribution to a journal › Research article › Contributed › Peer-reviewed
Abstract
Modern organizations typically store their data in a raw format in data lakes. These data are then processed and usually stored under hybrid layouts, because such layouts support projection and selection operations and thus, when required, allow less data to be read from disk. However, distributed processing frameworks (e.g., Hadoop, Spark) do not exploit this well when analytical queries are posed: they divide the data into multiple partitions and process each partition in a separate task, creating tasks based on the total file size rather than the actual size of the data to be read. This typically leads to launching more tasks than needed, which in turn increases the query execution time and wastes significant computing resources. To use resources more efficiently and reduce the query execution time, we propose a method that decides the number of tasks based on the data being read. To this end, we first propose a cost-based model for estimating the size of the data read in hybrid layouts. Next, we use the estimated read size in a multi-objective optimization method to decide the number of tasks and the computational resources to be used. We prototyped our solution for Apache Parquet and Spark and found that our estimations are highly correlated (0.96) with real executions. Further, using TPC-H, we show that our recommended configurations are only 5.6% away from the Pareto front and provide a 2.1× speedup compared with default configurations.
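To illustrate the core idea described in the abstract, the following is a minimal PySpark sketch, not the paper's implementation: it sizes the number of tasks from an externally estimated read volume (which the paper derives from its cost-based model over the hybrid layout) instead of the total file size Spark would otherwise use. The path, column names, byte estimates, and per-task target are hypothetical placeholders; the columns merely mirror the TPC-H lineitem schema mentioned in the evaluation.

```python
# Hypothetical sketch (not the authors' method): derive the task count from an
# assumed estimate of the bytes actually read after projection and selection,
# rather than from the total size of the Parquet files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-size-aware-parallelism").getOrCreate()

# Assumed inputs: estimated read size (e.g., from a cost model over Parquet
# column-chunk metadata) and a target amount of data per task.
estimated_read_bytes = 2 * 1024**3       # placeholder estimate: 2 GiB
target_bytes_per_task = 128 * 1024**2    # placeholder target: 128 MiB per task

num_tasks = max(1, estimated_read_bytes // target_bytes_per_task)

df = (spark.read.parquet("s3://bucket/lineitem")    # hypothetical path
          .select("l_orderkey", "l_extendedprice")  # projection pushdown
          .filter("l_shipdate >= '1995-01-01'"))    # selection pushdown

# Repartition according to the estimated read size instead of the number of
# input splits Spark derives from the file size.
result = df.repartition(int(num_tasks))
```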
Details
| Original language | English |
|---|---|
| Pages (from-to) | 235-247 |
| Number of pages | 13 |
| Journal | Big Data |
| Volume | 8 |
| Issue number | 3 |
| Publication status | Published - 1 June 2020 |
| Peer-review status | Yes |
External IDs

| Scopus | 85085960812 |
|---|---|
| PubMed | 32397735 |
| ORCID | /0000-0001-8107-2775/work/142253443 |
Keywords
- big data, hybrid storage layouts, parallelism, Parquet, Spark