Memory Performance of AMD EPYC Rome and Intel Cascade Lake SP Server Processors
Research output: Contribution to book/conference proceedings/anthology/report › Conference contribution › Contributed › peer-review
Modern processors, in particular in the server segment, integrate more cores with each generation. This increases their complexity in general, and that of the memory hierarchy in particular. Software executed on such processors can suffer performance degradation when data is placed disadvantageously across the available resources. To optimize data placement and access patterns, an in-depth analysis of the processor design and its performance implications is necessary. This paper describes and experimentally evaluates the memory hierarchies of AMD EPYC Rome and Intel Xeon Cascade Lake SP server processors in detail. Their distinct microarchitectures cause different performance patterns for memory latencies, in particular for remote cache accesses. Our findings illustrate the complex NUMA properties and how data placement and cache coherence states impact access latencies to local and remote locations. This paper also compares theoretical and effective bandwidths for accessing data at the different memory levels, and shows how main memory bandwidth can saturate at reduced core counts. The presented insight is a foundation for modeling the performance of the given microarchitectures, which enables practical performance engineering of complex applications. Moreover, security research on side-channel attacks can also leverage the presented findings.
Title of host publication: ICPE 2022 - Proceedings of the 2022 ACM/SPEC International Conference on Performance Engineering
Number of pages: 11
Publication status: Published - 9 Apr 2022
Keywords
- AMD EPYC Rome
- AMD Zen 2
- Intel Xeon Cascade Lake
- Intel Xeon Skylake
- cache coherence
- memory hierarchy