Publication


Evaluating the Effect of Last-Level Cache Sharing on Integrated GPU-CPU Systems with Heterogeneous Applications

Authors:
Garcia, V., Gomez-Luna, J., Grass, T., Rico, A., Ayguade, E., Peña, A.
Book Title:
Proceedings of the 2016 IEEE International Symposium on Workload Characterization (IISWC)
Date:
2016
DOI:
10.1109/IISWC.2016.7581277
Language:
English

Abstract

Heterogeneous systems are ubiquitous in the field of High-Performance Computing (HPC). Graphics processing units (GPUs) are widely used as accelerators for their enormous computing potential and energy efficiency; furthermore, on-die integration of GPUs and general-purpose cores (CPUs) enables unified virtual address spaces and seamless sharing of data structures, improving programmability and softening the entry barrier for heterogeneous programming. Although on-die GPU integration appears to be the trend among the major microprocessor manufacturers, many open questions remain regarding the architectural design of these systems. This paper is a step towards understanding the effect of on-chip resource sharing between GPU and CPU cores, and in particular the impact of last-level cache (LLC) sharing in heterogeneous computations. To this end, we analyze the behavior of a variety of heterogeneous GPU-CPU benchmarks on different cache configurations. We first evaluate the popular Rodinia benchmark suite, modified to leverage the unified memory address space. We find such GPGPU workloads to be mostly insensitive to changes in the cache hierarchy due to the limited interaction and data sharing between GPU and CPU. We then evaluate a set of heterogeneous benchmarks specifically designed to take advantage of the fine-grained data sharing and low-overhead synchronization between GPU and CPU cores that these integrated architectures enable. We show that these algorithms are more sensitive to the design of the cache hierarchy, and find that when GPU and CPU share the LLC, execution times are reduced by 25% on average and energy-to-solution by over 20% for all benchmarks.
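The kind of seamless data-structure sharing the abstract refers to can be illustrated with a minimal, hypothetical sketch (not taken from the paper): CPU and GPU touch the same allocation through a unified address space, here via CUDA managed memory, so no explicit host-device copies are needed.

```cuda
// Hypothetical sketch of fine-grained CPU-GPU sharing through a
// unified virtual address space (CUDA managed memory); the kernel
// name and sizes are illustrative, not from the paper.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(int *data, int n, int factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // GPU writes the shared array in place
}

int main() {
    const int n = 1024;
    int *data;
    // One allocation visible to both CPU and GPU: no cudaMemcpy needed.
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;      // CPU initializes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2);  // GPU uses the same pointer
    cudaDeviceSynchronize();                      // low-overhead sync point
    printf("data[10] = %d\n", data[10]);          // CPU reads the GPU result
    cudaFree(data);
    return 0;
}
```

On the integrated architectures the paper studies, such shared data would flow through the cache hierarchy (including, in some configurations, a shared LLC) rather than over an off-chip copy, which is what makes the LLC design decision consequential.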
