Abstract
The increasing adoption of GPUs as mainstream computing devices, coupled with the imminent availability of large, high-bandwidth caches based on die-stacked memory, makes it important to analyze and understand modern GPU compute applications from the perspective of their memory access and data reuse characteristics. This paper presents detailed workload characterization studies of four GPU compute applications that process large data sets. The applications studied include tree traversal and search algorithms, a partial differential equation (PDE) solver, and a synthetic array processing application. Our studies indicate that while the memory footprint of these applications can be very large, the effectiveness of several gigabytes of cache may vary significantly across workloads. This suggests that provisioning cache resources in a system based on die-stacked memory must be done carefully, guided by detailed characterization of the target workloads. An added benefit of our work was the discovery that accurate memory characterization data can enable a significantly better GPU thread scheduling strategy that exploits a workload's access characteristics. In particular, for the PDE solver, our analysis led to an optimization that achieved a measured 30% gain in application performance. This paper also describes our analysis methodology for conducting these types of studies. The methodology is based on trace analysis, where the traces capture memory traffic and calls to the GPU compute API. For each application, we highlight the characterization metrics and analysis techniques that were most useful in generating insights about its memory access patterns.