2016
Re-ordering Memory Requests for Improving the Performance of GPUs
GPU 성능 향상을 위한 메모리 요청 재배치 기법
한국차세대컴퓨팅학회
Article Information
- Publisher
- 한국차세대컴퓨팅학회 논문지
- Issue Date
- 2016-12-31
- Volume
- 12
- Number
- 6
- Start Page
- 7
- End Page
- 18
- ISSN
- 1975-681X
Abstract
Graphics Processing Units (GPUs), with their massively parallel architecture, are able to exploit thread-level parallelism. In particular, with programming models such as CUDA and OpenCL, such architectures have become one of the most attractive platforms for handling not only graphics but also general-purpose applications (GPGPU). In modern GPUs, caches have been introduced to cope with applications that have irregular memory access patterns. However, GPU caches exhibit poor efficiency due to their limited size as well as performance challenges such as cache contention, which results from launching a large number of active threads. In this paper, we propose a technique that uses a small number of simple queues to order memory requests to the L1 data cache in a more cache-friendly manner than the baseline cache management. Experimental results show that our technique improves GPU cache performance over the baseline architecture, increasing IPC by 4.3% on average.
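The abstract does not spell out the exact queueing policy, so the following is only a minimal illustrative sketch of the general idea: a small number of simple FIFO queues is used to cluster memory requests that target the same L1 data cache line, so that same-line requests reach the cache back-to-back instead of interleaved. The names `reorder_requests`, `LINE_SIZE`, and `num_queues` are assumptions for illustration, not the paper's identifiers.

```python
from collections import deque

# Assumed L1D cache-line size in bytes; a common value in GPUs,
# not taken from the paper.
LINE_SIZE = 128

def reorder_requests(requests, num_queues=4):
    """Reorder memory requests (byte addresses) using a small number
    of FIFO queues so that requests to the same cache line tend to be
    issued adjacently. Illustrative sketch, not the paper's policy."""
    queues = [deque() for _ in range(num_queues)]
    # Hash each request's cache-line address onto one of the queues;
    # requests to the same line always land in the same queue.
    for addr in requests:
        line = addr // LINE_SIZE
        queues[line % num_queues].append(addr)
    # Drain the queues one after another, preserving arrival order
    # within each queue.
    ordered = []
    for q in queues:
        ordered.extend(q)
    return ordered

# Example: requests to lines 0, 2, 0, 1, 2 get clustered by line.
print(reorder_requests([0, 256, 4, 128, 260]))  # [0, 4, 128, 256, 260]
```

With the baseline (unordered) stream, the two requests to line 0 (addresses 0 and 4) are separated by a request to line 2; after reordering they hit the L1D consecutively, which is the kind of access clustering that reduces cache contention.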
- Chonnam National University
- KCI
Author Information
| Name | Affiliation |
|---|---|
| No data registered. | |