Abstract
Aggressive prefetching mechanisms improve the performance of some important applications, but substantially increase bus traffic and "pressure" on cache tag arrays. They may even reduce the performance of applications that are not memory bound. We introduce a "feedback" mechanism, termed the Prefetcher Assessment Buffer (PAB), which filters out prefetch requests that are unlikely to be useful. With it, applications that cannot benefit from aggressive prefetching do not suffer from its side-effects. The PAB is evaluated with different configurations, e.g., "all L1 accesses trigger prefetches" and "only misses to L1 trigger prefetches". Compared with the non-selective concurrent use of multiple prefetchers, applying the PAB to prefetching from main memory into the L2 cache can reduce the number of loads from main memory by up to 25% without losing performance. Applying more sophisticated techniques to prefetches between the L2 and L1 caches can increase IPC by 4% while reducing the traffic between the caches eight-fold.
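
The abstract describes the PAB only at a high level: a feedback filter that suppresses prefetch requests judged unlikely to be useful. The C sketch below is one illustrative way such a filter could be organized, not the paper's actual design. It assumes a per-prefetcher buffer of recent prefetch candidates that is credited whenever a later demand access touches a predicted block, and it only lets prefetchers whose recent candidates were sufficiently useful issue real requests. All names (`pab_record`, `pab_demand_access`, `pab_allow`), the 64-entry buffer size, and the 25% usefulness threshold are assumptions chosen for the example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PREFETCHERS 2   /* e.g., stride and next-line (illustrative) */
#define PAB_ENTRIES     64  /* candidate addresses tracked per prefetcher (assumed) */
#define MIN_USEFUL_PCT  25  /* issue threshold: >= 25% of candidates used (assumed) */

/* One assessment buffer per prefetcher: a small FIFO of candidate
 * (not yet issued) prefetch block addresses plus usefulness counters. */
typedef struct {
    uint64_t addr[PAB_ENTRIES];
    bool     valid[PAB_ENTRIES];
    int      head;
    unsigned candidates;  /* prefetch addresses recorded so far */
    unsigned useful;      /* candidates later hit by a demand access */
} pab_t;

static pab_t pab[NUM_PREFETCHERS];

/* Record a prefetch candidate instead of issuing it blindly. */
static void pab_record(int pf, uint64_t block_addr)
{
    pab_t *b = &pab[pf];
    b->addr[b->head]  = block_addr;
    b->valid[b->head] = true;
    b->head = (b->head + 1) % PAB_ENTRIES;
    b->candidates++;
}

/* On every demand access, credit any prefetcher that had predicted
 * this block: its candidate would have been a useful prefetch. */
static void pab_demand_access(uint64_t block_addr)
{
    for (int pf = 0; pf < NUM_PREFETCHERS; pf++) {
        pab_t *b = &pab[pf];
        for (int i = 0; i < PAB_ENTRIES; i++) {
            if (b->valid[i] && b->addr[i] == block_addr) {
                b->valid[i] = false;
                b->useful++;
            }
        }
    }
}

/* Gate the real prefetch requests: only prefetchers whose recent
 * candidates were sufficiently useful may touch the tag array and bus. */
static bool pab_allow(int pf)
{
    pab_t *b = &pab[pf];
    if (b->candidates < PAB_ENTRIES)      /* warm-up: stay conservative */
        return false;
    return 100u * b->useful >= MIN_USEFUL_PCT * b->candidates;
}

int main(void)
{
    /* Tiny demo: prefetcher 0 predicts a sequential stream that the
     * demand accesses actually follow; prefetcher 1 predicts mostly
     * useless blocks and is therefore filtered out. */
    for (uint64_t a = 0; a < 256; a++) {
        pab_record(0, a + 1);          /* next-line candidate */
        pab_record(1, a * 7919 + 13);  /* useless candidate   */
        pab_demand_access(a);
    }
    for (int pf = 0; pf < NUM_PREFETCHERS; pf++)
        printf("prefetcher %d: useful %u/%u -> %s\n", pf,
               pab[pf].useful, pab[pf].candidates,
               pab_allow(pf) ? "issue prefetches" : "filter out");
    return 0;
}
```

In this sketch the assessment happens before any bandwidth is spent, which mirrors the abstract's stated goal of sparing applications that cannot benefit from aggressive prefetching from its side-effects; a real hardware design would bound the lookup cost and age out stale candidates.
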
Original language | English |
---|---|
Pages (from-to) | 171-188 |
Number of pages | 18 |
Journal | International Journal of Parallel Programming |
Volume | 34 |
Issue number | 2 |
DOIs | |
State | Published - Apr 2006 |
Keywords
- Cache tag pressure
- Memory wall
- Prefetching
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Information Systems