COMPUTER ORGANIZATION AND DESIGN
The Hardware/Software Interface

Chapter 5: Large and Fast: Exploiting Memory Hierarchy

Memory Technology
■ Static RAM (SRAM): 0.5ns-2.5ns, $2000-$5000 per GB
■ Dynamic RAM (DRAM): 50ns-70ns, $20-$75 per GB
■ Magnetic disk: 5ms-20ms, $0.20-$2 per GB
■ Ideal memory
  ■ Access time of SRAM
  ■ Capacity and cost/GB of disk

Principle of Locality
■ Programs access a small proportion of their address space at any time
■ Temporal locality
  ■ Items accessed recently are likely to be accessed again soon
  ■ e.g., instructions in a loop, induction variables
■ Spatial locality
  ■ Items near those accessed recently are likely to be accessed soon
  ■ e.g., sequential instruction access, array data

Taking Advantage of Locality
■ Memory hierarchy
■ Store everything on disk
■ Copy recently accessed (and nearby) items from disk to smaller DRAM memory
  ■ Main memory
■ Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory
  ■ Cache memory attached to CPU

Memory Hierarchy Levels
■ Block (aka line): the unit of copying between levels
  ■ May be multiple words
■ If accessed data is present in the upper level
  ■ Hit: access satisfied by the upper level
    ■ Hit ratio: hits/accesses
■ If accessed data is absent
  ■ Miss: block copied from the lower level
    ■ Time taken: miss penalty
    ■ Miss ratio: misses/accesses = 1 - hit ratio
  ■ Then accessed data is supplied from the upper level

Cache Memory
■ Cache memory: the level of the memory hierarchy closest to the CPU
■ Given accesses X1, ..., Xn-1, Xn
  ■ How do we know if the data is present?
  ■ Where do we look?
[Figure: cache contents (a) before and (b) after the reference to Xn]

Direct Mapped Cache
■ Location determined by address
■ Direct mapped: only one choice
  ■ (Block address) modulo (#Blocks in cache)
■ If #Blocks is a power of 2, use low-order address bits
[Figure: an 8-block direct-mapped cache; memory blocks 00001, 01001, 10001, 11001 map to cache block 001, and 00101, 01101, 10101, 11101 map to cache block 101]

Tags and Valid Bits
■ How do we know which particular block is stored in a cache location?
  ■ Store the block address as well as the data
  ■ Actually, only need the high-order bits
  ■ Called the tag
■ What if there is no data in a location?
  ■ Valid bit: 1 = present, 0 = not present
  ■ Initially 0
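To make the tag/index/offset split concrete, here is a small C sketch. It is not part of the original slides; the 8-block, one-word-per-block geometry is borrowed from the cache example that follows, and the arithmetic relies on both sizes being powers of 2:

    #include <stdio.h>
    #include <stdint.h>

    /* Assumed geometry, matching the cache example below: 8 blocks,
       one 4-byte word per block, direct mapped. */
    #define NUM_BLOCKS  8u
    #define BLOCK_BYTES 4u

    int main(void) {
        uint32_t addr = 22 * 4;                     /* byte address of word 22 */
        uint32_t offset     = addr % BLOCK_BYTES;   /* byte within the block   */
        uint32_t block_addr = addr / BLOCK_BYTES;
        uint32_t index = block_addr % NUM_BLOCKS;   /* low-order block bits    */
        uint32_t tag   = block_addr / NUM_BLOCKS;   /* remaining high bits     */
        /* Prints index 6 (binary 110) and tag 2 (binary 10), matching the
           first access in the cache example that follows. */
        printf("offset=%u index=%u tag=%u\n", offset, index, tag);
        return 0;
    }

Because NUM_BLOCKS is a power of two, the modulo is just the low-order bits of the block address, which is why the slide says to use low-order address bits.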
Cache Example
■ 8 blocks, 1 word/block, direct mapped
■ Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Cache Example
Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example
Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example
Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example
Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example
Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]   (replaces Mem[11010])
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Address Subdivision
[Figure: a 1024-entry direct-mapped cache with one-word blocks; the 32-bit address splits into a 2-bit byte offset, a 10-bit index (entries 0-1023), and a 20-bit tag; the tag is compared with the stored tag and ANDed with the valid bit to produce Hit, and the 32-bit data word is read out]

Example: Larger Block Size
■ 64 blocks, 16 bytes/block
■ To what block number does address 1200 map?
  ■ Block address = ⌊1200/16⌋ = 75
  ■ Block number = 75 modulo 64 = 11
■ Address layout:
  Tag: bits 31-10 (22 bits) | Index: bits 9-4 (6 bits) | Offset: bits 3-0 (4 bits)

Block Size Considerations
■ Larger blocks should reduce miss rate
  ■ Due to spatial locality
■ But in a fixed-sized cache
  ■ Larger blocks ⇒ fewer of them
    ■ More competition ⇒ increased miss rate
  ■ Larger blocks ⇒ pollution
■ Larger miss penalty
  ■ Can override the benefit of reduced miss rate
  ■ Early restart and critical-word-first can help

Cache Misses
■ On a cache hit, the CPU proceeds normally
■ On a cache miss
  ■ Stall the CPU pipeline
  ■ Fetch the block from the next level of the hierarchy
  ■ Instruction cache miss: restart instruction fetch
  ■ Data cache miss: complete the data access

Write-Through
■ On a data-write hit, could just update the block in cache
  ■ But then cache and memory would be inconsistent
■ Write through: also update memory
■ But this makes writes take longer
  ■ e.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles:
    Effective CPI = 1 + 0.1 × 100 = 11
■ Solution: write buffer
  ■ Holds data waiting to be written to memory
  ■ CPU continues immediately
    ■ Only stalls on a write if the write buffer is already full
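The slide's arithmetic in a one-line sketch; the figures (base CPI, store fraction, write latency) are the slide's, and the calculation assumes every store stalls for the full memory write, which is exactly the stall a write buffer hides unless it is full:

    #include <stdio.h>

    int main(void) {
        double base_cpi = 1.0, store_frac = 0.10, write_cycles = 100.0;
        /* No write buffer: every store pays the full memory latency. */
        double effective_cpi = base_cpi + store_frac * write_cycles;
        printf("Effective CPI without a write buffer: %.1f\n", effective_cpi); /* 11.0 */
        return 0;
    }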
Write-Back
■ Alternative: on a data-write hit, just update the block in cache
  ■ Keep track of whether each block is dirty
■ When a dirty block is replaced
  ■ Write it back to memory
  ■ Can use a write buffer to allow the replacing block to be read first

Write Allocation
■ What should happen on a write miss?
■ Alternatives for write-through
  ■ Allocate on miss: fetch the block
  ■ Write around: don't fetch the block
    ■ Since programs often write a whole block before reading it (e.g., initialization)
■ For write-back
  ■ Usually fetch the block

Example: Intrinsity FastMATH
■ Embedded MIPS processor
  ■ 12-stage pipeline
  ■ Instruction and data access on each cycle
■ Split cache: separate I-cache and D-cache
  ■ Each 16KB: 256 blocks × 16 words/block
  ■ D-cache: write-through or write-back
■ SPEC2000 miss rates
  ■ I-cache: 0.4%
  ■ D-cache: 11.4%
  ■ Weighted average: 3.2%

Example: Intrinsity FastMATH
[Figure: FastMATH cache datapath; the 32-bit address splits into an 18-bit tag, an 8-bit index (256 entries), and a 4-bit block offset; each 512-bit block supplies one of 16 words through a multiplexor]

Main Memory Supporting Caches
■ Use DRAMs for main memory
  ■ Fixed width (e.g., 1 word)
  ■ Connected by a fixed-width clocked bus
    ■ Bus clock is typically slower than the CPU clock
■ Example cache block read
  ■ 1 bus cycle for address transfer
  ■ 15 bus cycles per DRAM access
  ■ 1 bus cycle per data transfer
■ For a 4-word block and 1-word-wide DRAM
  ■ Miss penalty = 1 + 4 × 15 + 4 × 1 = 65 bus cycles
  ■ Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle
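The same formula covers the wider and interleaved organizations pictured on the next slide. A sketch under the slide's timing assumptions (1 address cycle, 15 cycles per DRAM access, 1 cycle per one-word bus transfer):

    #include <stdio.h>

    /* Bus-cycle cost of fetching one 4-word (16-byte) block under the
       slide's timing; the three variants match the organizations on
       the next slide. */
    int main(void) {
        const int addr = 1, dram = 15, xfer = 1, words = 4;
        int narrow     = addr + words * dram + words * xfer; /* accesses serialize */
        int wide       = addr + dram + xfer;                 /* one 4-word access  */
        int interleave = addr + dram + words * xfer;         /* banks overlap, but
                                                                the bus is 1 word  */
        printf("1-word-wide:        %2d cycles, %.2f B/cycle\n", narrow, 16.0 / narrow);
        printf("4-word-wide:        %2d cycles, %.2f B/cycle\n", wide, 16.0 / wide);
        printf("4-bank interleaved: %2d cycles, %.2f B/cycle\n", interleave, 16.0 / interleave);
        return 0;
    }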
Increasing Memory Bandwidth
[Figure: (a) one-word-wide memory organization with processor, cache, bus, and memory; (b) wider memory organization with a multiplexor between cache and processor; (c) interleaved memory organization with memory banks 0-3]
■ 4-word-wide memory
  ■ Miss penalty = 1 + 15 + 1 = 17 bus cycles
  ■ Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle
■ 4-bank interleaved memory
  ■ Miss penalty = 1 + 15 + 4 × 1 = 20 bus cycles
  ■ Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle

Advanced DRAM Organization
■ Bits in a DRAM are organized as a rectangular array
  ■ DRAM accesses an entire row
  ■ Burst mode: supply successive words from a row with reduced latency
■ Double data rate (DDR) DRAM
  ■ Transfer on rising and falling clock edges
■ Quad data rate (QDR) DRAM
  ■ Separate DDR inputs and outputs

DRAM Generations
Year  Capacity  $/GB
1980  64Kbit    $1500000
1983  256Kbit   $500000
1985  1Mbit     $200000
1989  4Mbit     $50000
1992  16Mbit    $15000
1996  64Mbit    $10000
1998  128Mbit   $4000
2000  256Mbit   $1000
2004  512Mbit   $250
2007  1Gbit     $50
[Chart: DRAM access time trend across the 1980-2007 generations]

Measuring Cache Performance
■ Components of CPU time
  ■ Program execution cycles: includes cache hit time
  ■ Memory stall cycles: mainly from cache misses
■ With simplifying assumptions:

  Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                      = (Instructions / Program) × (Misses / Instruction) × Miss penalty

Cache Performance Example
■ Given
  ■ I-cache miss rate = 2%
  ■ D-cache miss rate = 4%
  ■ Miss penalty = 100 cycles
  ■ Base CPI (ideal cache) = 2
  ■ Loads & stores are 36% of instructions
■ Miss cycles per instruction
  ■ I-cache: 0.02 × 100 = 2
  ■ D-cache: 0.36 × 0.04 × 100 = 1.44
■ Actual CPI = 2 + 2 + 1.44 = 5.44
  ■ The ideal CPU is 5.44/2 = 2.72 times faster

Average Access Time
■ Hit time is also important for performance
■ Average memory access time (AMAT)
  ■ AMAT = Hit time + Miss rate × Miss penalty
■ Example
  ■ CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
  ■ AMAT = 1 + 0.05 × 20 = 2ns = 2 cycles per access
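Both worked examples above reduce to a few multiplications; a sketch using only the slides' figures, nothing measured:

    #include <stdio.h>

    int main(void) {
        /* Cache performance example: CPI with I- and D-cache stalls. */
        double base_cpi = 2.0, penalty = 100.0;
        double i_miss = 0.02, d_miss = 0.04, ls_frac = 0.36;
        double cpi = base_cpi + i_miss * penalty + ls_frac * d_miss * penalty;
        printf("actual CPI = %.2f, ideal is %.2fx faster\n", cpi, cpi / base_cpi);

        /* AMAT example: 1-cycle hit, 5%% miss rate, 20-cycle penalty. */
        double amat = 1.0 + 0.05 * 20.0;
        printf("AMAT = %.0f cycles (%.0f ns at a 1 ns clock)\n", amat, amat);
        return 0;
    }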
Performance Summary
■ When CPU performance increases
  ■ The miss penalty becomes more significant
■ Decreasing base CPI
  ■ Greater proportion of time spent on memory stalls
■ Increasing clock rate
  ■ Memory stalls account for more CPU cycles
■ Can't neglect cache behavior when evaluating system performance

Associative Caches
■ Fully associative
  ■ Allow a given block to go in any cache entry
  ■ Requires all entries to be searched at once
  ■ Comparator per entry (expensive)
■ n-way set associative
  ■ Each set contains n entries
  ■ Block number determines the set: (Block number) modulo (#Sets in cache)
  ■ Search all entries in a given set at once
  ■ n comparators (less expensive)

Associative Cache Example
[Figure: a block's candidate locations in an 8-entry cache: direct mapped (one block), 2-way set associative (both entries of one of four sets), and fully associative (any entry); each placement searches the corresponding tags]

Spectrum of Associativity
■ For a cache with 8 entries:
[Figure: the same 8 entries organized as one-way set associative (direct mapped, 8 sets), two-way set associative (4 sets), four-way set associative (2 sets), and eight-way set associative (fully associative, 1 set)]

Associativity Example
■ Compare 4-block caches
  ■ Direct mapped, 2-way set associative, fully associative
  ■ Block access sequence: 0, 8, 0, 6, 8

■ Direct mapped:
Block addr  Index  Hit/miss  Contents after access (index 0..3)
0           0      miss      Mem[0] | - | - | -
8           0      miss      Mem[8] | - | - | -
0           0      miss      Mem[0] | - | - | -
6           2      miss      Mem[0] | - | Mem[6] | -
8           0      miss      Mem[8] | - | Mem[6] | -

Associativity Example
■ 2-way set associative:
Block addr  Set  Hit/miss  Set 0 contents after access
0           0    miss      Mem[0]
8           0    miss      Mem[0], Mem[8]
0           0    hit       Mem[0], Mem[8]
6           0    miss      Mem[0], Mem[6]   (LRU Mem[8] evicted)
8           0    miss      Mem[8], Mem[6]   (LRU Mem[0] evicted)
(Set 1 stays empty throughout.)

■ Fully associative:
Block addr  Hit/miss  Contents after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]

How Much Associativity
■ Increased associativity decreases miss rate
  ■ But with diminishing returns
■ Simulation of a system with 64KB D-cache, 16-word blocks, SPEC2000
  ■ 1-way: 10.3%
  ■ 2-way: 8.6%
  ■ 4-way: 8.3%
  ■ 8-way: 8.1%

Set Associative Cache Organization
[Figure: a 4-way set-associative cache with 256 sets; the address splits into a 22-bit tag, an 8-bit index, and a byte offset; four tag comparators feed a 4-to-1 multiplexor that selects the hit data]

Replacement Policy
■ Direct mapped: no choice
■ Set associative
  ■ Prefer a non-valid entry, if there is one
  ■ Otherwise, choose among the entries in the set
■ Least-recently used (LRU)
  ■ Choose the one unused for the longest time
    ■ Simple for 2-way, manageable for 4-way, too hard beyond that
■ Random
  ■ Gives approximately the same performance as LRU for high associativity
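A small simulation of the access sequence above (0, 8, 0, 6, 8) on a 4-block cache reproduces the tables' miss counts for all three organizations. This is a sketch written for this example, not a general cache simulator; tracking LRU with an age counter is an illustrative choice:

    #include <stdio.h>

    #define CACHE_BLOCKS 4

    /* Run the sequence through a CACHE_BLOCKS-entry cache with the given
       associativity; returns the number of misses. */
    static int run(int ways, const int *seq, int n) {
        int sets = CACHE_BLOCKS / ways;
        int block[CACHE_BLOCKS], age[CACHE_BLOCKS] = {0}, valid[CACHE_BLOCKS] = {0};
        int misses = 0;

        for (int i = 0; i < n; i++) {
            int set = seq[i] % sets, hit = -1;
            for (int w = 0; w < ways; w++) {
                int e = set * ways + w;
                if (valid[e] && block[e] == seq[i]) { hit = e; break; }
            }
            if (hit < 0) {                          /* miss: choose a victim */
                misses++;
                int victim = set * ways;
                for (int w = 0; w < ways; w++) {
                    int e = set * ways + w;
                    if (!valid[e]) { victim = e; break; }   /* empty way first */
                    if (age[e] < age[victim]) victim = e;   /* else evict LRU  */
                }
                valid[victim] = 1;
                block[victim] = seq[i];
                hit = victim;
            }
            age[hit] = i + 1;                       /* mark most recently used */
        }
        return misses;
    }

    int main(void) {
        const int seq[] = {0, 8, 0, 6, 8};
        int n = sizeof seq / sizeof seq[0];
        printf("direct mapped:     %d misses\n", run(1, seq, n));  /* 5 */
        printf("2-way LRU:         %d misses\n", run(2, seq, n));  /* 4 */
        printf("fully associative: %d misses\n", run(4, seq, n));  /* 3 */
        return 0;
    }

It prints 5, 4, and 3 misses, matching the three tables above.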
Multilevel Caches
■ Primary cache attached to the CPU
  ■ Small, but fast
■ Level-2 cache services misses from the primary cache
  ■ Larger, slower, but still faster than main memory
■ Main memory services L-2 cache misses
■ Some high-end systems include an L-3 cache

Multilevel Cache Example
■ Given
  ■ CPU base CPI = 1, clock rate = 4GHz
  ■ Miss rate/instruction = 2%
  ■ Main memory access time = 100ns
■ With just primary cache
  ■ Miss penalty = 100ns / 0.25ns = 400 cycles
  ■ Effective CPI = 1 + 0.02 × 400 = 9

Example (cont.)
■ Now add an L-2 cache
  ■ Access time = 5ns
  ■ Global miss rate to main memory = 0.5%
■ Primary miss with L-2 hit
  ■ Penalty = 5ns / 0.25ns = 20 cycles
■ Primary miss with L-2 miss
  ■ Extra penalty = 400 cycles
■ CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
■ Performance ratio = 9/3.4 = 2.6

Multilevel Cache Considerations
■ Primary cache
  ■ Focus on minimal hit time
■ L-2 cache
  ■ Focus on low miss rate to avoid main memory access
  ■ Hit time has less overall impact
■ Results
  ■ L-1 cache usually smaller than a single-level cache would be
  ■ L-1 block size smaller than L-2 block size

Interactions with Advanced CPUs
■ Out-of-order CPUs can execute instructions during a cache miss
  ■ Pending store stays in the load/store unit
  ■ Dependent instructions wait in reservation stations
    ■ Independent instructions continue
■ Effect of a miss depends on program data flow
  ■ Much harder to analyse
  ■ Use system simulation

Interactions with Software
■ Misses depend on memory access patterns
  ■ Algorithm behavior
  ■ Compiler optimization for memory access
[Figure: instructions, clock cycles, and cache misses per item for Radix Sort as the input grows from 4 to 4096 K items to sort]

Virtual Memory
■ Use main memory as a "cache" for secondary (disk) storage
  ■ Managed jointly by CPU hardware and the operating system (OS)
■ Programs share main memory
  ■ Each gets a private virtual address space holding its frequently used code and data
  ■ Protected from other programs
■ CPU and OS translate virtual addresses to physical addresses
  ■ VM "block" is called a page
  ■ VM translation "miss" is called a page fault

Address Translation
■ Fixed-size pages (e.g., 4KB)
[Figure: a 32-bit virtual address (20-bit virtual page number + 12-bit page offset) translates to a 30-bit physical address (18-bit physical page number + the same 12-bit page offset); virtual pages may instead map to disk addresses]

Page Fault Penalty
■ On a page fault, the page must be fetched from disk
  ■ Takes millions of clock cycles
  ■ Handled by OS code
■ Try to minimize the page fault rate
  ■ Fully associative placement
  ■ Smart replacement algorithms

Page Tables
■ Stores placement information
  ■ Array of page table entries (PTEs), indexed by virtual page number
  ■ Page table register in the CPU points to the page table in physical memory
■ If the page is present in memory
  ■ PTE stores the physical page number
  ■ Plus other status bits (referenced, dirty, ...)
■ If the page is not present
  ■ PTE can refer to a location in swap space on disk
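A minimal sketch of the lookup a page table supports, assuming 4KB pages and a single-level table; the PTE layout (valid bit in the top bit, physical page number in the low bits) is a made-up format for illustration, not any real architecture's:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12u                 /* 4 KB pages */
    #define PTE_VALID 0x80000000u         /* assumed PTE format */

    /* One-level table indexed by virtual page number; returns the
       physical address, or reports a fault if the valid bit is clear. */
    uint32_t translate(const uint32_t *page_table, uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
        uint32_t pte    = page_table[vpn];
        if (!(pte & PTE_VALID)) {
            /* Page fault: the OS would bring the page in from swap
               space, update the PTE, and restart the access. */
            fprintf(stderr, "page fault at 0x%08x\n", vaddr);
            return 0;
        }
        return ((pte & ~PTE_VALID) << PAGE_BITS) | offset;
    }

    int main(void) {
        static uint32_t table[16];
        table[3] = PTE_VALID | 7;          /* virtual page 3 -> physical page 7 */
        printf("0x%08x\n", translate(table, 0x00003ABC));  /* prints 0x00007ABC */
        printf("0x%08x\n", translate(table, 0x00004000));  /* faults: invalid PTE */
        return 0;
    }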
Translation Using a Page Table
[Figure: the page table register locates the table in memory; the 20-bit virtual page number indexes it; if the valid bit is 0 the page is not present in memory; otherwise the 18-bit physical page number is concatenated with the 12-bit page offset to form the physical address]

Mapping Pages to Storage
[Figure: each page table entry points either to a page in physical memory or to a location on disk]

Replacement and Writes
■ To reduce the page fault rate, prefer least-recently used (LRU) replacement
  ■ Reference bit (aka use bit) in the PTE set to 1 on access to the page
  ■ Periodically cleared to 0 by the OS
  ■ A page with reference bit = 0 has not been used recently
■ Disk writes take millions of cycles
  ■ Write a block at once, not individual locations
  ■ Write-through is impractical
  ■ Use write-back
  ■ Dirty bit in the PTE set when the page is written

Fast Translation Using a TLB
■ Address translation would appear to require extra memory references
  ■ One to access the PTE
  ■ Then the actual memory access
■ But access to page tables has good locality
  ■ So use a fast cache of PTEs within the CPU
  ■ Called a Translation Look-aside Buffer (TLB)
  ■ Typical: 16-512 PTEs, 0.5-1 cycle for a hit, 10-100 cycles for a miss, 0.01%-1% miss rate
  ■ Misses can be handled by hardware or software

Fast Translation Using a TLB
[Figure: each TLB entry holds a virtual page number tag plus the PTE's valid, dirty, and reference bits and the physical page number; on a TLB miss the entry is loaded from the page table in memory]

TLB Misses
■ If the page is in memory
  ■ Load the PTE from memory and retry
  ■ Could be handled in hardware
    ■ Can get complex for more complicated page table structures
  ■ Or in software
    ■ Raise a special exception, with an optimized handler
■ If the page is not in memory (page fault)
  ■ OS handles fetching the page and updating the page table
  ■ Then restart the faulting instruction

TLB Miss Handler
■ A TLB miss indicates either
  ■ Page present, but PTE not in the TLB
  ■ Page not present
■ Must recognize the TLB miss before the destination register is overwritten
  ■ Raise an exception
■ Handler copies the PTE from memory to the TLB
  ■ Then restarts the instruction
  ■ If the page is not present, a page fault will occur

Page Fault Handler
■ Use the faulting virtual address to find the PTE
■ Locate the page on disk
■ Choose a page to replace
  ■ If dirty, write it to disk first
■ Read the page into memory and update the page table
■ Make the process runnable again
  ■ Restart from the faulting instruction

TLB and Cache Interaction
[Figure: the 20-bit virtual page number is translated by the TLB to a physical page number, which is concatenated with the 12-bit page offset to form the physical address; the physical address tag, cache index, and block/byte offsets then drive the cache lookup]
■ If the cache tag uses the physical address
  ■ Need to translate before the cache lookup
■ Alternative: use a virtual address tag
  ■ Complications due to aliasing
    ■ Different virtual addresses for a shared physical address
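A sketch of a tiny fully associative TLB placed in front of a page-table walk like the one sketched earlier; the entry count, the round-robin fill policy, and the identity-mapped stand-in page table are all illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 4
    #define PAGE_BITS 12u

    typedef struct { uint32_t vpn, ppn; int valid; } TlbEntry;

    static TlbEntry tlb[TLB_ENTRIES];
    static int next_fill;                 /* round-robin fill pointer   */
    static int walks;                     /* counts slow in-memory walks */

    /* Stand-in for the in-memory page table; an identity mapping keeps
       the sketch self-contained. */
    static uint32_t walk_page_table(uint32_t vpn) {
        walks++;
        return vpn;
    }

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
        for (int i = 0; i < TLB_ENTRIES; i++)       /* compare all entries */
            if (tlb[i].valid && tlb[i].vpn == vpn)
                return (tlb[i].ppn << PAGE_BITS) | offset;   /* TLB hit */
        /* TLB miss: fetch the PTE and cache it in the TLB. */
        uint32_t ppn = walk_page_table(vpn);
        tlb[next_fill] = (TlbEntry){ vpn, ppn, 1 };
        next_fill = (next_fill + 1) % TLB_ENTRIES;
        return (ppn << PAGE_BITS) | offset;
    }

    int main(void) {
        translate(0x00001000);    /* miss: walks the page table */
        translate(0x00001ABC);    /* hit: same page, no walk    */
        printf("page-table walks: %d\n", walks);    /* prints 1 */
        return 0;
    }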
Memory Protection
■ Different tasks can share parts of their virtual address spaces
  ■ But need to protect against errant access
  ■ Requires OS assistance
■ Hardware support for OS protection
  ■ Privileged supervisor mode (aka kernel mode)
  ■ Privileged instructions
  ■ Page tables and other state information only accessible in supervisor mode
  ■ System call exception (e.g., syscall in MIPS)

The Memory Hierarchy
■ The BIG Picture: common principles apply at all levels of the memory hierarchy
  ■ Based on notions of caching
■ At each level in the hierarchy
  ■ Block placement
  ■ Finding a block
  ■ Replacement on a miss
  ■ Write policy

Block Placement
■ Determined by associativity
  ■ Direct mapped (1-way associative): one choice for placement
  ■ n-way set associative: n choices within a set
  ■ Fully associative: any location
■ Higher associativity reduces miss rate
  ■ Increases complexity, cost, and access time

Finding a Block
Associativity          Location method                              Tag comparisons
Direct mapped          Index                                        1
n-way set associative  Set index, then search entries in the set    n
Fully associative      Search all entries                           #entries
                       Full lookup table                            0

■ Hardware caches
  ■ Reduce comparisons to reduce cost
■ Virtual memory
  ■ Full table lookup makes full associativity feasible
  ■ Benefit in reduced miss rate

Replacement
■ Choice of entry to replace on a miss
  ■ Least recently used (LRU)
    ■ Complex and costly hardware for high associativity
  ■ Random
    ■ Close to LRU, easier to implement
■ Virtual memory
  ■ LRU approximation with hardware support

Write Policy
■ Write-through
  ■ Update both upper and lower levels
  ■ Simplifies replacement, but may require a write buffer
■ Write-back
  ■ Update the upper level only
  ■ Update the lower level when the block is replaced
  ■ Need to keep more state
■ Virtual memory
  ■ Only write-back is feasible, given disk write latency

Sources of Misses
■ Compulsory misses (aka cold start misses)
  ■ First access to a block
■ Capacity misses
  ■ Due to finite cache size
  ■ A replaced block is later accessed again
■ Conflict misses (aka collision misses)
  ■ In a non-fully associative cache
  ■ Due to competition for entries in a set
  ■ Would not occur in a fully associative cache of the same total size
(See the classification sketch after the next slide.)

Cache Design Trade-offs
Design change           Effect on miss rate         Negative performance effect
Increase cache size     Decrease capacity misses    May increase access time
Increase associativity  Decrease conflict misses    May increase access time
Increase block size     Decrease compulsory misses  Increases miss penalty. For very
                                                    large blocks, may increase miss
                                                    rate due to pollution.
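The three miss types can be told apart by replaying a trace through three caches at once: an "infinite" cache (first touches are compulsory), a fully associative LRU cache of the same total size (its misses are capacity misses), and the actual cache (its remaining misses are conflicts). A sketch with a made-up trace and a 4-block direct-mapped cache:

    #include <stdio.h>

    #define BLOCKS 4                 /* capacity of both finite caches */
    #define MAX_BLOCK 64

    int main(void) {
        /* Illustrative trace; every block maps to direct-mapped index 0. */
        const int trace[] = {0, 4, 0, 8, 12, 16, 4};
        int seen[MAX_BLOCK] = {0};                  /* the "infinite cache"  */
        int dm_block[BLOCKS], dm_valid[BLOCKS] = {0};
        int fa_block[BLOCKS], fa_age[BLOCKS] = {0}, fa_valid[BLOCKS] = {0};

        for (int i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++) {
            int b = trace[i];

            /* Direct-mapped lookup: the cache whose misses we classify. */
            int idx = b % BLOCKS;
            int dm_hit = dm_valid[idx] && dm_block[idx] == b;
            if (!dm_hit) { dm_valid[idx] = 1; dm_block[idx] = b; }

            /* Fully associative LRU cache of the same total size. */
            int e = -1;
            for (int w = 0; w < BLOCKS; w++)
                if (fa_valid[w] && fa_block[w] == b) { e = w; break; }
            int fa_hit = e >= 0;
            if (!fa_hit) {
                e = 0;
                for (int w = 0; w < BLOCKS; w++) {
                    if (!fa_valid[w]) { e = w; break; }     /* empty entry */
                    if (fa_age[w] < fa_age[e]) e = w;       /* else LRU    */
                }
                fa_valid[e] = 1; fa_block[e] = b;
            }
            fa_age[e] = i + 1;

            const char *kind = "hit";
            if (!dm_hit)
                kind = !seen[b] ? "compulsory miss"
                     : !fa_hit  ? "capacity miss" : "conflict miss";
            seen[b] = 1;
            printf("block %2d: %s\n", b, kind);
        }
        return 0;
    }

On this trace the final access to block 4 is a capacity miss (it was evicted even from the fully associative cache), while the revisits of block 0 are conflict misses.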
Virtual Machines
■ Host computer emulates guest operating system and machine resources
  ■ Improved isolation of multiple guests
  ■ Avoids security and reliability problems
  ■ Aids sharing of resources
■ Virtualization has some performance impact
  ■ Feasible with modern high-performance computers
■ Examples
  ■ IBM VM/370 (1970s technology!)
  ■ VMWare
  ■ Microsoft Virtual PC

Virtual Machine Monitor
■ Maps virtual resources to physical resources
  ■ Memory, I/O devices, CPUs
■ Guest code runs on the native machine in user mode
  ■ Traps to the VMM on privileged instructions and access to protected resources
■ Guest OS may be different from the host OS
■ VMM handles real I/O devices
  ■ Emulates generic virtual I/O devices for the guest

Example: Timer Virtualization
■ In a native machine, on a timer interrupt
  ■ OS suspends the current process, handles the interrupt, selects and resumes the next process
■ With a Virtual Machine Monitor
  ■ VMM suspends the current VM, handles the interrupt, selects and resumes the next VM
■ If a VM requires timer interrupts
  ■ VMM emulates a virtual timer
  ■ Emulates an interrupt for the VM when the physical timer interrupt occurs

Instruction Set Support
■ User and System modes
■ Privileged instructions only available in system mode
  ■ Trap to system if executed in user mode
■ All physical resources only accessible using privileged instructions
  ■ Including page tables, interrupt controls, I/O registers
■ Renaissance of virtualization support
  ■ Current ISAs (e.g., x86) adapting

Cache Control
■ Example cache characteristics
  ■ Direct-mapped, write-back, write allocate
  ■ Block size: 4 words (16 bytes)
  ■ Cache size: 16 KB (1024 blocks)
  ■ 32-bit byte addresses
  ■ Valid bit and dirty bit per block
  ■ Blocking cache: CPU waits until the access is complete
■ Address layout:
  Tag: bits 31-14 (18 bits) | Index: bits 13-4 (10 bits) | Offset: bits 3-0 (4 bits)

Interface Signals
[Figure: CPU-to-cache and cache-to-memory interfaces; each side carries Read/Write, Valid, a 32-bit Address, Write Data (32 bits to the cache, 128 bits to memory), Read Data (32 and 128 bits), and Ready; memory accesses take multiple cycles]

Finite State Machines
■ Use an FSM to sequence control steps
■ Set of states, transition on each clock edge
  ■ State values are binary encoded
  ■ Current state stored in a register
  ■ Next state = fn(current state, current inputs)
■ Control output signals = fo(current state)
[Figure: combinational control logic takes inputs from the cache datapath and the state register and produces the datapath control outputs and the next state]

Cache Controller FSM
[Figure: four-state controller: Idle, Compare Tag, Write-Back, Allocate; sketched in code below]
■ Could partition into separate states to reduce clock cycle time
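A sketch of the four-state controller's next-state function in C, with the datapath inputs reduced to a few illustrative booleans (the signal names are assumptions, not the book's exact signals):

    #include <stdio.h>

    typedef enum { IDLE, COMPARE_TAG, WRITE_BACK, ALLOCATE } State;

    typedef struct {
        int cpu_request;   /* CPU asserts a read or write      */
        int hit;           /* valid bit set and tag matches    */
        int dirty;         /* victim block has been written to */
        int mem_ready;     /* memory finished its transfer     */
    } Inputs;

    State next_state(State s, Inputs in) {
        switch (s) {
        case IDLE:        return in.cpu_request ? COMPARE_TAG : IDLE;
        case COMPARE_TAG: /* hit: done; miss: write back if dirty, else allocate */
            if (in.hit)   return IDLE;
            return in.dirty ? WRITE_BACK : ALLOCATE;
        case WRITE_BACK:  return in.mem_ready ? ALLOCATE : WRITE_BACK;
        case ALLOCATE:    return in.mem_ready ? COMPARE_TAG : ALLOCATE;
        }
        return IDLE;
    }

    int main(void) {
        /* Walk one dirty miss: request -> compare -> write back ->
           allocate -> compare (now a hit) -> idle. */
        static const char *name[] = {"Idle", "CompareTag", "WriteBack", "Allocate"};
        Inputs steps[] = {
            {1, 0, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}, {0, 0, 0, 1}, {0, 1, 0, 0}
        };
        State s = IDLE;
        for (int i = 0; i < 5; i++) {
            s = next_state(s, steps[i]);
            printf("-> %s\n", name[s]);
        }
        return 0;
    }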
Cache Coherence Problem
■ Suppose two CPU cores share a physical address space
  ■ Write-through caches

Time step  Event                CPU A's cache  CPU B's cache  Memory
0                                                             0
1          CPU A reads X        0                             0
2          CPU B reads X        0              0              0
3          CPU A writes 1 to X  1              0              1

Coherence Defined
■ Informally: reads return the most recently written value
■ Formally:
  ■ P writes X; P reads X (no intervening writes) ⇒ the read returns the written value
  ■ P1 writes X; P2 reads X (sufficiently later) ⇒ the read returns the written value
    ■ c.f. CPU B reading X after step 3 in the example
  ■ P1 writes X, P2 writes X ⇒ all processors see the writes in the same order
    ■ End up with the same final value for X

Cache Coherence Protocols
■ Operations performed by caches in multiprocessors to ensure coherence
  ■ Migration of data to local caches
    ■ Reduces bandwidth for shared memory
  ■ Replication of read-shared data
    ■ Reduces contention for access
■ Snooping protocols
  ■ Each cache monitors bus reads/writes
■ Directory-based protocols
  ■ Caches and memory record the sharing status of blocks in a directory

Invalidating Snooping Protocols
■ A cache gets exclusive access to a block when it is to be written
  ■ Broadcasts an invalidate message on the bus
  ■ A subsequent read in another cache misses
    ■ The owning cache supplies the updated value

CPU activity         Bus activity      CPU A's cache  CPU B's cache  Memory
                                                                     0
CPU A reads X        Cache miss for X  0                             0
CPU B reads X        Cache miss for X  0              0              0
CPU A writes 1 to X  Invalidate for X  1                             0
CPU B reads X        Cache miss for X  1              1              1

Memory Consistency
■ When are writes seen by other processors?
  ■ "Seen" means a read returns the written value
  ■ Can't happen instantaneously
■ Assumptions
  ■ A write completes only when all processors have seen it
  ■ A processor does not reorder writes with other accesses
■ Consequence
  ■ P writes X then writes Y ⇒ all processors that see the new Y also see the new X
  ■ Processors can reorder reads, but not writes
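The consequence above is exactly the guarantee that release/acquire ordering gives in C11; a sketch (the variable names are made up for the example) in which a reader that sees the new Y is guaranteed to also see the new X:

    #include <stdio.h>
    #include <pthread.h>
    #include <stdatomic.h>

    atomic_int X, Y;                 /* both start at 0 */

    void *writer(void *arg) {
        (void)arg;
        atomic_store_explicit(&X, 1, memory_order_relaxed);   /* P writes X     */
        atomic_store_explicit(&Y, 1, memory_order_release);   /* then writes Y  */
        return NULL;
    }

    void *reader(void *arg) {
        (void)arg;
        while (atomic_load_explicit(&Y, memory_order_acquire) == 0)
            ;                                                 /* wait for new Y */
        /* Having seen the new Y, this load must see the new X: prints 1. */
        printf("X = %d\n", atomic_load_explicit(&X, memory_order_relaxed));
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }

(Compile with -pthread.)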
Multilevel On-Chip Caches
■ Intel Nehalem 4-core processor
[Figure: die photo; per core: 32KB L1 I-cache, 32KB L1 D-cache, 512KB L2 cache; two-channel (128-bit) memory interface]

2-Level TLB Organization
■ Intel Nehalem
  ■ Virtual address: 48 bits; physical address: 44 bits; page sizes: 4KB, 2/4MB
  ■ L1 TLB (per core): I-TLB 128 entries for small pages plus 7 per thread (2×) for large pages; D-TLB 64 entries for small pages, 32 for large pages; both 4-way, LRU replacement
  ■ L2 TLB (per core): single unified TLB, 512 entries, 4-way, LRU replacement
  ■ TLB misses handled in hardware
■ AMD Opteron X4
  ■ Virtual address: 48 bits; physical address: 48 bits; page sizes: 4KB, 2/4MB
  ■ L1 TLB (per core): I-TLB 48 entries; D-TLB 48 entries; both fully associative, LRU replacement
  ■ L2 TLB (per core): I-TLB 512 entries; D-TLB 512 entries; both 4-way, round-robin LRU
  ■ TLB misses handled in hardware

3-Level Cache Organization
■ Intel Nehalem
  ■ L1 caches (per core): I-cache 32KB, 64-byte blocks, 4-way, approx LRU replacement, hit time n/a; D-cache 32KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
  ■ L2 unified cache (per core): 256KB, 64-byte blocks, 8-way, approx LRU replacement, write-back/allocate, hit time n/a
  ■ L3 unified cache (shared): 8MB, 64-byte blocks, 16-way, replacement n/a, write-back/allocate, hit time n/a
■ AMD Opteron X4
  ■ L1 caches (per core): I-cache 32KB, 64-byte blocks, 2-way, LRU replacement, hit time 3 cycles; D-cache 32KB, 64-byte blocks, 2-way, LRU replacement, write-back/allocate, hit time 9 cycles
  ■ L2 unified cache (per core): 512KB, 64-byte blocks, 16-way, approx LRU replacement, write-back/allocate, hit time n/a
  ■ L3 unified cache (shared): 2MB, 64-byte blocks, 32-way, replace the block shared by the fewest cores, write-back/allocate, hit time 32 cycles
■ n/a: data not available

Miss Penalty Reduction
■ Return the requested word first
  ■ Then back-fill the rest of the block
■ Non-blocking miss processing
  ■ Hit under miss: allow hits to proceed
  ■ Miss under miss: allow multiple outstanding misses
■ Hardware prefetch: instructions and data
■ Opteron X4: bank-interleaved L1 D-cache
  ■ Two concurrent accesses per cycle

Pitfalls
■ Byte vs. word addressing
  ■ Example: 32-byte direct-mapped cache, 4-byte blocks
    ■ Byte 36 maps to block 1
    ■ Word 36 maps to block 4
■ Ignoring memory system effects when writing or generating code
  ■ Example: iterating over rows vs. columns of arrays (see the traversal sketch at the end of this section)
  ■ Large strides result in poor locality

Pitfalls
■ In a multiprocessor with a shared L2 or L3 cache
  ■ Less associativity than cores results in conflict misses
  ■ More cores ⇒ need to increase associativity
■ Using AMAT to evaluate performance of out-of-order processors
  ■ Ignores the effect of non-blocked accesses
  ■ Instead, evaluate performance by simulation

Pitfalls
■ Extending the address range using segments
  ■ E.g., Intel 80286
  ■ But a segment is not always big enough
  ■ Makes address arithmetic complicated
■ Implementing a VMM on an ISA not designed for virtualization
  ■ E.g., non-privileged instructions accessing hardware resources
  ■ Either extend the ISA, or require the guest OS not to use the problematic instructions

Concluding Remarks
■ Fast memories are small, large memories are slow
  ■ We really want fast, large memories ☹
  ■ Caching gives this illusion ☺
■ Principle of locality
  ■ Programs use a small part of their memory space frequently
■ Memory hierarchy
  ■ L1 cache ↔ L2 cache ↔ ... ↔ DRAM memory ↔ disk
■ Memory system design is critical for multiprocessors

Exercises
■ Answer the following exercises, and send your answers as a PDF attachment to the email address listed below
  ■ xamiri@fi.muni.cz
  ■ Leave the body of the email blank
  ■ Deadline is May 19th
■ Exercises: 5.3.2(b), 5.4.1(b), 5.4.2(b), 5.7.3(a), 5.8.1(b), 5.8.4(b), 5.8.5(b), 5.10.1(a), 5.11.1(a)
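As a closing illustration of the row-vs-column pitfall above, this sketch times both traversal orders of the same array; the matrix size is an arbitrary choice for the demonstration, and the exact timings will vary by machine:

    #include <stdio.h>
    #include <time.h>

    #define N 2048                       /* arbitrary size for the demo */
    static float a[N][N];

    int main(void) {
        double sum;
        clock_t t;

        t = clock(); sum = 0.0;
        for (int i = 0; i < N; i++)      /* row order: unit stride, good locality */
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        printf("row order:    %.3fs (sum=%g)\n",
               (double)(clock() - t) / CLOCKS_PER_SEC, sum);

        t = clock(); sum = 0.0;
        for (int j = 0; j < N; j++)      /* column order: stride of N*4 bytes */
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        printf("column order: %.3fs (sum=%g)\n",
               (double)(clock() - t) / CLOCKS_PER_SEC, sum);
        return 0;
    }

The two loops compute the same sum, but the column-order loop touches a new cache block on almost every access, so it typically runs several times slower.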