Consider a system whose cycles per instruction (CPI) is 1.0 when all memory accesses hit in the cache. The only data accesses are loads and stores, and these make up 50% of the total instructions. If the miss penalty is 30 clock cycles and the miss rate is 5%, how much faster would the computer be if all instructions were cache hits?
Can someone please explain this?
Just change the values to get your answer.
Ideal CPI where the cache always hits = 1.0
Next, calculate the CPI with the real-world cache:

(Ideal CPI) + (memory accesses per instruction)*(miss rate)*(miss penalty)

Every instruction needs one memory access for the fetch, and 50% of instructions add a data access (load or store), so memory accesses per instruction = 1 + 0.5 = 1.5:

$= 1 + (1 + 0.5) \times 0.05 \times 30 = 3.25$

So the all-hits machine is $3.25 / 1.0 = 3.25\times$ faster.
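If it helps to see the arithmetic spelled out, here is a small sketch of the same calculation (variable names are my own, not from the textbook):

```python
# CPI with cache stalls:
#   CPI_stall = CPI_ideal + (memory accesses per instruction) * miss_rate * miss_penalty
ideal_cpi = 1.0
accesses_per_instr = 1 + 0.5   # 1 instruction fetch + 0.5 data accesses (loads/stores)
miss_rate = 0.05
miss_penalty = 30              # clock cycles

real_cpi = ideal_cpi + accesses_per_instr * miss_rate * miss_penalty
speedup = real_cpi / ideal_cpi  # all-hits machine vs. real machine

print(f"CPI with misses: {real_cpi:.2f}")   # 3.25
print(f"Speedup if all hits: {speedup:.2f}x")  # 3.25x
```

Since the ideal CPI is 1.0, the speedup is numerically the same as the stalled CPI; that coincidence goes away if the base CPI differs from 1.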
Note: this is a question on the concept of a unified cache from Hennessy and Patterson; the book has the same problem with different numbers.