I am trying to implement pipelined cache access as an optimization technique to increase the bandwidth of my I-cache, which is an L1 cache. I need to do this in Verilog. The cache is 64 KB, two-way set associative, with a block size of 4 words.
I am still not clear on how pipelined cache access works. A theoretical explanation, or a link to a good reference, would be really helpful; I have already searched the web and could not find a good read. What are the 2 stages in a pipelined cache access, and how does it improve bandwidth?
You can check the following link https://courses.cs.washington.edu/courses/csep548/06au/lectures/cacheAdv.pdf
Search for "Pipelined Cache" and you should find the relevant material. A few notes:
The basic idea behind using a pipelined cache is to increase throughput. The 2-stage pipeline covers the following tasks:
- indexing the cache (reading the tag and data arrays)
- tag check and hit/miss logic
- data transfer back to the CPU
Depending on your critical path, you can decide which pipeline stage does which task.
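To make the two stages concrete, here is a minimal Verilog sketch of the read path only, under these assumptions: 32-bit addresses and words, synchronous-read tag/data arrays, and no miss/refill or write handling; the module and signal names are made up for illustration. With your 64 KB, two-way set associative cache and 4-word (16-byte) blocks, that works out to 2048 sets, an 11-bit index, a 4-bit block offset, and a 17-bit tag. Stage 1 indexes the tag and data arrays; stage 2 does the tag compare, way select, and word select, and returns the word to the CPU.

```verilog
// Minimal 2-stage pipelined I-cache read path (sketch only: no refill,
// write, or miss handling; arrays start uninitialized).
// Geometry: 64 KB, 2-way, 4-word (16 B) blocks -> 2048 sets,
// 11-bit index, 4-bit block offset, 17-bit tag (32-bit addresses assumed).
module icache_pipelined #(
    parameter ADDR_W = 32,
    parameter WORD_W = 32,
    parameter WAYS   = 2,
    parameter SETS   = 2048,                    // 64 KB / (2 ways * 16 B)
    parameter IDX_W  = 11,                      // log2(SETS)
    parameter OFF_W  = 4,                       // 16-byte block offset
    parameter TAG_W  = ADDR_W - IDX_W - OFF_W   // 17 tag bits
)(
    input  wire              clk,
    input  wire              rst_n,
    input  wire              req_valid,         // CPU fetch request
    input  wire [ADDR_W-1:0] req_addr,
    output wire              rsp_valid,         // one response per cycle
    output wire              rsp_hit,
    output wire [WORD_W-1:0] rsp_data
);
    // Tag, valid and data storage, one array per way.
    // Modeled as synchronous-read RAMs: the index applied in stage 1
    // produces the array contents in the stage-2 registers below.
    reg [TAG_W-1:0]    tag_ram  [0:WAYS-1][0:SETS-1];
    reg                vld_ram  [0:WAYS-1][0:SETS-1];
    reg [4*WORD_W-1:0] data_ram [0:WAYS-1][0:SETS-1];

    wire [IDX_W-1:0] s1_index = req_addr[OFF_W +: IDX_W];

    // ---------------- Stage 1: index the arrays ----------------
    reg                s2_valid;
    reg [ADDR_W-1:0]   s2_addr;
    reg [TAG_W-1:0]    s2_tag   [0:WAYS-1];
    reg                s2_vld   [0:WAYS-1];
    reg [4*WORD_W-1:0] s2_block [0:WAYS-1];

    integer w;
    always @(posedge clk) begin
        if (!rst_n) begin
            s2_valid <= 1'b0;
        end else begin
            s2_valid <= req_valid;
            s2_addr  <= req_addr;
            for (w = 0; w < WAYS; w = w + 1) begin
                s2_tag[w]   <= tag_ram[w][s1_index];
                s2_vld[w]   <= vld_ram[w][s1_index];
                s2_block[w] <= data_ram[w][s1_index];
            end
        end
    end

    // ------- Stage 2: tag compare, way select, word select -------
    wire [TAG_W-1:0] s2_req_tag  = s2_addr[ADDR_W-1 -: TAG_W];
    wire [1:0]       s2_word_sel = s2_addr[OFF_W-1:2];  // word within block

    reg              hit;
    reg [WORD_W-1:0] word;
    integer k;
    always @* begin
        hit  = 1'b0;
        word = {WORD_W{1'b0}};
        for (k = 0; k < WAYS; k = k + 1) begin
            if (s2_vld[k] && (s2_tag[k] == s2_req_tag)) begin
                hit  = 1'b1;
                word = s2_block[k][s2_word_sel*WORD_W +: WORD_W];
            end
        end
    end

    assign rsp_valid = s2_valid;
    assign rsp_hit   = s2_valid & hit;
    assign rsp_data  = word;
endmodule
```

The bandwidth gain comes from overlap: while stage 2 is comparing tags for one fetch, stage 1 is already indexing the arrays for the next one, so the cache accepts one request per cycle even though each hit takes two. The array read is usually the longest path, so it gets its own stage here; if the compare/select logic limits your clock instead, you can move the word select into stage 1 or register the outputs, which is exactly the "which stage does what" trade-off mentioned above.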