Tags: caching, optimization, cpu-architecture, scientific-computing, supercomputers

Is data locality still relevant with The Machine and memristors?


Preliminary remark: I do not know whether this is the best Stack Exchange site for this question. If not, I apologize; please feel free to move it to the correct one.


Recently, HP has been talking about a research project called The Machine, based on memristors and optical communications. The goal is not to discuss whether this project will become real in 4, 10, or 20 years, but to discuss what such a computer could imply for the design of computationally intensive software.

Today, we are aiming at exascale supercomputers. In this context, it is often considered that code optimization should focus on:

  • Hybrid parallelization (MPI + threading), as sketched after this list

  • Vectorization (SIMD)

  • Data locality (computing is free compared to the cost of data transfer)
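
To make the first two priorities concrete, here is a minimal, hypothetical sketch (not taken from the original question) of a hybrid MPI + OpenMP dot product with a SIMD hint; the array sizes and variable names are illustrative assumptions only.

```c
/* Hypothetical sketch (not from the original post): hybrid MPI + OpenMP
 * dot product with a SIMD hint, illustrating the first two bullet points.
 * Compile e.g. with: mpicc -fopenmp -O3 hybrid.c -o hybrid
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long n_local = 1 << 20;              /* elements per rank (illustrative) */
    double *a = malloc(n_local * sizeof *a);
    double *b = malloc(n_local * sizeof *b);
    for (long i = 0; i < n_local; ++i) { a[i] = 1.0; b[i] = 2.0; }

    double local_sum = 0.0;
    /* Threads across the cores of one node, SIMD lanes within each thread. */
    #pragma omp parallel for simd reduction(+:local_sum)
    for (long i = 0; i < n_local; ++i)
        local_sum += a[i] * b[i];

    /* MPI handles the distributed-memory (inter-node) level. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product across %d ranks = %f\n", nranks, global_sum);

    free(a);
    free(b);
    MPI_Finalize();
    return 0;
}
```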

My question is: if an architecture like the one presented by HP becomes reality, would it change these priorities, and in particular the third one? (That is, could data transfer become free compared to computation time?)


Solution

  • Memristors will be used as a replacement for SRAM cells. Even though they might increase memory density per unit area and bring improvements in power efficiency, I do not see them changing the concept of data locality, as that is an abstract concept. Yes, they will lead to an increase in storage/performance capabilities at all levels of the memory hierarchy, but you would still have your data blocks separated by a certain distance. Unless you have one magical memory block attached to your core with zero-cycle latency and infinite capacity, data locality will always remain an optimization challenge.
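
To illustrate the point, here is a hypothetical sketch (not part of the original answer) of the classic locality optimization, loop blocking for matrix multiplication; the matrix size N and tile size BLOCK are made-up placeholders that would have to be tuned for a real machine.

```c
/* Hypothetical illustration (not from the original answer): naive vs. blocked
 * matrix multiplication. The blocked version reuses data while it is still
 * close to the core instead of streaming it from far away repeatedly.
 */
#include <stddef.h>

#define N     1024   /* matrix dimension (placeholder) */
#define BLOCK   64   /* tile size, chosen to fit in a near memory level (placeholder) */

/* Naive triple loop: walks B column-wise, so each element of B is fetched
 * from a distant memory level many times. */
void matmul_naive(double A[N][N], double B[N][N], double C[N][N])
{
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j)
            for (size_t k = 0; k < N; ++k)
                C[i][j] += A[i][k] * B[k][j];
}

/* Blocked (tiled) version: the inner loops touch only BLOCK x BLOCK tiles,
 * so the working set stays resident in the nearest memory level. */
void matmul_blocked(double A[N][N], double B[N][N], double C[N][N])
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t kk = 0; kk < N; kk += BLOCK)
            for (size_t jj = 0; jj < N; jj += BLOCK)
                for (size_t i = ii; i < ii + BLOCK; ++i)
                    for (size_t k = kk; k < kk + BLOCK; ++k)
                        for (size_t j = jj; j < jj + BLOCK; ++j)
                            C[i][j] += A[i][k] * B[k][j];
}
```

As long as some memory is farther from the core than other memory, the blocked version wins simply because it reuses data while it is still nearby; denser or faster memory changes the constants, not the need for the transformation.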