Tags: sdk, cuda, gpu, nvidia, memory-model

Questions about CUDA 4.0 and the unified memory model


Nvidia seems to be touting that CUDA 4.0 lets programmers use a unified memory model between the CPU and GPU. This won't replace the need to manage GPU and CPU memory manually for best performance, but will it allow easier implementations that can be tested, proven correct, and then optimized (by manually managing GPU and CPU memory)? I'd like to hear comments or opinions :)
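To make the "prototype first, optimize later" idea concrete, here is a minimal sketch (not part of the original question) of what that workflow can look like with CUDA 4.0's unified virtual addressing on a 64-bit system: a mapped, page-locked host buffer is passed straight to a kernel with no explicit copies, and could later be swapped for explicit `cudaMalloc`/`cudaMemcpy` management for performance. The kernel name, buffer size, and launch configuration are illustrative assumptions.

```c
#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative kernel: scale every element of a buffer in place.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;          /* 1M floats, arbitrary test size */
    float *data;

    // Allow mapped pinned memory (must be set before other CUDA calls).
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Mapped, page-locked host allocation: under UVA the same pointer is
    // valid on the host and inside the kernel, so no explicit copies are
    // needed while prototyping.
    cudaHostAlloc(&data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i)
        data[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);   /* expect 2.0 */
    cudaFreeHost(data);
    return 0;
}
```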


Solution

  • From what I've read, the important difference is that if you have two or more GPUs, you will be able to transfer memory from GPU 1 to GPU 2 without touching host RAM. You will also be able to control both GPUs from a single host thread; the sketch below illustrates this.
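A minimal sketch of that peer-to-peer path (not from the original answer): one host thread drives two GPUs and copies a buffer from GPU 0 to GPU 1 directly, without staging through host memory. The device IDs and buffer size are assumptions for illustration, and the direct path requires hardware that supports peer access (e.g. both GPUs on the same PCIe root complex).

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 1 << 20;   /* 1 MiB test buffer, arbitrary */
    void *buf0, *buf1;
    int canAccess = 0;

    // Allocate one buffer on each GPU from the same host thread.
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Check and enable peer access from GPU 1 (current device) to GPU 0.
    cudaDeviceCanAccessPeer(&canAccess, 1, 0);
    if (canAccess)
        cudaDeviceEnablePeerAccess(0, 0);

    // Copy between the two devices; with peer access enabled the transfer
    // goes GPU-to-GPU instead of bouncing through host RAM.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);

    printf("peer access %s\n", canAccess ? "enabled" : "unavailable");

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```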