So I have two NVIDIA GPU cards:
Card A: GeForce GTX 560 Ti - Wired to Monitor A (Dell P2210)
Card B: GeForce 9800 GTX+ - Wired to Monitor B (ViewSonic VP20)
Setup: an ASUS motherboard with an Intel Core i7; the board supports SLI.
In the NVIDIA Control Panel, I disabled Monitor A, so Monitor B is the only display in use.
I ran my program, which:
- simulates 10,000 particles in OpenGL and renders them (they display correctly on Monitor B), and
- uses cudaSetDevice() to target Card A for the computationally intensive CUDA kernel.
The idea is simple - use Card B for all the OpenGL rendering work and Card A for all the CUDA kernel computation.
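For reference, here is a minimal sketch of how the CUDA side can be pinned to Card A by matching the device name rather than relying on a fixed device index (the name substring and the lack of error handling are simplifying assumptions):

    // Pick the GTX 560 Ti (Card A) for CUDA work by matching its device name.
    #include <cstdio>
    #include <cstring>
    #include <cuda_runtime.h>

    int main()
    {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);

        int computeDevice = -1;
        for (int i = 0; i < deviceCount; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("CUDA device %d: %s\n", i, prop.name);

            // Match "560 Ti" so the kernel runs on Card A regardless of
            // how the runtime happens to order the two cards.
            if (strstr(prop.name, "560 Ti") != NULL)
                computeDevice = i;
        }

        if (computeDevice >= 0) {
            cudaSetDevice(computeDevice);  // subsequent kernels launch on Card A
            printf("Using device %d for CUDA kernels\n", computeDevice);
        }
        return 0;
    }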
My question is this:
After using GPU-Z to monitor both cards, I can see that:
Card A's GPU load increased immediately to over 60%, as expected.
However, Card B's GPU load only went up to about 2%. For 10,000 particles rendered in 3D in OpenGL, I am not sure whether that is what I should expect.
So how can I find out if the OpenGL rendering was indeed using Card B (whose connected Monitor B is the only one that is enabled), and had nothing to do with Card A?
And an extension to the question is:
- Is there a way to 'force' the OpenGL rendering logic to use a particular GPU card?

Answer:
You can tell which GPU an OpenGL context is using with glGetString(GL_RENDERER).
As for forcing the rendering onto a particular GPU: given the functions of the context creation APIs available at the moment, no.
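As an illustration, here is a minimal sketch, assuming a GLUT-created window (any context-creation path works the same way), that prints the renderer string so you can check which card the context was created on:

    // Query the renderer string after the OpenGL context is created and current.
    #include <cstdio>
    #include <GL/glut.h>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("renderer check");  // context is current after this call

        // On this setup the string should name the GeForce 9800 GTX+ (Card B)
        // if the rendering context really lives on the card driving Monitor B.
        printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        return 0;
    }

If the string names the GTX 560 Ti instead, the context was created on Card A and the rendering is not running on the card you intended.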