Tags: c++, c, cuda, opencl, gpgpu

How does the OpenCL command queue work, and what can I ask of it?


I'm working on an algorithm that performs pretty much the same operation many times. Since the operation consists of some linear algebra (BLAS), I thought I would try using the GPU for this.

I've written my kernel and started pushing kernels onto the command queue. Since I don't want to wait after each call, I figured I would try daisy-chaining my calls with events and just start pushing these onto the queue:

call kernel1 (returns event1)
call kernel2 (waits for event1, returns event2)
...
call kernel1000000 (waits for event999999)
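In OpenCL host-API terms, that pattern looks roughly like the minimal sketch below. The names (`queue`, `kernel`, `global_size`, `n_calls`) are hypothetical, and the kernel's arguments are assumed to already be set:

#include <CL/cl.h>

void enqueue_chain(cl_command_queue queue, cl_kernel kernel,
                   size_t global_size, int n_calls)
{
    cl_event prev = NULL;

    for (int i = 0; i < n_calls; ++i) {
        cl_event next;
        /* Each launch waits on the event returned by the previous one. */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL,
                               prev ? 1 : 0,        /* num_events_in_wait_list */
                               prev ? &prev : NULL, /* event_wait_list */
                               &next);
        if (prev)
            clReleaseEvent(prev); /* release, or a million events pile up */
        prev = next;
    }

    if (prev) {
        clWaitForEvents(1, &prev); /* block until the whole chain is done */
        clReleaseEvent(prev);
    }
}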

Now my question is: does all of this get pushed to the graphics chip, or does the driver store the queue? Is there a bound on the number of events I can use, or on the length of the command queue? I've looked around but haven't been able to find this.

I'm using atMonitor to check the utilization of my GPU, and it's pretty hard to push it above 20%. Could this simply be because I'm not able to push the calls out there fast enough? My data is already stored on the GPU, and all I'm passing out there are the actual calls.


Solution

  • First, you shouldn't wait for an event from a previous kernel unless the next kernel has data dependencies on that previous kernel. Device utilization (normally) depends on there always being something ready-to-go in the queue. Only wait for an event when you need to wait for an event.

    "does all of this get pushed to the graphic chip of does the driver store the queue?"

    That's implementation-defined. Remember, OpenCL works on more than just GPUs! In terms of the CUDA-style device/host dichotomy, you should probably consider command queue operations (for most implementations) as happening on the "host."

    Try queuing up multiple kernel calls without waits in between them. Also, make sure you are using an optimal work-group size. If you do both of those, you should be able to max out your device (see the sketch below).
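
    A minimal sketch of that advice, assuming OpenCL 1.1+ and hypothetical names (`queue`, `kernel`, `device`, `global_size`, `n_calls`): enqueue with no event chaining (an in-order queue already serializes dependent work) and let the implementation suggest a work-group size.

    #include <CL/cl.h>

    void enqueue_unchained(cl_command_queue queue, cl_kernel kernel,
                           cl_device_id device, size_t global_size, int n_calls)
    {
        /* Ask the implementation for a good work-group size multiple
           (the warp/wavefront width on most GPUs). */
        size_t multiple = 0;
        clGetKernelWorkGroupInfo(kernel, device,
                                 CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
                                 sizeof(multiple), &multiple, NULL);
        size_t local_size = multiple ? multiple : 64;
        /* Assumes global_size is a multiple of local_size
           (required in OpenCL 1.x). */

        for (int i = 0; i < n_calls; ++i) {
            /* No wait list and no output event: the queue stays full,
               and the driver keeps the device fed. */
            clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                   &global_size, &local_size,
                                   0, NULL, NULL);
        }
        clFinish(queue); /* one synchronization point at the very end */
    }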