
Why are GPUs only used for linear algebra as opposed to nonlinear calculations?


I keep hearing that GPUs are useful because they are quick at linear algebra.

I see how a GPU can be utilised to quickly perform linear calculations, and I see why that is useful, but I don't see why these calculations need to be linear.

Why can't we have each GPU core take in 4 numbers a, b, c, d and compute a^b + c^d, or any other nonlinear function?

If the answer is that linear algebra is more efficient: how is it more efficient, and how would one utilise linear algebra to compute or approximate an arbitrary nonlinear function (if specificity is required, assume the function is a nonlinear polynomial)?


Solution

  • GPUs are used for pretty much everything. Your observation isn't really about GPUs or programming; it's about the books and articles that cover them.

    Here are some reasons why you mostly see examples about linear algebra.

    1. Linear algebra is relatively simple, so it is easy to explain how massive parallelism helps.

    2. Linear algebra is used for a lot of things. For some practical applications, speeding up just the linear algebra already yields a massive performance win, even though the matrices involved are assembled on the CPU with scalar code.

    3. Linear algebra is simple enough to be abstracted away in a library like cuBLAS. Arbitrary nonlinear functions tend to require custom compute kernels (see the sketch below), which is harder than just consuming a library someone else wrote.
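
To make point 3 concrete, here is a minimal sketch of such a custom kernel, computing exactly the `a^b + c^d` from the question for every element of four input arrays. The kernel name, data sizes, and launch configuration are illustrative assumptions, not part of the original answer; any other elementwise nonlinear function would work the same way.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread evaluates the nonlinear expression a^b + c^d for one element.
// There is nothing linear-algebra-specific about what the hardware does here.
__global__ void nonlinearKernel(const float* a, const float* b,
                                const float* c, const float* d,
                                float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = powf(a[i], b[i]) + powf(c[i], d[i]);
}

int main()
{
    const int n = 1 << 20;           // 1M elements; size is arbitrary
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c, *d, *out;
    // Unified memory keeps the sketch short; explicit cudaMemcpy works too.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    cudaMallocManaged(&d, bytes);
    cudaMallocManaged(&out, bytes);

    for (int i = 0; i < n; ++i) {
        a[i] = 2.0f; b[i] = 3.0f; c[i] = 4.0f; d[i] = 0.5f;
    }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    nonlinearKernel<<<blocks, threads>>>(a, b, c, d, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 2^3 + 4^0.5 = 10

    cudaFree(a); cudaFree(b); cudaFree(c); cudaFree(d); cudaFree(out);
    return 0;
}
```

Nothing in this kernel is linear algebra, and the GPU handles it fine. The point is simply that you have to write, tune, and maintain this kernel yourself, whereas a matrix multiply is a single call into a library like cuBLAS.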