We are building an application that works with a lot of images. We are interested in Core Image, GPUImage, and UIImage, and in how each of them decompresses images. We already know that decompressing images on a background thread helps remove stutter or jitter in our UI while scrolling. However, we are less familiar with where this decompression work actually happens. We also do some cropping of images using UIImage. So here are the questions:
Background: We are supporting devices all the way back to the iPhone 4, but may soon drop the iPhone 4 in favor of the iPhone 4S as our oldest supported device.
1) Is decompression of an image done on the GPU? ... Core Image? GPUImage? UIImage?
2) Can cropping of an image be done on the GPU? ... Core Image? GPUImage? UIImage?
3) Is there a difference in GPU support based on our device profile?
Basically we want to offload as much as we can to the GPU to free up the CPUs on the device. Also, we want to do any operation on the GPU that would be faster to do there instead of on the CPU.
To answer your question about decompression: Core Image, GPUImage, and UIImage all use pretty much the same means of loading an image from disk. For Core Image, you start with a UIImage, and for GPUImage you can see in the GPUImagePicture source that it currently relies on a CGImageRef usually obtained via a UIImage.
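For illustration, a rough sketch of those two loading paths is below; the asset name is hypothetical, and the import path assumes GPUImage is installed as a framework (older setups use `#import "GPUImage.h"` instead), so adjust for your project:

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>
#import <GPUImage/GPUImage.h>

// Both paths start from a UIImage loaded the usual way.
UIImage *inputImage = [UIImage imageNamed:@"photo"]; // hypothetical asset name

// Core Image: wrap the backing CGImage in a CIImage.
CIImage *ciInput = [CIImage imageWithCGImage:inputImage.CGImage];

// GPUImage: GPUImagePicture pulls its pixel data from the CGImageRef behind the UIImage.
GPUImagePicture *pictureSource = [[GPUImagePicture alloc] initWithImage:inputImage];
```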
UIImage does image decompression on the CPU side, and other libraries I've looked at for improving image loading performance for GPUImage do the same. The biggest bottleneck in GPUImage for image loading is having to load the image into a UIImage, then take a trip through Core Graphics to upload it into a texture. I'm looking into more direct ways to obtain pixel data, but all of the decompression routines I've tried to date end up being slower than native UIImage loading.
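As an aside, the usual trick for keeping that CPU-side decompression off the main thread is to force the decode yourself by drawing the image into a bitmap context on a background queue. This is just a sketch of that general pattern, not anything specific to Core Image or GPUImage; the file path and image view are placeholders:

```objc
// Force JPEG/PNG decompression on a background queue by drawing the image once,
// then hand the already-decoded bitmap back to the main thread for display.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *compressedImage = [UIImage imageWithContentsOfFile:imagePath]; // hypothetical path

    UIGraphicsBeginImageContextWithOptions(compressedImage.size, NO, compressedImage.scale);
    [compressedImage drawAtPoint:CGPointZero];
    UIImage *decodedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = decodedImage; // hypothetical image view
    });
});
```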
Cropping of an image can be done on the GPU, and both Core Image and GPUImage let you do this. With image loading overhead, this may or may not be faster than cropping via Core Graphics, so you'd need to benchmark that yourself for the image sizes you care about. More complex image processing operations, such as color adjustment, generally end up being overall wins on the GPU for most image sizes on most devices. If this image loading overhead could be reduced, GPU-side processing would win in more cases.
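To make that comparison concrete, here is roughly what the two crops look like. The GPUImage calls follow the current framebuffer-capture API (older releases used -imageFromCurrentlyProcessedOutput instead), and the crop rectangles are just example values:

```objc
UIImage *inputImage = [UIImage imageNamed:@"photo"]; // hypothetical asset name

// GPU-side crop with GPUImage: the crop region is in normalized (0.0-1.0) coordinates.
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageCropFilter *cropFilter =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.25, 0.25, 0.5, 0.5)];
[source addTarget:cropFilter];
[cropFilter useNextFrameForImageCapture];
[source processImage];
UIImage *gpuCroppedImage = [cropFilter imageFromCurrentFramebuffer];

// CPU-side crop with Core Graphics, for comparison: the rectangle is in pixels.
CGRect pixelCropRect = CGRectMake(256.0, 256.0, 512.0, 512.0);
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(inputImage.CGImage, pixelCropRect);
UIImage *cpuCroppedImage = [UIImage imageWithCGImage:croppedCGImage];
CGImageRelease(croppedCGImage);
```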
As far as GPU capabilities across device classes go, there are significant performance differences between different iOS devices, but the capabilities themselves tend to be mostly the same. Fragment shader processing performance can be orders of magnitude different between the iPhone 4, 4S, 5, and 5S, to the point where for some operations the 5S is 1000x faster than the 4. The A6 and A7 devices have a handful of extensions that the older devices lack, but those only come into play in very specific situations.
The biggest difference tends to be the maximum texture size supported by GPU hardware, with iPhone 4 and earlier devices limited to 2048x2048 textures and iPhone 4S and higher supporting 4096x4096 textures. This can limit the size of images that can be processed on the GPU using something like GPUImage.
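If you want to handle that limit at run time rather than hard-coding it per device, you can ask OpenGL ES directly; this assumes an EAGLContext is current on the calling thread, and the GPUImageContext wrapper mentioned in the comment may or may not be present depending on your GPUImage version:

```objc
#import <OpenGLES/ES2/gl.h>

// Requires a current EAGLContext on this thread; GPUImage sets one up
// on its own processing queue.
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
// maxTextureSize comes back as 2048 on iPhone 4-class GPUs
// and 4096 on the iPhone 4S and later.

// Recent GPUImage versions expose the same value through GPUImageContext:
// GLint maxSize = [GPUImageContext maximumTextureSizeForThisDevice];
```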