This is possibly a dumb question, but I'm not seeing how the math works here. I have an image, 414w x 584h. To get an image scaled down to half this size (i.e., half the initial width and height) using [UIImage imageWithCGImage:scale:orientation:], I have to set the scale to 6.0.
Why is it 6.0? How does this value relate to, say, the width scale of 414/207 = 2.0, or the height scale, which is the same value: 584/292 = 2.0?
As I write this, I'm wondering... my app is running on an iPhone 6+, so could it have something to do with the 3x Retina display? That is, the scale factor I actually want is a dimensionless 2.0, but to get to pixels on the 6+ I have to multiply that by 3. Is that the logic?
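To make that concrete, here's a minimal sketch of the math as I understand it (assuming cgImage holds the 414 x 584-pixel image; the variable names are just for illustration):

```objc
// cgImage is 414 x 584 pixels.
UIImage *scaled = [UIImage imageWithCGImage:cgImage
                                      scale:6.0
                                orientation:UIImageOrientationUp];
// scaled.size is reported in points: pixels / scale = {69, 97.33}.
// Drawn at that point size on the 6+'s 3x screen, it occupies
// 69 * 3 = 207 by 97.33 * 3 = 292 pixels -- half of 414 x 584.
```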
And, I guess, while I'm here: is there a better way to resize an image using available iOS facilities? E.g., some affine transform? No particular concerns around memory or performance; the images are all no more than about 1000 wide by 1500 high.
Thanks a lot!
You can find a great tutorial at http://nshipster.com/image-resizing/
The documentation for the scale parameter says:
The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
For the scale property:
If you load an image from a file whose name includes the @2x modifier, the scale is set to 2.0. You can also specify an explicit scale factor when initializing an image from a Core Graphics image. All other images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
For the size property:
In iOS 4.0 and later, this value reflects the logical size of the image and is measured in points. In iOS 3.x and earlier, this value always reflects the dimensions of the image measured in pixels.
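Putting those two quotes together: pixel dimensions = size (in points) x scale. So your hunch is right: you want the image at half its pixel size (a factor of 2.0), and on the 6+'s 3x screen a point is three pixels, so the scale you pass ends up being 2.0 * 3 = 6.0.

As for a better way to resize: if you want an actual resized bitmap rather than the same pixels with a different scale tag, the usual approach is to redraw the image into an image context. A minimal sketch (ResizedImage is just an illustrative helper name, not an API):

```objc
#import <UIKit/UIKit.h>

// Redraws `image` into a new bitmap at `newSize` (in points).
// Passing 0.0 for the scale tells UIKit to use the device's
// screen scale, so the result is sharp on Retina displays.
static UIImage *ResizedImage(UIImage *image, CGSize newSize) {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}
```

For example, half size would be ResizedImage(image, CGSizeMake(image.size.width / 2, image.size.height / 2)). The NSHipster article above compares this approach with the Core Graphics, Image I/O, and Core Image alternatives.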