
How to make convertToDeviceSpace do anything other than multiply by one?


When I run code like this in a CALayer subclass:

override open func draw(in ctx: CGContext) {

    let rectDEVICESPACE = ctx.convertToDeviceSpace(bounds).size

    print("bounds \(bounds)")
    print("rectDEVICESPACE \(rectDEVICESPACE)")
}

After extensive testing, we have been unable to produce any transform or scale.

convertToDeviceSpace always returns the same rect it was given.


We have tried this on many devices and simulators.

What is a specific, real situation in which convertToDeviceSpace will do anything other than apply the identity?

This is worrying, since the Apple documentation and existing code samples use exactly this call when, for example, determining the raw pixel size for blitting. An error here would be an enormous flaw.

What am I doing wrong in my testing, specifically, that I never see any result other than the identity?


Solution

  • I'll answer my own question, to help any googlers:

    In fact, convertToDeviceSpace assumes that contentsScale gives the pixel density.

    That means you must already have set contentsScale yourself, because Apple leaves contentsScale defaulted to 1 rather than to the screen density.

    Basically, convertToDeviceSpace returns the UIView size multiplied by contentsScale (plus whatever further calculations Apple may make in the future to arrive at the "actual, physical" pixel size).
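
    To see the problematic default, here is a minimal sketch (a hypothetical check, runnable in an iOS playground; not part of the original answer):

    import UIKit

    let plainLayer = CALayer()
    // Prints 1.0 even on a 2x or 3x device: a bare CALayer does not pick up the screen scale.
    print("default contentsScale: \(plainLayer.contentsScale)")
    // The screen's actual pixel density (2.0 or 3.0 on Retina screens).
    print("screen scale: \(UIScreen.main.scale)")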

    It seems to be a little-known fact that on iOS, when you make a custom layer and you are drawing it pixel-accurately, YOU must set contentsScale, and YOU must do that at initialization time. (To be clear, it would be a severe mistake to do it only once draw has already been called on the context.)

    class PixelwiseLayer: CALayer {

        override init() {
            super.init()
            // SET THE CONTENT SCALE AT INITIALIZATION TIME
            contentsScale = UIScreen.main.scale
        }

        required init?(coder aDecoder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }
    }
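
    To verify the fix, here is a minimal sketch (the subclass name and the example values in the comments are hypothetical, not part of the original answer) combining the init above with the question's draw override:

    final class VerifiedPixelwiseLayer: PixelwiseLayer {

        override func draw(in ctx: CGContext) {
            let deviceSize = ctx.convertToDeviceSpace(bounds).size
            // Because contentsScale was set to UIScreen.main.scale at init,
            // this prints bounds.size multiplied by the screen scale
            // (e.g. 3x on a 3x device) rather than the identity result from the question.
            print("bounds \(bounds.size)  device space \(deviceSize)")
        }
    }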
    

    Here's a vast examination of the issue: https://stackoverflow.com/a/47760444/294884