swift, sprite-kit, touch, skphysicsbody, hittest

Swift SpriteKit Detect TouchesBegan on SKSpriteNode with SKPhysicsBody from Sprite's Texture


I have a SpriteKit scene with a sprite. The sprite has a physics body derived from the texture's alpha channel to get an accurate physics shape, like so:

let texture_bottle = SKTexture(imageNamed: "Bottle")
let sprite_bottle = SKSpriteNode(texture: texture_bottle)
let physicsBody_bottle = SKPhysicsBody(texture: texture_bottle, size: texture_bottle.size())
physicsBody_bottle.affectedByGravity = false
sprite_bottle.physicsBody = physicsBody_bottle
root.addChild(sprite_bottle)

....

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    guard let touchLocation = touches.first?.location(in: self) else { return }
    let hitNodes = self.nodes(at: touchLocation)

}

When a user taps the screen, how can I detect if they actually touched within the physics body shape (not the sprite's rect)?


Solution

  • You "can't" (not easily)

    UITouch hit-testing is based on CGRects, so let hitNodes = self.nodes(at: touchLocation) is going to be filled with every node whose frame intersects the touch.

    This can't be avoided, so the next step is to get pixel accuracy from the nodes that registered as "hit". The first thing you should do is convert the touch position into each sprite's local coordinate space.

    for node in hitNodes {
        // assuming touchLocation is in scene coordinates
        let localLocation = node.convert(touchLocation, from: node.scene!)
    }
    

    From this point you need to figure out which method you want to use.

    If you need speed, I would recommend creating a 2D boolean array that behaves as a mask: fill it with false for transparent pixels and true for opaque ones (a sketch of building such a mask follows the lookup function below). Then use localLocation to index into the array. Remember to add anchorPoint * width and anchorPoint * height to your x and y values, then cast to Int.

    func isHit(node: SKSpriteNode, mask: [[Bool]], position: CGPoint) -> Bool {
        // Shift anchor-relative node coordinates into array indices
        let row = Int(node.size.height * node.anchorPoint.y + position.y)
        let col = Int(node.size.width * node.anchorPoint.x + position.x)
        // Guard against touches that hit the frame but fall outside the mask
        guard mask.indices.contains(row), mask[row].indices.contains(col) else { return false }
        return mask[row][col]
    }
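
    Building that mask isn't shown above; here is a minimal sketch of one way to do it, assuming the node is displayed at the texture's native pixel size and the texture is small enough to read into memory (the name makeAlphaMask is mine):

    import SpriteKit

    func makeAlphaMask(from texture: SKTexture) -> [[Bool]] {
        let image = texture.cgImage()
        let width = image.width
        let height = image.height

        // Draw the texture into an RGBA bitmap so each pixel's alpha can be read
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        let context = CGContext(data: &pixels,
                                width: width, height: height,
                                bitsPerComponent: 8, bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))

        var mask = [[Bool]](repeating: [Bool](repeating: false, count: width), count: height)
        for y in 0..<height {
            for x in 0..<width {
                let alpha = pixels[(y * width + x) * 4 + 3] // RGBA: alpha is the 4th byte
                // Bitmap rows run top-to-bottom, but SpriteKit's y axis points up, so flip the row
                mask[height - 1 - y][x] = alpha > 0
            }
        }
        return mask
    }

    Build the mask once when you load the texture, not on every touch.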
    

    If speed is not a concern, you can create a CGContext, draw your texture into it, and then check whether the pixel at that point is transparent.

    Something like this would help you out:

    How do I get the RGB Value of a pixel using CGContext?

    //: Playground - noun: a place where people can play

    import UIKit
    import PlaygroundSupport

    extension CALayer {

        func colorOfPoint(point: CGPoint) -> UIColor {
            var pixel: [CUnsignedChar] = [0, 0, 0, 0]

            let colorSpace = CGColorSpaceCreateDeviceRGB()
            let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

            let context = CGContext(data: &pixel, width: 1, height: 1,
                                    bitsPerComponent: 8, bytesPerRow: 4,
                                    space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

            // Shift the layer so the requested point lands on the 1x1 context
            context.translateBy(x: -point.x, y: -point.y)

            render(in: context)

            let red: CGFloat = CGFloat(pixel[0]) / 255.0
            let green: CGFloat = CGFloat(pixel[1]) / 255.0
            let blue: CGFloat = CGFloat(pixel[2]) / 255.0
            let alpha: CGFloat = CGFloat(pixel[3]) / 255.0

            //print("point color - red:\(red) green:\(green) blue:\(blue)")

            let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)

            return color
        }
    }

    extension UIColor {
        var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
            var r: CGFloat = 0
            var g: CGFloat = 0
            var b: CGFloat = 0
            var a: CGFloat = 0
            getRed(&r, green: &g, blue: &b, alpha: &a)
            return (r, g, b, a)
        }
    }


    // Get an image we can work on
    let imageFromURL = UIImage(data: try! Data(contentsOf: URL(string: "https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!))!
    // Only use a small area of that image - a 50 x 50 square
    let imageSliceArea = CGRect(x: 0, y: 0, width: 50, height: 50)
    let imageSlice = imageFromURL.cgImage!.cropping(to: imageSliceArea)!
    // We'll work on this image
    var image = UIImage(cgImage: imageSlice)


    let imageView = UIImageView(image: image)
    // Test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
    var pointColor = imageView.layer.colorOfPoint(point: CGPoint(x: 0, y: 0))



    let imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)

    UIGraphicsBeginImageContext(image.size)
    let context = UIGraphicsGetCurrentContext()!

    context.saveGState()
    context.draw(image.cgImage!, in: imageRect)

    for x in 0...Int(image.size.width) {
        for y in 0...Int(image.size.height) {
            let pointColor = imageView.layer.colorOfPoint(point: CGPoint(x: x, y: y))
            // I used my own creativity here - change this to whatever logic you want
            if y % 2 == 0 {
                context.setFillColor(red: pointColor.components.red, green: 0.5, blue: 0.5, alpha: 1)
            } else {
                // Color components are 0...1, so use 1 for full red (the original had 255)
                context.setFillColor(red: 1, green: 0.5, blue: 0.5, alpha: 1)
            }

            context.fill(CGRect(x: CGFloat(x), y: CGFloat(y), width: 1, height: 1))
        }
    }
    context.restoreGState()
    image = UIGraphicsGetImageFromCurrentImageContext()!
    

    You would eventually call colorOfPoint(point: localLocation).cgColor.alpha > 0 to determine whether the touch actually landed on an opaque pixel of the node.

    I would recommend making colorOfPoint an extension of SKSpriteNode, so be creative with the code posted above.
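
    A minimal sketch of what that extension could look like, assuming the sprite has a texture and is neither scaled nor rotated (the coordinate math here is my own adaptation):

    extension SKSpriteNode {
        /// Samples the texture at a point in the node's own coordinate space
        /// (origin at the anchor point, y axis pointing up).
        func colorOfPoint(point: CGPoint) -> UIColor {
            guard let cgImage = texture?.cgImage() else { return .clear }

            // Map anchor-relative node coordinates onto the image; Core Graphics
            // also uses a bottom-left origin, so no y flip is needed
            let scaleX = CGFloat(cgImage.width) / size.width
            let scaleY = CGFloat(cgImage.height) / size.height
            let px = (point.x + anchorPoint.x * size.width) * scaleX
            let py = (point.y + anchorPoint.y * size.height) * scaleY

            var pixel: [UInt8] = [0, 0, 0, 0]
            let context = CGContext(data: &pixel, width: 1, height: 1,
                                    bitsPerComponent: 8, bytesPerRow: 4,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!

            // Shift the image so the requested pixel lands on the 1x1 context
            context.translateBy(x: -px, y: -py)
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))

            return UIColor(red: CGFloat(pixel[0]) / 255.0,
                           green: CGFloat(pixel[1]) / 255.0,
                           blue: CGFloat(pixel[2]) / 255.0,
                           alpha: CGFloat(pixel[3]) / 255.0)
        }
    }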

    func isHit(node: SKSpriteNode, position: CGPoint) -> Bool {
        return node.colorOfPoint(point: position).cgColor.alpha > 0
    }
    

    Your final code would look something like this:

    hitNodes = hitNodes.filter { node in
        guard let sprite = node as? SKSpriteNode else { return false }
        // assuming touchLocation is in scene coordinates
        let localLocation = sprite.convert(touchLocation, from: sprite.scene!)
        return isHit(node: sprite, mask: mask, position: localLocation)
    }
    

    OR

    hitNodes = hitNodes.filter { node in
        guard let sprite = node as? SKSpriteNode else { return false }
        // assuming touchLocation is in scene coordinates
        let localLocation = sprite.convert(touchLocation, from: sprite.scene!)
        return isHit(node: sprite, position: localLocation)
    }
    

    This basically filters out all the nodes that were detected only by the frame comparison, leaving you with pixel-perfect touched nodes.
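
    Put together inside your SKScene subclass, the mask variant could look like this sketch (assuming mask is a property you built once from the bottle texture, e.g. with the makeAlphaMask sketch earlier):

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touchLocation = touches.first?.location(in: self) else { return }
        let hitNodes = nodes(at: touchLocation).filter { node in
            guard let sprite = node as? SKSpriteNode else { return false }
            let localLocation = sprite.convert(touchLocation, from: self)
            // mask is assumed to be a stored property built from the texture
            return isHit(node: sprite, mask: mask, position: localLocation)
        }
        // hitNodes now contains only sprites whose opaque pixels were touched
    }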

    Note: The code from the separate SO link was originally written for Swift 2; it is shown above converted to Swift 4 syntax.