I'm currently working on a drawing app for iOS, written in Swift 3.
It's just a basic drawing app, but I'm trying to add an extra feature. I started with a UIScrollView and added an image view to that scroll view; all of the drawing is done on the image view. When you first launch the app, the scroll view is completely zoomed in, and switching to "zoom mode" lets you pinch to zoom. The problem is that when you draw while zoomed in, the drawing comes out really fuzzy. To fix this, I can use a line of code like this:
UIGraphicsBeginImageContextWithOptions((self.view.frame.size), false, 7.0)
This makes the drawing look great while zoomed in, but it also makes the app very laggy. What confuses me, though, is that if I change the code above to this:
UIGraphicsBeginImageContextWithOptions((self.view.frame.size), false, 0.0)
and zoom out all the way, the drawing looks exactly the same (granted, I'm zoomed all the way out), but it's no longer laggy. I know this probably isn't coming across clearly, so here's a video showing the first scenario: https://youtu.be/E_9FKf1pUTY and the second: https://youtu.be/OofFTS4Q0OA
So basically, I'm wondering if there's a way to treat the zoomed-in area as if it were its own view. It seems as if the app is updating the entire image view rather than just the part that's visible at any given time. Is there a way to update only the portion of the image view that is drawn on? Sorry if this is a bit of a confusing post; feel free to ask questions if there's anything you don't understand. For clarity's sake, I'll include all of the drawing code below:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    print("Touches began")
    swiped = false // reset, so touchesEnded can detect a single tap
    if let touch = touches.first {
        // Convert from scroll-view coordinates to unzoomed image coordinates
        lastPoint = touch.location(in: scrollView)
        lastPoint.x = lastPoint.x / scrollView.zoomScale
        lastPoint.y = lastPoint.y / scrollView.zoomScale
    }
}
func drawLines(fromPoint: CGPoint, toPoint: CGPoint) {
    print("\(fromPoint.x), \(fromPoint.y)")
    //UIGraphicsBeginImageContext(self.view.frame.size)
    UIGraphicsBeginImageContextWithOptions(scrollView.frame.size, false, 0.0)
    // Redraw the existing image, then stroke the new segment on top of it
    imageView.image?.draw(in: CGRect(x: 0, y: 0, width: self.view.frame.width, height: self.view.frame.height))
    let context = UIGraphicsGetCurrentContext()
    context?.move(to: CGPoint(x: fromPoint.x, y: fromPoint.y))
    context?.addLine(to: CGPoint(x: toPoint.x, y: toPoint.y))
    context?.setBlendMode(.normal)
    context?.setLineCap(.round)
    if erase {
        context?.setLineWidth(30)
    } else {
        context?.setLineWidth(CGFloat(sizeVar))
    }
    switch color {
    case "black":   context?.setStrokeColor(UIColor.black.cgColor)
    case "white":   context?.setStrokeColor(UIColor.white.cgColor)
    case "blue":    context?.setStrokeColor(UIColor.blue.cgColor)
    case "cyan":    context?.setStrokeColor(UIColor.cyan.cgColor)
    case "green":   context?.setStrokeColor(UIColor.green.cgColor)
    case "magenta": context?.setStrokeColor(UIColor.magenta.cgColor)
    case "red":     context?.setStrokeColor(UIColor.red.cgColor)
    case "yellow":  context?.setStrokeColor(UIColor.yellow.cgColor)
    default:        break
    }
    context?.strokePath()
    // Capture the updated bitmap and replace the image view's image
    imageView.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    swiped = true
    if let touch = touches.first {
        var currentPoint = touch.location(in: scrollView)
        currentPoint.x = currentPoint.x / scrollView.zoomScale
        currentPoint.y = currentPoint.y / scrollView.zoomScale
        drawLines(fromPoint: lastPoint, toPoint: currentPoint)
        lastPoint = currentPoint
    }
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    if !swiped {
        // Single tap: draw a dot
        drawLines(fromPoint: lastPoint, toPoint: lastPoint)
    }
}
The scale parameter of UIGraphicsBeginImageContextWithOptions(_:_:_:)
is not a quality setting, and you should not put arbitrary values there. It's a scale factor, akin to the @1x, @2x, and @3x settings for images: it tells the system the scaling factor to use when mapping image pixels to screen pixels. In almost all cases you should pass 0, which means "use the native scale of the screen" (@2x for normal Retina screens, @3x for the iPhone 6 Plus and 7 Plus). You should never set it to an arbitrary value like 7. That allocates an image with 7 times as many pixels in each dimension (49 times as many pixels overall), and forces the system to downscale it for screen drawing every time, which is slow.
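To make the difference concrete, here's a small sketch (the 100x100-point size is just illustrative) of what each scale value actually allocates:

import UIKit

let size = CGSize(width: 100, height: 100) // size in points

// Scale 0 means "use the screen's native scale" (UIScreen.main.scale):
// the bitmap is 200x200 pixels on a @2x device, 300x300 on a @3x device.
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let nativeImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// Scale 7 allocates a 700x700-pixel bitmap for the same 100x100 points.
// Every screen refresh then has to downscale it, which is what causes the lag.
UIGraphicsBeginImageContextWithOptions(size, false, 7.0)
let oversizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()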
Next, creating a new image for every line segment is a dreadfully inefficient way to draw. It constantly allocates and releases large blocks of memory, and then has to completely redraw the screen each time. Instead, I would set up a view that has a CAShapeLayer
as its backing layer and update the path that's installed in the layer.
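Here's a minimal sketch of that approach in Swift 3. The class name CanvasView and its properties are illustrative, not anything from your project, and I've left out the zoomScale conversion and color/erase handling for brevity:

import UIKit

class CanvasView: UIView {

    // Make CAShapeLayer the view's backing layer instead of a plain CALayer.
    override class var layerClass: AnyClass {
        return CAShapeLayer.self
    }

    private var shapeLayer: CAShapeLayer {
        return layer as! CAShapeLayer
    }

    // One growing vector path; no bitmap is ever created for the strokes.
    private let currentPath = UIBezierPath()

    override init(frame: CGRect) {
        super.init(frame: frame)
        shapeLayer.strokeColor = UIColor.black.cgColor
        shapeLayer.fillColor = nil
        shapeLayer.lineWidth = 4
        shapeLayer.lineCap = kCALineCapRound
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        currentPath.move(to: point)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        currentPath.addLine(to: point)
        // Installing the updated path makes Core Animation re-stroke the layer;
        // nothing is drawn into an image context and no image is copied.
        shapeLayer.path = currentPath.cgPath
    }
}

With this approach, each touch event only appends a point to the path and lets Core Animation re-render the layer, instead of allocating, drawing, and swapping a full-screen bitmap per segment. The path data is also resolution-independent, which is a much better starting point for keeping strokes sharp under zoom.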