I'm working on an iOS app where I need to determine how much of a CGPath lies within the screen bounds, to make sure enough of the shape remains visible for the user to touch. The problem is that when the shape sits in a corner, every method I would normally use (and everything else I can think to try) fails.
Here's a pic: [screenshot of the shape sitting in a corner, partly off screen]
How can I calculate how much of that shape is on screen?
The obvious answers are to do it empirically by pixel painting, or analytically by polygon clipping.
So to proceed empirically, you'd create a CGBitmapContext the size of your viewport, clear it to a known colour such as (0, 0, 0), paint your polygon in another known colour such as (1, 1, 1), then run through all the pixels in the context and count the ones in the polygon's colour. That's probably quite expensive, but you can use a lower-resolution context to get a faster, more approximate result.
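A minimal sketch of that approach in Swift, assuming a helper name (`visibleArea(of:in:scale:)`) of my own invention; the Core Graphics calls are standard, and a `scale` below 1 gives you the lower-resolution approximation mentioned above:

```swift
import CoreGraphics

/// Rough sketch of the empirical approach: rasterise the path into an
/// offscreen grayscale bitmap and count the lit pixels. `visibleArea` is a
/// hypothetical helper name, not anything from Core Graphics itself.
func visibleArea(of path: CGPath, in bounds: CGRect, scale: CGFloat = 0.25) -> CGFloat {
    let width  = max(1, Int(bounds.width * scale))
    let height = max(1, Int(bounds.height * scale))
    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,                       // let CG pick a valid row stride
        space: CGColorSpaceCreateDeviceGray(),
        bitmapInfo: CGImageAlphaInfo.none.rawValue
    ) else { return 0 }

    // Clear to "black" (0), then fill the path in "white" (1).
    context.setFillColor(gray: 0, alpha: 1)
    context.fill(CGRect(x: 0, y: 0, width: width, height: height))
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -bounds.minX, y: -bounds.minY)
    context.addPath(path)
    context.setFillColor(gray: 1, alpha: 1)
    context.fillPath()

    // Count every pixel the fill touched.
    guard let data = context.data else { return 0 }
    let rowStride = context.bytesPerRow
    let pixels = data.bindMemory(to: UInt8.self, capacity: rowStride * height)
    var count = 0
    for y in 0..<height {
        for x in 0..<width where pixels[y * rowStride + x] > 0 {
            count += 1
        }
    }

    // Convert the pixel count back to points² at the original resolution.
    return CGFloat(count) / (scale * scale)
}
```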
To proceed analytically, you'd run a polygon clipping algorithm such as those described here to derive a new polygon from the original, consisting of just the portion that is actually on screen. Then you'd compute that polygon's area using any of the standard formulas, such as the shoelace formula.
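As one concrete illustration (my choice of algorithm, not the only option), here is Sutherland–Hodgman clipping against the screen rect, which works because the clip window is convex, followed by the shoelace area formula. It assumes you've already flattened the CGPath into a vertex list:

```swift
import CoreGraphics

/// Shoelace formula: area of a simple polygon given as a vertex list.
func polygonArea(_ points: [CGPoint]) -> CGFloat {
    guard points.count >= 3 else { return 0 }
    var sum: CGFloat = 0
    for i in 0..<points.count {
        let a = points[i]
        let b = points[(i + 1) % points.count]
        sum += a.x * b.y - b.x * a.y
    }
    return abs(sum) / 2
}

/// Sutherland–Hodgman: clip a polygon against an axis-aligned rect by
/// clipping successively against each of the rect's four half-planes.
func clip(_ subject: [CGPoint], to rect: CGRect) -> [CGPoint] {
    func intersectX(_ a: CGPoint, _ b: CGPoint, _ x: CGFloat) -> CGPoint {
        let t = (x - a.x) / (b.x - a.x)
        return CGPoint(x: x, y: a.y + t * (b.y - a.y))
    }
    func intersectY(_ a: CGPoint, _ b: CGPoint, _ y: CGFloat) -> CGPoint {
        let t = (y - a.y) / (b.y - a.y)
        return CGPoint(x: a.x + t * (b.x - a.x), y: y)
    }

    typealias Edge = (inside: (CGPoint) -> Bool, intersect: (CGPoint, CGPoint) -> CGPoint)
    let edges: [Edge] = [
        ({ $0.x >= rect.minX }, { intersectX($0, $1, rect.minX) }),
        ({ $0.x <= rect.maxX }, { intersectX($0, $1, rect.maxX) }),
        ({ $0.y >= rect.minY }, { intersectY($0, $1, rect.minY) }),
        ({ $0.y <= rect.maxY }, { intersectY($0, $1, rect.maxY) }),
    ]

    var output = subject
    for edge in edges {
        let input = output
        output = []
        if input.isEmpty { break }          // entirely clipped away
        for i in 0..<input.count {
            let current  = input[i]
            let previous = input[(i + input.count - 1) % input.count]
            switch (edge.inside(previous), edge.inside(current)) {
            case (true, true):              // both in: keep current
                output.append(current)
            case (true, false):             // leaving: keep exit point
                output.append(edge.intersect(previous, current))
            case (false, true):             // entering: keep entry + current
                output.append(edge.intersect(previous, current))
                output.append(current)
            case (false, false):            // both out: drop
                break
            }
        }
    }
    return output
}
```

Usage would be something like `let visible = polygonArea(clip(vertices, to: screenBounds))`. With a concave subject polygon, Sutherland–Hodgman can emit zero-width connector edges, but those cancel in the shoelace sum, so the area should still come out right.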
It's actually a lot easier to clip a convex polygon than a concave one, so if your polygons are of a fixed shape you might triangulate them once up front, using a triangulation algorithm such as ear clipping or decomposition into monotone polygons, and then perform the clipping and area calculation on the triangles rather than on the original.
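For example, assuming the shape has been pre-triangulated once up front, each piece is convex and the per-frame work reduces to clipping and summing triangles, reusing `clip` and `polygonArea` from the sketch above:

```swift
/// Sum the visible area of a pre-triangulated shape. Every triangle is
/// convex, so each clip is the easy case; reuses clip(_:to:) and
/// polygonArea(_:) from the previous sketch.
func visibleArea(ofTriangles triangles: [[CGPoint]], in screen: CGRect) -> CGFloat {
    return triangles.reduce(0) { total, triangle in
        total + polygonArea(clip(triangle, to: screen))
    }
}
```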