I'm trying to draw an anti-aliased trapezoid by starting at the top and drawing it line by line. When a line is not an integer number of pixels long, the end-pixels are a weighted average of the background grey-level and the trapezoid grey-level; e.g. if a line is 128.5 pixels long (the extra 0.5 pixel split as 0.25 at each end), then at each end the grey-level is:
    0.25*(trapezoid_greylevel) + 0.75*(background_greylevel)
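In rough Python, the scheme per scanline is something like this (a simplified sketch; the `draw_row` name and the NumPy greyscale image are just for illustration):

    import numpy as np

    def draw_row(img, y, x_start, x_end, fg, bg):
        # Fill one scanline of the trapezoid from x_start to x_end (floats).
        left = int(np.ceil(x_start))    # first fully covered pixel
        right = int(np.floor(x_end))    # one past the last fully covered pixel
        img[y, left:right] = fg
        if left > 0:
            cov = left - x_start        # covered fraction of the left end-pixel
            img[y, left - 1] = cov * fg + (1 - cov) * bg
        if right < img.shape[1]:
            cov = x_end - right         # covered fraction of the right end-pixel
            img[y, right] = cov * fg + (1 - cov) * bg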
Unfortunately, the result is not very smooth (I've checked this on a linearised display).
I assume that at each line-end I need to take into account all the pixels surrounding it to arrive at an appropriate grey-level, but I can't work out how to do it. Any pointers?
Since a trapezoid is convex, it is easy to classify points with respect to it: a point is inside iff it is to the left of every trapezoid edge (assuming the trapezoid is oriented counterclockwise).
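For example, a point-vs-convex-polygon test along those lines might look like this (plain Python; the polygon is assumed to be a list of (x, y) vertices in counterclockwise order):

    def point_inside(poly, p):
        # p is inside a convex CCW polygon iff it is to the left of
        # (cross product >= 0 for) every directed edge a -> b.
        n = len(poly)
        for i in range(n):
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % n]
            if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0:
                return False    # strictly to the right of this edge: outside
        return True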
Assuming square pixels (and a trapezoid larger than a pixel), you can then easily classify pixels with respect to the trapezoid by classifying their corners: if all four corners are outside, the pixel is outside; if all four corners are inside, the pixel is inside. For the remaining boundary pixels, you can do antialiasing by supersampling.
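A sketch of that classification, reusing the `point_inside` test above, with n x n supersampling (n = 4 here, an arbitrary choice) for the boundary pixels:

    def pixel_coverage(poly, px, py, n=4):
        # Approximate coverage of the unit pixel [px, px+1] x [py, py+1].
        corners = [(px, py), (px + 1, py), (px, py + 1), (px + 1, py + 1)]
        inside = sum(point_inside(poly, c) for c in corners)
        if inside == 4:
            return 1.0          # all corners inside: fully covered
        if inside == 0:
            return 0.0          # all corners outside: treat as uncovered
        # Boundary pixel: supersample on an n x n grid of sample points.
        hits = sum(point_inside(poly, (px + (i + 0.5) / n, py + (j + 0.5) / n))
                   for i in range(n) for j in range(n))
        return hits / (n * n)

The final grey-level for a pixel is then coverage*(trapezoid_greylevel) + (1 - coverage)*(background_greylevel), just like the end-pixel blend in the question but driven by area coverage rather than line length.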
You can also do adaptive rendering by using a quadtree, as in Warnock's algorithm, but you'll have to implement a more robust intersection test for a square against a trapezoid (or a convex polygon in general). You only need to detect when a square and the trapezoid are disjoint, or when a square is inside the trapezoid. The depth of the quadtree beyond the pixel level determines how fine the antialiasing is.
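A sketch of that adaptive scheme; the disjointness test here is a separating-axis style check (either every corner of the square is to the right of one polygon edge, or the polygon's bounding box misses the square), which is sufficient for convex polygons, though it is only one possible choice:

    def square_coverage(poly, x, y, size, depth=0, max_depth=4):
        # Coverage of the square [x, x+size] x [y, y+size] by convex CCW poly.
        corners = [(x, y), (x + size, y), (x, y + size), (x + size, y + size)]
        if all(point_inside(poly, c) for c in corners):
            return 1.0          # square entirely inside the polygon
        # Disjoint if some polygon edge has the whole square to its right...
        n = len(poly)
        for i in range(n):
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % n]
            if all((bx - ax) * (cy - ay) - (by - ay) * (cx - ax) < 0
                   for cx, cy in corners):
                return 0.0
        # ...or if the polygon's bounding box misses the square entirely.
        xs = [v[0] for v in poly]
        ys = [v[1] for v in poly]
        if max(xs) <= x or min(xs) >= x + size or max(ys) <= y or min(ys) >= y + size:
            return 0.0
        if depth == max_depth:
            return 0.5          # still ambiguous at the finest level: call it half
        h = size / 2            # otherwise recurse into the four quadrants
        return sum(square_coverage(poly, x + dx, y + dy, h, depth + 1, max_depth)
                   for dx in (0, h) for dy in (0, h)) / 4

Called with size = 1 on each boundary pixel, max_depth is exactly the "depth beyond the pixel level" mentioned above.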
Finally, you can also do exact rendering by computing the percentage of a pixel's area covered by the trapezoid. It's a matter of polygon clipping and you can use the Sutherland–Hodgman algorithm.
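A sketch of the exact version: clip the trapezoid against each edge of the (counterclockwise) pixel square with Sutherland–Hodgman, then use the shoelace formula on whatever remains:

    def clip_left_of(subject, a, b):
        # One Sutherland-Hodgman step: keep the part of polygon `subject`
        # on or to the left of the directed line a -> b.
        def side(p):
            return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        out = []
        for i in range(len(subject)):
            p, q = subject[i], subject[(i + 1) % len(subject)]
            sp, sq = side(p), side(q)
            if sp >= 0:
                out.append(p)               # p is inside: keep it
            if (sp >= 0) != (sq >= 0):      # edge p -> q crosses the clip line
                t = sp / (sp - sq)
                out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        return out

    def exact_coverage(poly, px, py):
        # Fraction of the unit pixel [px, px+1] x [py, py+1] covered by poly.
        square = [(px, py), (px + 1, py), (px + 1, py + 1), (px, py + 1)]
        clipped = poly
        for i in range(4):
            if not clipped:
                return 0.0
            clipped = clip_left_of(clipped, square[i], square[(i + 1) % 4])
        area = sum(p[0] * q[1] - q[0] * p[1]    # shoelace formula
                   for p, q in zip(clipped, clipped[1:] + clipped[:1]))
        return abs(area) / 2.0

This gives the exact coverage for any convex polygon, at the cost of a clipping pass per boundary pixel.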