I have some Qt-based software that graphs an audio-transform function in 2D (with frequency in Hz on the X axis, and gain in decibels on the Y axis).
It does this by choosing a set of X positions to calculate points at, then calculating the Hz value that corresponds to each X position, then the dB value that corresponds to each Hz value, then the Y value that corresponds to each dB value, and finally drawing a series of line segments to represent the computed EQ curve.
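In rough terms, the drawing pass looks like the sketch below. This is a simplified reconstruction, not my actual code: `xToHz`, `hzToDb`, `dbToY`, and the parameters are stand-ins for whatever the real mappings are.

```cpp
#include <QPainter>
#include <QPointF>
#include <QPolygonF>
#include <QRectF>
#include <functional>

// Simplified sketch of the drawing pass described above; xToHz, hzToDb, and
// dbToY stand in for the real coordinate/transform mappings.
void drawEqCurve(QPainter& painter, const QRectF& plotRect, int numSamples,
                 const std::function<qreal(qreal)>& xToHz,
                 const std::function<qreal(qreal)>& hzToDb,
                 const std::function<qreal(qreal)>& dbToY)
{
    QPolygonF curve;
    for (int i = 0; i < numSamples; ++i) {
        const qreal x = plotRect.left()
                      + plotRect.width() * i / qreal(numSamples - 1);
        curve << QPointF(x, dbToY(hzToDb(xToHz(x))));  // X -> Hz -> dB -> Y
    }
    painter.drawPolyline(curve);  // the visible EQ curve
}
```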
This all mostly works well. The exception comes when the transform I am plotting is not a well-behaved function; for example, in the video at this link, I am plotting a "notch function" that diverges to negative infinity at a particular frequency, so the mathematical function becomes infinitely steep close to that value.
The line that is actually drawn in pixels, on the other hand, doesn't always accurately represent the underlying function: the "spike" toward negative infinity is often "chopped off" well above the bottom of the graph, due to the limited number of points in my sampling set. In the video, this leads to a sort of "hopping" effect at the bottom of the white line as I change the Notch filter's frequency control. The ideal behavior would be for the white line to always extend to the bottom of the graph, regardless of the frequency setting, as that would more accurately reflect the underlying mathematical function.
Obviously I can reduce the magnitude of the problem simply by increasing the number of X positions at which I calculate values (up to one calculation per horizontal pixel); but that takes additional CPU cycles, and worse, it only reduces the visual error without ever entirely removing it.
My question is: are there any obvious or widely used techniques that I could adopt to solve this problem more elegantly? I assume that every mathematical-function-plotting program runs into this issue sooner or later, so rather than bodge together my own hackish solution I thought I'd first see if there is a standard practice I could adopt.
There isn’t a fully general answer here: a function could dance around just above the (arbitrary) bottom of your plotting area in an arbitrarily complicated fashion, and no sampling scheme can discover every place it actually diverges. Of course, if you know the points of divergence, you can add them as plotting points (with some large finite value), but that kind of analysis can’t be assumed in general either.
That said, there is a reasonably efficient way to handle functions that are mostly well-behaved and whose kinds of divergence are known. The idea is to adapt the graph to the function. Often, that involves estimating the second derivative of the function numerically and adding more plotting points where it is large, but here a simpler strategy suggests itself: start with a uniform grid of points (perhaps a bit coarser than you would use by itself), and then repeatedly add a point halfway between the lowest point that is still on the graph and whichever of its neighbors is lower. (You might use a priority queue to find the lowest points efficiently.)
When the separation between a point and its neighbors reaches some tolerance (about a pixel, perhaps), disqualify it, so that refinement can move on to a higher point in a less-explored area that might be near a divergence; when the lowest remaining point is higher than another tolerance (or the total number of points exceeds yet another), stop.
Note that there are several tolerances here, as is unavoidable in attempting to heuristically capture the behavior of unknown functions.
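To make the moving parts concrete, here is a rough C++ sketch of that refinement loop. The names (`refineSamples`, the `evalDb` callback) and the particular tolerance parameters are illustrative assumptions rather than a prescribed implementation; the point is the shape of the loop.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <iterator>
#include <map>
#include <queue>
#include <utility>
#include <vector>

// Rough sketch of the refinement loop described above. evalDb and the
// tolerance/stopping parameters are illustrative, not a fixed API.
std::map<double, double> refineSamples(
        const std::function<double(double)>& evalDb, // X position -> dB value
        double xMin, double xMax,
        double yBottom,            // dB value at the bottom edge of the graph
        double xTol,               // disqualify once neighbors are this close (~1 px)
        double yStop,              // stop once the lowest live point is above this
        std::size_t coarseCount,   // size of the initial uniform grid
        std::size_t maxPoints)     // hard cap on the total number of samples
{
    std::map<double, double> pts;  // x -> y, kept ordered by x
    for (std::size_t i = 0; i < coarseCount; ++i) {
        double x = xMin + (xMax - xMin) * double(i) / double(coarseCount - 1);
        pts.emplace(x, evalDb(x));
    }

    // Min-heap of (y, x): the lowest point still on the graph is always on top.
    using Entry = std::pair<double, double>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> live;
    for (const auto& [x, y] : pts)
        if (y >= yBottom)
            live.push({y, x});

    while (!live.empty() && pts.size() < maxPoints) {
        auto [y, x] = live.top();
        live.pop();
        if (y > yStop)
            break;                 // nothing low enough left to suggest a divergence

        // Find this point's x-neighbor with the lower y value.
        auto it   = pts.find(x);
        auto prev = (it == pts.begin()) ? pts.end() : std::prev(it);
        auto next = std::next(it);
        auto nbr  = pts.end();
        if (prev != pts.end() && next != pts.end())
            nbr = (prev->second < next->second) ? prev : next;
        else if (prev != pts.end())
            nbr = prev;
        else if (next != pts.end())
            nbr = next;

        // Disqualify once the gap is down to roughly a pixel.
        if (nbr == pts.end() || std::abs(nbr->first - x) <= xTol)
            continue;

        // Add a sample halfway toward the lower neighbor and keep refining.
        double xm = 0.5 * (x + nbr->first);
        double ym = evalDb(xm);
        pts.emplace(xm, ym);
        if (ym >= yBottom)
            live.push({ym, xm});   // the new point may itself be the next lowest
        live.push({y, x});         // this point may still need further refinement
    }
    return pts;
}
```

Feeding the returned (x, y) pairs into the existing segment-drawing code means the extra samples added near the notch carry the line down through the bottom edge of the plot, rather than stopping at whatever the coarse grid happened to catch.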