I have a set of GPS stations whose coordinates (x, y, z) I know, and for each station I also have an error e. These stations are, of course, unevenly spaced, otherwise it would be too easy. The thing is, to compute the error e for a station I have only used that station itself, but I would also like to take the others into account.
My problem is: given a set of unevenly spaced (x, y, z, e) points, how can I interpolate e as a function of the spatial distance between points? The interpolation does not need to be exact, since I am recalculating e at points where I already have it. Also, I am looking for something cleaner than inverse distance weighting or the like. Splines would be nice, for example.
From what I have read, the splev function of the scipy.interpolate package seems to do the trick, but I cannot understand how it works or what I am supposed to give it as arguments.
If someone could either explain how this is supposed to work or point me to another method, that would be great.
If I understood your question correctly, you have a point x,y,z in space, and you would like to calculate the error by interpolating from the known stations. You also suggest that the validity of an error reading depends on the distance from the point where the error is known.
So, for a point x,y,z you can calculate its distance from each of the known receiving stations. Then you apply some weight function to each of these distances. Finally, you take an average of the known errors weighted by those weights (or possibly do some other trickery to eliminate outliers).
How about this:
import numpy

# known station coordinates (one row per station)
coords = numpy.array([
    (x1, y1, z1),
    (x2, y2, z2),
    ...])
# respective error values (one item per station)
err_values = numpy.array([
    e1,
    e2,
    ...])
# p is a three-element array holding the coordinates of our point
# distances will contain the spatial Euclidean distance to each station
distances = numpy.linalg.norm(coords - p, axis=1)
# get the weights somehow
weights = my_weight_function(distances)
# return the weighted average
return numpy.average(err_values, weights=weights)
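As for my_weight_function, which is left as a placeholder above, one simple choice is a Gaussian kernel. This is just a sketch; the function name and the sigma value are mine, and sigma would need to be tuned to your typical station spacing:

import numpy

# a possible weight function: a Gaussian kernel with length scale sigma
# (sigma=100.0 is a made-up default; tune it to your station spacing)
def my_weight_function(distances, sigma=100.0):
    # stations much farther away than sigma contribute almost nothing
    return numpy.exp(-(distances / sigma) ** 2)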
There is one more trick which might be useful, especially in this case. The last statement could be replaced by:
return numpy.sum(err_values * weights) / (eps + numpy.sum(weights))
This is basically a weighted average, but with a small number eps added to the denominator. The point here is that, as we are talking about an error, it should be zero very far away from the known points. Otherwise we would often report the average of the known errors as the error on the other side of the globe, which is not reasonable. The only reasonable assumption is that the error is zero far away. It is not, but we do not know any better, and thus zero is the best guess.
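To see the effect with made-up numbers (assuming the Gaussian weight function sketched above):

import numpy

# two hypothetical stations with errors 0.3 and -0.1, queried from a
# point thousands of units away from both
eps = 1e-12
weights = my_weight_function(numpy.array([5000.0, 8000.0]))
est = numpy.sum(numpy.array([0.3, -0.1]) * weights) / (eps + numpy.sum(weights))
# the weights are practically zero, so est comes out ~0 rather than an
# average of the station errors, matching the zero-far-away assumption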
If I have misunderstood your problem, let me know. (If you are thinking of the interpolation problem as a way to improve accuracy on the surface of the Earth, you actually have a 2D problem on the surface of the globe, not a true 3D problem.)