Maybe my understanding of how texture mapping is implemented is wrong. I recently built a 3D engine purely in Java (I know, I have a lot of time on my hands) and I finished the texture mapping part. The way I did it was: as I draw each pixel to the screen, I look up the color of the texture at that location. I know that texture filtering helps reduce the blurriness of a texture seen at a large distance or at an oblique angle. But why does that problem even occur in the first place? It didn't in my implementation. Why would we lose resolution when we shrink an image?
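In code, my lookup amounts to plain nearest-neighbor point sampling, roughly like this (a simplified sketch, not the exact engine code; the names `texture`, `u`, `v` and so on are just illustrative):

```java
// Per-pixel texture lookup as described above: nearest-neighbor point sampling.
// 'texture' is a packed ARGB int array, 'u' and 'v' are the interpolated
// texture coordinates for the current screen pixel, in [0, 1).
static int sampleNearest(int[] texture, int texWidth, int texHeight, double u, double v) {
    int tx = (int) (u * texWidth);
    int ty = (int) (v * texHeight);
    // Clamp to the texture bounds to be safe
    tx = Math.max(0, Math.min(texWidth - 1, tx));
    ty = Math.max(0, Math.min(texHeight - 1, ty));
    return texture[ty * texWidth + tx];
}
```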
Here's an image of my engine.
Two words: Nyquist Theorem.
Your texture is a signal and screen pixels are sampling positions (hence the term sampler for the unit that, well, samples the texture to screen pixels). The Nyquist theorem says that to faithfully represent a signal with samples, the signal must not contain frequencies above half the sampling frequency. If that constraint is not met, aliasing will occur. So when you minify a texture you are essentially subsampling it, which leads to aliasing as soon as the sampling distance in texture space becomes larger than half the period of the finest resolved texture features (i.e. the sampling frequency drops below twice the frequency of those features).
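To make that concrete, take a toy 1-D case: a checkerboard texture (alternating black and white texels) is the highest-frequency signal a texture can hold. Point-sampling every second texel is below the Nyquist rate for that pattern, so instead of averaging to grey the pattern collapses into a solid color:

```java
// Toy 1-D illustration: a checkerboard (alternating 0 and 255) sampled every
// 2nd texel. The sampling frequency is below the Nyquist rate for this pattern,
// so it aliases into a flat color rather than blending to grey.
public class AliasingDemo {
    public static void main(String[] args) {
        int[] texture = new int[16];
        for (int i = 0; i < texture.length; i++) {
            texture[i] = (i % 2 == 0) ? 0 : 255;     // finest possible detail
        }
        // "Minify" by a factor of 2 with plain point sampling (no filtering):
        for (int x = 0; x < texture.length / 2; x++) {
            System.out.print(texture[x * 2] + " ");  // every sample lands on the same phase
        }
        System.out.println();                        // prints: 0 0 0 0 0 0 0 0
    }
}
```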
Hence in every discrete sampling system a so-called "antialiasing filter" is put in place before the sampler.
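In texture mapping that pre-filter role is typically played by mipmapping and the minification filter: the texture is reduced to pre-averaged copies ahead of time, and the sampler reads from the copy whose texel spacing roughly matches the screen pixel spacing. A minimal sketch of generating one mip level with a 2x2 box filter (assuming a square, power-of-two ARGB texture stored as an int array; not any particular engine's code):

```java
// Build the next mip level by averaging 2x2 blocks of texels (box filter).
// This is the "filter before the sampler": detail the sampler could not
// resolve is averaged away before sampling ever happens.
static int[] nextMipLevel(int[] src, int srcSize) {
    int dstSize = srcSize / 2;
    int[] dst = new int[dstSize * dstSize];
    for (int y = 0; y < dstSize; y++) {
        for (int x = 0; x < dstSize; x++) {
            int a = 0, r = 0, g = 0, b = 0;
            // Sum the 2x2 block of source texels covered by this destination texel
            for (int dy = 0; dy < 2; dy++) {
                for (int dx = 0; dx < 2; dx++) {
                    int c = src[(y * 2 + dy) * srcSize + (x * 2 + dx)];
                    a += (c >>> 24) & 0xFF;
                    r += (c >>> 16) & 0xFF;
                    g += (c >>> 8) & 0xFF;
                    b += c & 0xFF;
                }
            }
            dst[y * dstSize + x] = ((a / 4) << 24) | ((r / 4) << 16) | ((g / 4) << 8) | (b / 4);
        }
    }
    return dst;
}
```

Run on the checkerboard from the previous example, this filter turns it into mid-grey before sampling, which is exactly the result the unfiltered point sampler failed to produce.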