I'm currently writing a C++ raytracer and I'm running into a classic raytracing problem: with a high vertical FOV, shapes get more distorted the closer they are to the edges of the image.
I know why this distortion happens, but I don't know how to fix it (reducing the FOV is of course an option, but I think there is something I should change in my code instead). I've browsed various graphics forums but couldn't find a way to solve it.
Here's a screenshot to illustrate my problem.
I think the problem is that the view plane I'm projecting my rays onto isn't actually flat, but I don't know how to fix this. If you have any tips on how to resolve it, I'm open to suggestions.
I'm using a right-handed coordinate system. The camera basis vectors, the direction vector and the light vector are all normalized.
If you need some code to check something, tell me which part and I'll add it.
Ray generation code:
// PixelScreenX = (pixelX + 0.5) / imageWidth
// PixelCameraX = (2 * PixelScreenX - 1) * ImageAspectRatio * tan(fov / 2)
float x = (2 * (i + 0.5f) / (float)options.width - 1) *
          options.imageAspectRatio * options.scale;
// PixelScreenY = (pixelY + 0.5) / imageHeight
// PixelCameraY = (1 - 2 * PixelScreenY) * tan(fov / 2)
float y = (1 - 2 * (j + 0.5f) / (float)options.height) * options.scale;
Vec3f dir;
options.cameraToWorld.multDirMatrix(Vec3f(x, y, -1), dir);
dir.normalize();
newColor = _renderer->castRay(options.orig, dir, objects, options);
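For context, options.scale and options.imageAspectRatio are set up elsewhere; here is a minimal sketch of what the comments above imply, assuming fov is the vertical field of view in degrees (the member names are illustrative and may differ from my actual Options struct):

// Hypothetical setup, assuming options.fov is the vertical FOV in degrees
// (needs <cmath> for std::tan).
const float kPi = 3.14159265f;
options.scale = std::tan(options.fov * 0.5f * kPi / 180.0f);      // tan(fov / 2)
options.imageAspectRatio = options.width / (float)options.height; // width / height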
There is nothing wrong with your projection. It produces exactly what it should produce.
Let's consider the following figure to see how all the quantities interact:
We have the camera position, the field of view (as an angle) and the image plane. The image plane is the plane that you are projecting your 3D scene onto. Essentially, this represents your screen. When you are viewing your rendering on the screen, your eye serves as the camera. It sees the projected image, and if it is positioned at the right point, it will see exactly what it would see if the actual 3D scene were there (neglecting effects like depth of field, etc.).
Obviously, you cannot modify your screen (you could change the window size but let's stick with a constant-size image plane). Then, there is a direct relationship between the camera's position and the field of view. As the field of view increases, the camera moves closer and closer to the image plane. Like this:
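To make that relationship concrete, here is a small sketch (the function and parameter names are only illustrative): the distance from the eye to an image plane of a given height so that the plane exactly fills a given vertical field of view.

#include <cmath>

// Distance from the eye to an image plane of height planeHeight so that
// the plane exactly fills a vertical field of view of fovDegrees.
float eyeDistance(float planeHeight, float fovDegrees)
{
    const float kPi = 3.14159265f;
    float fovRadians = fovDegrees * kPi / 180.0f;
    return (planeHeight * 0.5f) / std::tan(fovRadians * 0.5f);
}

The larger fovDegrees gets, the larger tan(fov / 2) gets, and the smaller the returned distance becomes: the eye moves closer and closer to the plane.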
Thus, if you are increasing your field of view in code, you need to move your eye closer to the screen to get the correct perception. You can actually try that with your image. Move your eye very close to the screen (I'm talking about 3cm). If you look at the outer spheres now, they actually look like real balls again.
In summary, the field of view should approximately match the geometry of the viewing setup. For a given screen size and average viewing distance, this can be calculated easily. If your viewing setup does not match the assumptions in your code, 3D perception will break down.
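As a rough sketch of that calculation (the names and numbers are only an example, not measurements of your setup):

#include <cmath>

// Vertical FOV (in degrees) that matches a screen of physical height
// screenHeight viewed from a distance viewDistance (same unit for both).
float matchingFov(float screenHeight, float viewDistance)
{
    const float kPi = 3.14159265f;
    return 2.0f * std::atan(screenHeight / (2.0f * viewDistance)) * 180.0f / kPi;
}

For example, a screen roughly 30 cm tall viewed from about 60 cm corresponds to a vertical FOV of only about 28 degrees, far smaller than the FOV values that produce the stretched spheres near the image borders.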