Tags: math, graphics, geometry, raycasting, perspective

Perceived width of a decal depending on the rotation angle of the wall


I am creating a raycasting game from scratch using JavaScript canvas.

Part of the challenge (for me) is to decorate walls with random images (pictures). I have already implemented drawing of walls, floor and ceiling, and sprites. While drawing walls, I store for each x (screen column) the distance to the wall (Z-BUFFER), the height of the wall (H-BUFFER) and the actual coordinates of the pixel in the underlying 2D grid (GRID_BUFFER).
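
Roughly, these buffers are filled once per screen column during the wall pass, along these lines (a simplified sketch; castRay and its result fields are stand-ins for my actual ray loop):

const Z_BUFFER = new Array(RAYCAST.SCREEN_WIDTH);      // perpendicular distance to the wall hit
const H_BUFFER = new Array(RAYCAST.SCREEN_WIDTH);      // projected wall height for that column
const GRID_BUFFER = new Array(RAYCAST.SCREEN_WIDTH);   // grid cell the column belongs to

for (let x = 0; x < RAYCAST.SCREEN_WIDTH; x++) {
    let hit = castRay(x);                               // cast the ray for this column (stand-in helper)
    Z_BUFFER[x] = hit.perpDistance;
    H_BUFFER[x] = Math.floor(RAYCAST.SCREEN_HEIGHT / hit.perpDistance);
    GRID_BUFFER[x] = { gridX: hit.mapX, gridY: hit.mapY };
}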

My approach for painting the decals (pictures) on the wall is then the following (after identifying a list of decals that could theoretically be visible):

  • the distance to the decal's position is calculated (the position is defined as the middle of the grid cell face on which the decal sits, facing the observer)
  • screen coordinate decalScreenX is calculated based on the transformation matrix from grid coordinates to screen coordinates. This works correctly:

let decalScreenX = Math.floor((RAYCAST.SCREEN_WIDTH / 2) * (1 + CAMERA.transformX / CAMERA.transformDepth));

  • Then I retrieve the image data for the decal in question and get its width and height
  • And based on the distance and the observed angle, I calculate the perceived width of the decal. This is where the real issue lies, as I can see that I don't calculate this width completely accurately.
  • with all this information, it is then easy to calculate the left and right screen coordinates - where to begin and where to end drawing the decal - use H-BUFFER to calculate the height factor, and use GRID_BUFFER to draw only on the grid cell belonging to this decal (see the sketch after this list).
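
Putting those pieces together, the drawing step looks roughly like this (a simplified sketch; heightFactor, ownsGridCell and drawDecalStripe are stand-ins for my actual fields and helpers):

let perceivedWidth = Math.floor(imageData.width * widthScale);
let drawStartX = decalScreenX - Math.floor(perceivedWidth / 2);
let drawEndX = drawStartX + perceivedWidth;

for (let stripe = Math.max(0, drawStartX); stripe < Math.min(RAYCAST.SCREEN_WIDTH, drawEndX); stripe++) {
    if (!decal.ownsGridCell(GRID_BUFFER[stripe])) continue;               // draw only on the decal's own grid cell
    let texX = Math.floor(((stripe - drawStartX) / perceivedWidth) * imageData.width);
    let stripeHeight = H_BUFFER[stripe] * decal.heightFactor;             // height derived from the wall height buffer
    drawDecalStripe(stripe, texX, stripeHeight, imageData);
}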

I viewed the width calculation in terms of the angle by which the decal is rotated away from the player's direction vector, when the player's direction is not exactly opposite to the direction in which the decal faces the space (example):

(image: the decal rotated from the player direction by an angle)

or, if the player's direction is directly opposite to the decal's facing direction, this angle is 0° (example image: opposite directions).

My first approach was to use the dot product of the reversed player direction and the decal's facing direction, thus getting the cosine of the angle between the vectors, and use this as a factor to reduce the perceived width:

let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let widthScale = CosA * (CAMERA.transformDepth / decal.distance);
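
(mirror() and dot() here are small vector helpers, roughly equivalent to the sketch below:)

// illustrative versions of the vector helpers used above
class Vec2 {
    constructor(x, y) { this.x = x; this.y = y; }
    dot(v)   { return this.x * v.x + this.y * v.y; }    // for unit vectors: cosine of the angle between them
    mirror() { return new Vec2(-this.x, -this.y); }     // reversed direction
}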

The problem with this solution is that when the two directions are perpendicular, the factor is 0 and the decal is not drawn at all, yet because the walls are drawn with perspective this should not be the case. So I began improvising. I defined a CAMERA.minPerspective factor as seen below. The field of view (FOV) is 70°.

CAMERA.minPerspective = Math.cos(Math.radians((90 + this.FOV) / 2));
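
(With FOV = 70° this evaluates to cos(80°) ≈ 0.17, assuming Math.radians converts degrees to radians.)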

My intuition was (as I lack knowledge of perspective and geometry, alas) that for small angles the factor should remain 1, and for angles close to 90° there should be some minimal factor, so that the decal remains visible. So I came up with this "improved" code:

let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let FACTOR = Math.min(1, CosA + CAMERA.minPerspective);
FACTOR = Math.max(FACTOR, CAMERA.minPerspective);
let widthScale = FACTOR * (CAMERA.transformDepth / decal.distance);

This works considerably better, but it still has flaws. Visually, for angles of 0-50° the reduction factor is too great. This can be observed if I use decals of such a width that they should cover the complete grid face (see the image below: left of the stairs the wall underneath is visible; the decal should cover the complete grid face, but it doesn't, because the FACTOR is too small).

(image: anomaly - the wall is visible left of the stairs where the decal should cover the whole grid face)

I have searched Stack Overflow and the rest of the web for a better solution, but it seems that my limited knowledge of geometry also prevents me from recognizing proper solutions when they appear outside this context.

So, please: there are probably deterministic solutions for calculating the perceived width without running the raycasting phase again, or by using the information I am able to store during the raycasting phase. While JavaScript is used in the code examples, I consider this question not to be specific to any programming language.


Solution

  • I have found a solution that retains (or even improves on) the simplicity and time complexity of the approach in the question.

    • I have added two points to the decal definition - leftDrawStart and rightStartDraw. These are easy to calculate at the point of decal instantiation, based on the real sprite (decal) width and the definition of the grid (block) size. While doing this calculation, I determine which point is leftDrawStart from the camera perspective (not from grid coordinates).
    • when rendering a decal, I calculate the screen coordinates for leftDrawStart and rightStartDraw from their grid coordinates, using the transformation matrix (as in the question; code example below):
    transform(spritePos) {
        // inverse determinant of the 2x2 matrix formed by CAMERA.dir and PLAYER.dir
        let invDet = 1.0 / (CAMERA.dir.x * PLAYER.dir.y - PLAYER.dir.x * CAMERA.dir.y);
        // position expressed in camera space: horizontal offset and depth in front of the camera
        CAMERA.transformX = invDet * (PLAYER.dir.y * spritePos.x - PLAYER.dir.x * spritePos.y);
        CAMERA.transformDepth = invDet * (-CAMERA.dir.y * spritePos.x + CAMERA.dir.x * spritePos.y);
    }
    
    • I distinguish between the calculated absolute drawStartX and drawEndX and their adjusted values, clamped so that they fit the screen boundaries; I return from the function if the decal is completely offscreen
    • finally, the perceived width of the decal is not even required, since the texture position can be calculated from the ratio of (current drawing stripe - absolute drawing start) to (absolute drawing end - absolute drawing start):
    let texX = (((stripe - drawStartX_abs) / (drawEndX_abs - drawStartX_abs)) * imageData.width) | 0;
    

    The approach is completely accurate and considerably faster in comparison to an approach where decal casting would be incorporated into the raycasting step.
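
For completeness, the whole pass described above fits together roughly as sketched below (sub/add/scale, decal.alongDir and decal.width are stand-ins for my actual vector helpers and decal fields; which endpoint counts as "left" is decided from the camera's perspective, as noted above):

// at decal instantiation: endpoints of the decal on the wall face, in grid coordinates
// (decal.pos is the middle of the grid face, decal.alongDir a unit vector along that face)
decal.leftDrawStart  = decal.pos.sub(decal.alongDir.scale(decal.width / 2));
decal.rightStartDraw = decal.pos.add(decal.alongDir.scale(decal.width / 2));

// per frame: project both endpoints to screen space with the transform() above
CAMERA.transform(decal.leftDrawStart);
let drawStartX_abs = Math.floor((RAYCAST.SCREEN_WIDTH / 2) * (1 + CAMERA.transformX / CAMERA.transformDepth));
CAMERA.transform(decal.rightStartDraw);
let drawEndX_abs = Math.floor((RAYCAST.SCREEN_WIDTH / 2) * (1 + CAMERA.transformX / CAMERA.transformDepth));
// (a check that CAMERA.transformDepth > 0 is also needed, for points behind the camera plane)

// clamp to the screen, bail out if the decal is completely offscreen
if (drawEndX_abs < 0 || drawStartX_abs >= RAYCAST.SCREEN_WIDTH) return;
let drawStartX = Math.max(0, drawStartX_abs);
let drawEndX = Math.min(RAYCAST.SCREEN_WIDTH - 1, drawEndX_abs);

for (let stripe = drawStartX; stripe <= drawEndX; stripe++) {
    let texX = (((stripe - drawStartX_abs) / (drawEndX_abs - drawStartX_abs)) * imageData.width) | 0;
    // ... draw the stripe, using H-BUFFER for the height and GRID_BUFFER for masking as before
}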