I'm implementing dynamic field of view. I decided to use shaders to improve the look of the illumination and of how it affects the walls. Here is the scenario I'm working on: https://i.sstatic.net/by8Bd.jpg
I have a map with a flat floor and walls. Everything here is 2D; there is no 3D geometry, only 2D polygons that compose the walls.
Using the vertices of the polygons, I cast shadows to define the viewable area. (The purple lines are part of the mask I use in the next step.)
By applying the shader when drawing the shadows on top of the scenario, I keep the walls from being obscured as well.
This way the shadows are cast dynamically along the walls as the field of view changes.
I have used the following shader to achieve this, but I feel it is overkill and really inefficient:
uniform sampler2D texture;        // the scene being drawn
uniform sampler2D filterTexture;  // the wall mask (red/blue channels)
uniform vec2 textureSize;
uniform float cellSize;
uniform sampler2D shadowTexture;  // note: declared but never sampled below

void main()
{
    vec2 position;
    vec4 filterPixel;
    vec4 shadowPixel;
    vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);

    // Walk down the column below this fragment, one texel per step.
    for (float i = 0.0; i <= cellSize * 2.0; i++)
    {
        position = gl_TexCoord[0].xy;
        position.y -= i / textureSize.y;
        filterPixel = texture2D(filterTexture, position);

        position.y -= 1.0 / textureSize.y;
        shadowPixel = texture2D(texture, position);

        // A vec4 can't be compared to a scalar; compare to vec4(0.0).
        if (shadowPixel == vec4(0.0))
        {
            if (filterPixel.r == 1.0)
            {
                pixel.a = 0.0;   // hide this fragment
                break;
            }
            else if (i <= cellSize && filterPixel.r == 1.0)
            {
                pixel.a = 0.0;
                break;
            }
        }
    }
    gl_FragColor = pixel;
}
Iterating like this for each fragment just to look for the red-colored pixel in the mask seems like a huge overhead, but I fail to see how to complete this task in any other way using shaders.
The solution here is really quite simple: use shadow maps.
Your situation may be 2D instead of 3D, but the basic concept is the same. You want to "shadow" areas based on whether there is an obstructive surface between some point in the world and a "light source" (in your case, the player character).
In 3D, shadow maps work by rendering the world from the perspective of the light source. This results in a 2D texture where the values represent the depth from the light (in a particular direction) to the nearest obstruction. When you render the scene for real, you check the current fragment's location by projecting it into the 2D depth texture (the shadow map). If the depth value you compute for the current fragment is closer than the nearest obstruction in the projected location in the shadow map, then the fragment is visible from the light. If not, then it isn't.
Your 2D version would have to do the same thing, only with one less dimension. You render your 2D world from the perspective of the "light source". Your 2D world in this case is really just the obstructing quads (you'll have to render them with line polygon filling). Any quads that obstruct sight should be rendered into the shadow map. Texture accesses are completely unnecessary; the only information you need is depth. Your shader doesn't even have to write a color. You render these objects by projecting the 2D space into a 1D texture.
This would look something like this:
X..X
XXXXXXXX..XXXXXXXXXXXXXXXXXXXX
X.............\.../..........X
X..............\./...........X
X...............C............X
X............../.\...........X
X............./...\..........X
X............/.....\.........X
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
C is the character's position; the dots are just regular, unobstructing floor. The Xs are the walls. The lines from C represent the four directions you need to render the 2D lines from.
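To make one of those renders concrete, here is a CPU-side sketch in Python (the names `build_shadow_map` and `visible`, the 90-degree frustum, and the map size are my own choices for illustration, not any engine's API). It "renders" the obstructing segments into a single 1D depth map for the +y direction by sampling points along each segment, projecting 2D into 1D with `u = x / y`, and keeping the nearest depth per texel; a point is then visible if its own depth is not behind the stored one:

```python
# Illustrative sketch: a 1D shadow map for the +y direction, character at origin.
MAP_SIZE = 64
FAR = 1000.0

def build_shadow_map(segments, samples_per_segment=256):
    """Rasterize wall segments into a 1D depth buffer (nearest y per texel)."""
    depth = [FAR] * MAP_SIZE
    for (x0, y0), (x1, y1) in segments:
        for i in range(samples_per_segment + 1):
            t = i / samples_per_segment
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            if y <= 0.0:
                continue                  # behind the "camera"
            u = x / y                     # 2D -> 1D perspective projection
            if not -1.0 <= u <= 1.0:
                continue                  # outside the 90-degree frustum
            texel = min(MAP_SIZE - 1, int((u + 1.0) * 0.5 * MAP_SIZE))
            depth[texel] = min(depth[texel], y)
    return depth

def visible(point, depth, bias=1e-3):
    """True if nothing in the map obstructs the point; None if out of quadrant."""
    x, y = point
    if y <= 0.0 or abs(x) > y:
        return None
    texel = min(MAP_SIZE - 1, int((x / y + 1.0) * 0.5 * MAP_SIZE))
    return y <= depth[texel] + bias

# One wall straight ahead of the character:
shadow_map = build_shadow_map([((-5.0, 10.0), (5.0, 10.0))])
print(visible((0.0, 5.0), shadow_map))   # in front of the wall -> True
print(visible((0.0, 20.0), shadow_map))  # behind the wall -> False
```

On the GPU you would of course let the rasterizer do the sampling loop for you; the sketch only shows the projection and the depth-keep rule.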
In 3D, to do shadow mapping for point lights, you have to render the scene 6 times, in 6 different directions into the faces of a cube shadow map. In 2D, you have to render the scene 4 times, in 4 different directions into 4 different 1D shadow maps. You can use a 1D array texture for this.
Once you have your shadow maps, you just use them in your shader to detect when a fragment is visible. To do that, you'll need a set of transforms from window space into the 4 different projections that represent the 4 directions of view that you rendered into. Only one of these will be used for any particular fragment, based on where the fragment is relative to the target.
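That per-fragment lookup can be sketched in Python as well (the map layout, the index order 0=+x, 1=+y, 2=-x, 3=-y, and the `rotate_into_plus_y` helper are assumptions for the example, not a fixed convention): pick the map by which quadrant the fragment falls in relative to the character, rotate the offset into a canonical "+y" frame, and do the depth compare there:

```python
MAP_SIZE = 64

def rotate_into_plus_y(dx, dy, direction):
    # 90-degree rotations so the chosen direction becomes +y.
    if direction == 1: return dx, dy        # +y
    if direction == 0: return -dy, dx       # +x
    if direction == 2: return dy, -dx       # -x
    return -dx, -dy                         # -y

def is_visible(px, py, cx, cy, maps, bias=1e-3):
    """maps: four 1D depth buffers, one per direction around (cx, cy)."""
    dx, dy = px - cx, py - cy
    if dx == dy == 0:
        return True
    # Quadrant selection: the dominant axis picks the map.
    direction = (0 if dx >= 0 else 2) if abs(dx) >= abs(dy) else (1 if dy >= 0 else 3)
    x, y = rotate_into_plus_y(dx, dy, direction)
    texel = min(MAP_SIZE - 1, int((x / y + 1.0) * 0.5 * MAP_SIZE))
    return y <= maps[direction][texel] + bias

# With empty maps (no obstructions), everything is visible:
maps = [[1000.0] * MAP_SIZE for _ in range(4)]
print(is_visible(5.0, 0.0, 0.0, 0.0, maps))  # -> True
```

In the real shader, the rotation and projection would be the "set of transforms" mentioned above, baked into matrices and selected per fragment.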
To implement this, I'd start with just getting a simple case of directional "shadowing" to work. That is, don't use a position; just a direction for a "light". That will test your ability to develop a 2D-to-1D projection matrix, as well as an appropriate camera-space matrix to transform your world-space quads into camera space. Once you have mastered that, then you can get to work doing it 4 times with different projections.
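For that simpler directional starting point, the whole pipeline fits in a few lines. This is a hedged Python sketch, not a shader: it assumes a "light" shining straight down -y and an orthographic 2D-to-1D projection, where each texel along x stores the highest obstruction in that column, and a point is lit if nothing sits above it (all names are my own):

```python
# Directional case: orthographic 2D -> 1D shadow map, light pointing down -y.
MAP_SIZE = 32
WORLD_MIN_X, WORLD_MAX_X = 0.0, 32.0

def to_texel(x):
    u = (x - WORLD_MIN_X) / (WORLD_MAX_X - WORLD_MIN_X)
    return min(MAP_SIZE - 1, max(0, int(u * MAP_SIZE)))

def build_directional_map(segments, samples=256):
    """Per column, keep the obstruction nearest the light (largest y)."""
    top = [float("-inf")] * MAP_SIZE
    for (x0, y0), (x1, y1) in segments:
        for i in range(samples + 1):
            t = i / samples
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            tx = to_texel(x)
            top[tx] = max(top[tx], y)
    return top

def lit(x, y, top, bias=1e-3):
    return y + bias >= top[to_texel(x)]

top = build_directional_map([((10.0, 20.0), (14.0, 20.0))])
print(lit(12.0, 25.0, top))  # above the wall -> True (lit)
print(lit(12.0, 5.0, top))   # below the wall -> False (shadowed)
```

Once this works, the point-light version is the same idea with the orthographic projection swapped for the perspective one, done four times.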