I am quite new to shaders and can't get this simple task working. I created a ColorRect object and a new shader material to simulate a fog layer in my top-down 2D game. Everything is working fine; the fog is drifting over the screen.
What I want to achieve: when I move the camera, the fog moves with the camera, but this is wrong. The shader pixels should stay at the same world position. The ColorRect is attached to the camera, because I have an infinite map and therefore don't know how big to scale the rect.
I think I have to translate the UV coordinates on movement within the shader? Or am I wrong? I hope you can help me.
I already played around with the existing MATRIX_VERTEX and CANVAS_MATRIX, without any result.
Here is the fog shader I am using at the moment.
shader_type canvas_item;

// Amount of detail.
uniform int octaves = 4;
// Opacity of the output fog.
uniform float starting_amplitude: hint_range(0.0, 0.5) = 0.5;
// Rate of pattern within the fog.
uniform float starting_frequency = 1.0;
// Shift towards transparency (clamped) for sparser fog.
uniform float shift: hint_range(-1.0, 0.0) = -0.2;
// Direction and speed of travel.
uniform vec2 velocity = vec2(1.0, 1.0);
// Color of the fog.
uniform vec4 fog_color: source_color = vec4(0.0, 0.0, 0.0, 1.0);
// Noise texture; OpenSimplexNoise is great, but any filtered texture is fine.
uniform sampler2D noise: repeat_enable;

float rand(vec2 uv) {
    float amplitude = starting_amplitude;
    float frequency = starting_frequency;
    float output = 0.0;
    for (int i = 0; i < octaves; i++) {
        output += texture(noise, uv * frequency).x * amplitude;
        amplitude /= 2.0;
        frequency *= 2.0;
    }
    return clamp(output + shift, 0.0, 1.0);
}

void fragment() {
    vec2 motion = vec2(rand(UV + TIME * starting_frequency * velocity));
    COLOR = mix(vec4(0.0), fog_color, rand(SCREEN_UV + motion));
}
This is as simple as I have been able to get the shader:
shader_type canvas_item;

uniform vec2 displacement = vec2(0.0, 0.0);
uniform vec2 scale = vec2(1.0, 1.0);
uniform vec2 velocity = vec2(1.0, 1.0);
uniform vec4 fog_color: source_color = vec4(0.0, 0.0, 0.0, 1.0);
uniform sampler2D noise: repeat_enable;

void fragment() {
    vec2 fixed_uv = displacement + scale * UV + TIME * velocity;
    float fog = texture(noise, fixed_uv).r;
    COLOR = mix(vec4(0.0), fog_color, fog);
}
Here fixed_uv is doing the trick. It has three terms:

- displacement, which will be the displacement in the world.
- scale * UV, which gives us the texture offset, scaled to match the displacement units. I'll get back to that.
- TIME * velocity, which is the offset based on time.

So, to make it work we need to put the current displacement in displacement, which we will do with a script attached to the ColorRect that looks like this:
extends ColorRect

func _process(_delta: float) -> void:
    var viewport = get_viewport()
    var texture_size = (material.get("shader_parameter/noise") as Texture2D).get_size()
    var center := viewport.get_camera_2d().get_screen_center_position()
    var viewport_size := viewport.get_visible_rect().size
    global_position = Vector2(-viewport_size / 2.0) + center
    size = viewport_size
    material.set("shader_parameter/scale", viewport_size / texture_size)
    material.set("shader_parameter/displacement", center / texture_size)
Note that this code assumes you cannot cache anything. However, you can probably pre-fetch the viewport and the texture_size in _enter_tree, for example. Also, you could update viewport_size only when it resizes (connecting to the size_changed signal of the viewport) and update global_position only when the camera moves (see set_notify_transform).
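For illustration, a cached variant along those lines might look like this (an untested sketch; it assumes the same node setup and shader parameters as the script above):

```gdscript
extends ColorRect

# Cached references, fetched once instead of every frame.
var _viewport: Viewport
var _texture_size: Vector2
var _viewport_size: Vector2

func _enter_tree() -> void:
    _viewport = get_viewport()
    _texture_size = (material.get("shader_parameter/noise") as Texture2D).get_size()
    _viewport.size_changed.connect(_on_viewport_resized)
    _on_viewport_resized()

func _on_viewport_resized() -> void:
    # Only recompute the size-dependent values when the viewport changes.
    _viewport_size = _viewport.get_visible_rect().size
    size = _viewport_size
    material.set("shader_parameter/scale", _viewport_size / _texture_size)

func _process(_delta: float) -> void:
    # Still per-frame: the camera can move every frame.
    var center := _viewport.get_camera_2d().get_screen_center_position()
    global_position = center - _viewport_size / 2.0
    material.set("shader_parameter/displacement", center / _texture_size)
```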
Anyway, as you know, the UV coordinates exist in a space that goes from 0.0 to 1.0, and the shader relies on the texture repeating seamlessly beyond that range. As a result, if we just gave the shader the displacement in pixels, we wouldn't notice anything.
Instead, we are going to use the size of the texture: we give the displacement in texture sizes, which is why I'm passing center / texture_size to the shader.
Consequently, the terms displacement and scale * UV would not be in the same units (and you would notice the fog drift as you move, similar to a parallax). To fix that, we set scale to viewport_size / texture_size. With that value you should observe no drifting of the world relative to the shader (tweak it if you want).
This also means that you can specify velocity in the same units (texture sizes). So a velocity of (1.0, 0.0) scrolls horizontally by the size of the texture every second (tweak that to what you want).
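If you want to convince yourself of the units argument, you can mirror the shader math on the CPU: for a fixed world point, displacement + scale * UV comes out the same no matter where the camera is. A sketch (the numbers and function name are made up for this check):

```gdscript
# Mirrors fixed_uv = displacement + scale * UV (ignoring TIME) for one world
# point, for two different camera centers; the printed values should match.
func check_units() -> void:
    var texture_size := Vector2(512, 512)
    var viewport_size := Vector2(1152, 648)
    var world_point := Vector2(300, -100)
    for center in [Vector2.ZERO, Vector2(1000, 250)]:
        var rect_origin: Vector2 = center - viewport_size / 2.0
        var uv: Vector2 = (world_point - rect_origin) / viewport_size
        var fixed_uv: Vector2 = center / texture_size + (viewport_size / texture_size) * uv
        print(fixed_uv)  # Same value for both camera centers.
```

The center terms cancel algebraically, which is exactly why the fog sticks to the world.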
You can further mangle the fixed_uv for effect (e.g. using rand). However, I would encourage you to consider whether you can avoid updating uniforms from code at all, which brings us to the next approach.
We have a couple of options to get the position of the fragment in screen space: FRAGCOORD.xy or VERTEX, but I don't see a way to convert them to world coordinates.
Thus, we are going to get the world coordinates in the vertex shader and pass them to the fragment shader.
After much, much, much experimentation, this is how we have to do it (or at least this sticks with the world; I'm not sure if it is offset):
shader_type canvas_item;
render_mode skip_vertex_transform;

varying vec2 coord;

void vertex() {
    coord = (SCREEN_MATRIX * inverse(CANVAS_MATRIX) * vec4(VERTEX, 0.0, 1.0)).xy;
    VERTEX = (MODEL_MATRIX * vec4(VERTEX, 0.0, 1.0)).xy;
}

void fragment() {
    COLOR = vec4(-coord, 0.0, 1.0);
}
The problem is that we were taking a single offset from the center coordinates, and now we have coordinates for each fragment… The solution eluded me for a while, but once I saw it, it made sense: don't use UV. Since we are already getting coordinates that are different for each fragment, we don't need UV.
So here is the version that does not require updating uniforms from code:
shader_type canvas_item;
render_mode skip_vertex_transform;

uniform vec2 velocity = vec2(1.0, 1.0);
uniform vec4 fog_color: source_color = vec4(0.0, 0.0, 0.0, 1.0);
uniform sampler2D noise: repeat_enable;

varying vec2 coord;

void vertex() {
    coord = (SCREEN_MATRIX * inverse(CANVAS_MATRIX) * vec4(VERTEX, 0.0, 1.0)).xy;
    VERTEX = (MODEL_MATRIX * vec4(VERTEX, 0.0, 1.0)).xy;
}

void fragment() {
    vec2 fixed_uv = -coord + TIME * velocity;
    float fog = texture(noise, fixed_uv).r;
    COLOR = mix(vec4(0.0), fog_color, fog);
}
However, we still need a script to set the global_position and size:
extends ColorRect

func _process(_delta: float) -> void:
    var viewport = get_viewport()
    var center := viewport.get_camera_2d().get_screen_center_position()
    var viewport_size := viewport.get_visible_rect().size
    global_position = Vector2(-viewport_size / 2.0) + center
    size = viewport_size
Yes, we still need it. The reason we have to set the global_position and size is that the ColorRect does not really follow the Camera2D (instead, it is rendered ignoring the Camera2D).
We can change the type from ColorRect to Sprite2D, which, as a child of the Camera2D, will actually follow it. And for the size, all you need to do is give it a PlaceholderTexture2D large enough to cover the screen.
But should you? Set your Camera2D to use drag margins and you will see it does not behave correctly. The reason is that the position of the Camera2D no longer matches the center of the screen, and thus the Sprite2D gets out of alignment with it.
The solution would be to use a script to place the Sprite2D using get_screen_center_position, which looks like the script I was using before.
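For completeness, such a script might look like this (a sketch under the same assumptions; the Sprite2D carries the fog material and a texture large enough to cover the screen):

```gdscript
extends Sprite2D

func _process(_delta: float) -> void:
    # Keep the sprite centered on what the camera actually shows, which can
    # differ from the Camera2D position when drag margins are enabled.
    global_position = get_viewport().get_camera_2d().get_screen_center_position()
```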