I am quite new to shaders and can't get this seemingly easy task working.
I created a ColorRect node with a new shader material to simulate a fog layer in my top-down 2D game. Everything is working fine: the fog drifts over the screen.
What I want to achieve: when I move the camera, the fog currently moves with it, which is wrong. The shader pixels should stay at the same world position. The ColorRect is attached to the camera because I have an infinite map and therefore don't know how big to scale the rect.
I think I have to translate the UV coordinates inside the shader when the camera moves? Or am I wrong? I hope you can help me.
I already played around with the existing MATRIX_VERTEX and CANVAS_MATRIX, without any result.
Here is the fog shader I am using at the moment.
shader_type canvas_item;
// Amount of detail.
uniform int octaves = 4;
// Opacity of the output fog.
uniform float starting_amplitude: hint_range(0.0, 0.5) = 0.5;
// Rate of pattern within the fog.
uniform float starting_frequency = 1.0;
// Shift towards transparency (clamped) for sparser fog.
uniform float shift: hint_range(-1.0, 0.0) = -0.2;
// Direction and speed of travel.
uniform vec2 velocity = vec2(1.0, 1.0);
// Color of the fog.
uniform vec4 fog_color: source_color = vec4(0.0, 0.0, 0.0, 1.0);
// Noise texture; OpenSimplexNoise is great, but any filtered texture is fine.
uniform sampler2D noise: repeat_enable;
// Layered noise: sums several octaves of the noise texture with halving amplitude.
float rand(vec2 uv) {
    float amplitude = starting_amplitude;
    float frequency = starting_frequency;
    float output = 0.0;
    for (int i = 0; i < octaves; i++) {
        output += texture(noise, uv * frequency).x * amplitude;
        amplitude /= 2.0;
        frequency *= 2.0;
    }
    return clamp(output + shift, 0.0, 1.0);
}

void fragment() {
    vec2 motion = vec2(rand(UV + TIME * starting_frequency * velocity));
    COLOR = mix(vec4(0.0), fog_color, rand(SCREEN_UV + motion));
}
Setting the displacement from script
This is as simple as I have been able to get the shader:
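Roughly, the idea looks like this (the uniform names displacement and scale are the ones I describe below; the fragment is the fog shader from the question with fixed_uv in place of UV and SCREEN_UV):

// Same shader_type, uniforms and rand() as the shader in the question, plus:

// Set from script: the current displacement of the view, in texture sizes.
uniform vec2 displacement = vec2(0.0, 0.0);
// Set from script: viewport_size / texture_size, so the UV term matches the displacement units.
uniform vec2 scale = vec2(1.0, 1.0);

void fragment() {
    vec2 fixed_uv = displacement + scale * UV + TIME * velocity;
    vec2 motion = vec2(rand(fixed_uv));
    COLOR = mix(vec4(0.0), fog_color, rand(fixed_uv + motion));
}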
Here fixed_uv is doing the trick. It has three terms:
- displacement, which will be the displacement in the world.
- scale * UV, which gives us the texture offset, scaled to match the displacement units. I'll get back to that.
- TIME * velocity, which is the offset based on time.
So, to make it work, we need to put the current displacement in displacement, which we will do with a script attached to the ColorRect that looks like this:
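A minimal sketch of that script (here I take center to be the camera's screen center in world coordinates, and I assume the rect's material is the ShaderMaterial above and that there is a current Camera2D):

extends ColorRect

func _process(_delta: float) -> void:
    var viewport := get_viewport()
    var viewport_size: Vector2 = viewport.get_visible_rect().size
    var shader_material := material as ShaderMaterial
    # Size of the noise texture, used to express everything in "texture sizes".
    var texture_size: Vector2 = (shader_material.get_shader_parameter("noise") as Texture2D).get_size()
    # Center of the screen in world coordinates.
    var center: Vector2 = viewport.get_camera_2d().get_screen_center_position()
    shader_material.set_shader_parameter("displacement", center / texture_size)
    shader_material.set_shader_parameter("scale", viewport_size / texture_size)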
Note that this code assumes you cannot cache anything. However, you probably can pre-fetch the viewport and the texture_size in _enter_tree, for example. Also, you could update viewport_size only when it resizes (by connecting to the size_changed signal of the viewport) and update global_position only when the camera moves (see set_notify_transform).
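For example, a cached variant could look roughly like this (a sketch; it ignores edge cases such as the material or camera not being ready yet):

extends ColorRect

var _viewport: Viewport
var _viewport_size: Vector2
var _texture_size: Vector2

func _enter_tree() -> void:
    _viewport = get_viewport()
    _viewport_size = _viewport.get_visible_rect().size
    _viewport.size_changed.connect(_on_viewport_size_changed)
    _texture_size = ((material as ShaderMaterial).get_shader_parameter("noise") as Texture2D).get_size()
    # Ask for NOTIFICATION_TRANSFORM_CHANGED; the rect is a child of the camera,
    # so this fires when the camera moves.
    set_notify_transform(true)

func _exit_tree() -> void:
    _viewport.size_changed.disconnect(_on_viewport_size_changed)

func _on_viewport_size_changed() -> void:
    _viewport_size = _viewport.get_visible_rect().size
    _update_uniforms()

func _notification(what: int) -> void:
    if what == NOTIFICATION_TRANSFORM_CHANGED:
        _update_uniforms()

func _update_uniforms() -> void:
    var center: Vector2 = _viewport.get_camera_2d().get_screen_center_position()
    var shader_material := material as ShaderMaterial
    shader_material.set_shader_parameter("displacement", center / _texture_size)
    shader_material.set_shader_parameter("scale", _viewport_size / _texture_size)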
Anyway, as you know, UV coordinates exist in a space that goes from 0.0 to 1.0, and the shader relies on the texture repeating seamlessly beyond that range. As a result, if we just give the shader the displacement in pixels, we won't notice anything.
Instead, we are going to use the size of the texture: we give the displacement in texture sizes, which is why I'm passing center / texture_size to the shader.

Consequently, the terms displacement and scale * UV would not be in the same units (and you would notice things drift as you move, similar to a parallax). To fix that, we set scale to viewport_size / texture_size. With that value you should observe no drifting of the world relative to the shader (tweak it if you want).

This also means that you can specify the velocity in the same units (texture sizes), so a velocity of (1.0, 0.0) scrolls horizontally by the size of the texture in one second (tweak that to what you want). You can further mangle the fixed_uv for effect (e.g. using rand).

Can we make the shader work without script input?
We have a couple of options to get the position of the fragment in screen space: FRAGCOORD.xy or VERTEX, but I don't see a way to convert them to world coordinates. Thus, we are going to get the world coordinates in the vertex shader and pass them to the fragment shader.
After much, much, much experimentation, this is how we have to do it (or at least this sticks with the world; I'm not sure if it is offset):
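In Godot 4 the vertex part can be a varying written in vertex(); MODEL_MATRIX transforms the vertex from the node's local space into world (canvas) space, so something along these lines:

varying vec2 world_position;

void vertex() {
    // MODEL_MATRIX maps local vertex coordinates into world (canvas) coordinates.
    world_position = (MODEL_MATRIX * vec4(VERTEX, 0.0, 1.0)).xy;
}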
The problem is that we were taking a single offset from the center coordinates, and now we have coordinates for each fragment… The solution eluded me for a while, but once I saw it, it made sense: don't use UV. Since we are already getting coordinates that are different for each fragment, we don't need UV. So here is the version that does not require updating uniforms from code:
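Keeping the uniforms and rand() from the question's shader, together with the varying and vertex() above, the fragment becomes something like this (textureSize is used so the texture size does not have to be passed in from code; you could instead expose it as a uniform set once in the inspector):

void fragment() {
    // Express the world position in texture sizes so the repeating noise lines up,
    // then scroll it over time.
    vec2 texture_size = vec2(textureSize(noise, 0));
    vec2 fixed_uv = world_position / texture_size + TIME * velocity;
    vec2 motion = vec2(rand(fixed_uv));
    COLOR = mix(vec4(0.0), fog_color, rand(fixed_uv + motion));
}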
However, we still need a script to set the global_position and size:
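Something like this, assuming the Camera2D is current and not rotated:

extends ColorRect

func _process(_delta: float) -> void:
    var camera := get_viewport().get_camera_2d()
    # The area of the world the camera shows (zoom > 1 means zoomed in).
    var view_size: Vector2 = get_viewport().get_visible_rect().size / camera.zoom
    global_position = camera.get_screen_center_position() - view_size * 0.5
    size = view_size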
Can we get rid of the script entirely?
Yes. The reason we have to set the global_position and size is that the ColorRect does not really follow the Camera2D (instead, it is rendered ignoring the Camera2D). We can change the type from ColorRect to Sprite2D, which, as a child of the Camera2D, will actually follow it. And for the size, all you need to do is give it a PlaceholderTexture2D large enough to cover the screen.

But should you?
Set your Camera2D to use drag margins and you will see it does not behave correctly. The reason is that the position of the Camera2D no longer matches the center of the screen, and thus the Sprite2D gets out of alignment with it. The solution would be to use a script to place the Sprite2D using get_screen_center_position, which looks like the script I was using before.
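That script can be as small as this (assuming the Sprite2D is centered and its texture covers the screen):

extends Sprite2D

func _process(_delta: float) -> void:
    # Follow what the camera actually shows, which also works with drag margins.
    global_position = get_viewport().get_camera_2d().get_screen_center_position()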