So I’m currently learning about shadows, e.g. CSM and shadow bias. Although it seems UE5 has already fixed the shadow acne issue to a certain extent, and it can be avoided completely by using static lighting, I think it’s meaningful to know what the flaw was.
I came up with a method (actually a misunderstanding) to avoid shadow acne (the repeated black lines). I thought standard shadow mapping transformed the view from the light source into world space by inverting the matrix, and then baked it onto objects, as if objects were receiving ‘shadow maps’.
Yeah, I now understand that it compares depth values. But then I was wondering whether my imagined approach was practicable.
So my conclusion is as follows:
My main purpose is to kill the repeated black lines, i.e. to fill the gap caused by a bias of 1. I want to manually calculate the shadow at the screen position in a post-process material. I don’t know if there’s already one in UE5; I didn’t find it. I call my method ‘screen space shadow compensation’, as it’s designed to fill up that gap.
Suppose there’s a random point in world space, which I’ll call the ‘starter’. And there’s another point near it, named the ‘ender’. By computing ender - starter we can define the direction of a light. And given the direction and a point, we know the equation of that light ray:
float3 Arpoi;     // an arbitrary point on the ray
float3 Starter;   // the point the ray passes through
float3 Direction; // the light direction vector (unit length)
float t;          // parameter: how far along the ray we are
Arpoi = Starter + t * Direction;
And the next thing we should consider is that the pixels we are calculating are captured from the player’s view. That means we can see occluded pixels that are invisible from the light’s view.
We get the current pixel’s world position, and use the equation above to calculate the intersection coordinate with the plane that is perpendicular to the light direction (which implies an orthographic view, i.e. a directional light rather than a point light).
The line belonging to the current pixel starts from its world position point.
We can find the intersection coordinate by solving for the t value that makes the line hit the plane that is guaranteed to contain the light’s starter point and to be perpendicular to the light’s direction, which plays the role of the plane’s normal.
Think about it: every beam of directional light is parallel. So we can simply use the light direction as our unit vector to know how far the line must travel from the world space starter point to reach the intersection plane.
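Here is a minimal sketch of that ray-plane intersection in HLSL, following the equation above (ProjectOntoLightPlane, PixelWS and LightStarter are names I made up for illustration, and LightDir is assumed to be normalized):

float3 ProjectOntoLightPlane(float3 PixelWS, float3 LightStarter, float3 LightDir)
{
    // Plane: contains LightStarter, with LightDir as its normal.
    // Ray: P(t) = PixelWS + t * LightDir.
    // Solve dot(P(t) - LightStarter, LightDir) = 0 for t.
    float t = dot(LightStarter - PixelWS, LightDir);
    return PixelWS + t * LightDir; // the intersection coordinate
}

Note that this t is also the pixel’s signed distance to the plane along the light direction, which is exactly the ‘depth’ compared later.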
And then we know the identifying feature of each pixel: its intersection coordinate on the plane. In other words, we transform the world position into the light’s view space, so that the light can know which two pixels overlap, and do a distance (depth) test to mark the farther pixel as black.
Because all points on a plane have no depth difference relative to the plane (depth = how far a point extends into the space off the plane), we can probably reduce them to a float2. I’m bad at math, so I didn’t design this process.
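Since I didn’t design that step, here is one possible construction, purely my own assumption: build two axes that lie on the plane and project onto them, which turns the 3D intersection point into a float2 key.

float2 PlaneCoord(float3 PlanePoint, float3 LightStarter, float3 LightDir)
{
    // Pick any vector that isn't parallel to LightDir to build a basis.
    float3 Up = (abs(LightDir.z) < 0.99) ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 AxisU = normalize(cross(Up, LightDir)); // first axis on the plane
    float3 AxisV = cross(LightDir, AxisU);         // second axis on the plane
    float3 Rel = PlanePoint - LightStarter;        // offset within the plane
    return float2(dot(Rel, AxisU), dot(Rel, AxisV));
}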
And then:
There should be a frame buffer. It plays the role of an array: conceptually the value is simply true or false, but in fact we use the intersection coordinate as the index/key and store the world space starter as the value for further comparison. The default color of the buffer is black, and once the buffer pixel at a given index is found, it gets painted with the float3 world space coordinate belonging to that index.
To elaborate:
We can multiply the float2 (see where I mentioned my bad math) by 0.001 to squash it into the 0~1 range. (Any value greater than 1000 still ends up above 1; I wonder if we can clamp such values absolutely, e.g. by normalizing a vector of (TheValueToBeClamped, 1, 1).)
Once it is clamped, we can treat it as a UV coordinate and find the corresponding pixel in our frame buffer. A material output is effectively a float3, and I’m trying my best to avoid directly comparing two float3s computed at different times, and to avoid looping.
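As a sketch, and assuming the scene fits inside ±1000 units around the starter point (WorldExtent is my hypothetical parameter, matching the 0.001 factor above), the mapping to a UV could look like this:

float2 PlaneCoordToUV(float2 Key, float WorldExtent) // e.g. WorldExtent = 1000
{
    float2 UV = Key / WorldExtent;   // roughly -1..1 if the scene fits
    return saturate(UV * 0.5 + 0.5); // remap into 0..1 and hard-clamp
}

The saturate() is one answer to the ‘absolute clamp’ question above, although anything outside the extent collapses onto the border instead of getting a unique key.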
You just wrote the WS data into the buffer; later you fetch the data using the same UV and, surprisingly, find it isn’t black. So you compare which one (the stored and the current) has the larger distance, and discard it.

Lighting mask:
Any pixel in the frame buffer found to overlap with another should do a ‘distance test’ to see which one is smaller, and replace the content with the smaller one’s WS data, so the buffer keeps the point closer to the light. But whether small or big, both pixels are visible to us, for we are viewing them from the player’s location, where no culling should occur.
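Putting the pieces together, the distance test could look like the sketch below, reusing the helpers above. ShadowKeyBuffer, Samp and the 1000-unit extent are all assumptions of mine, and the buffer is assumed to already hold the WS starter written earlier:

float3 DistanceTest(float3 PixelWS, float3 LightStarter, float3 LightDir,
                    Texture2D<float4> ShadowKeyBuffer, SamplerState Samp)
{
    float3 OnPlane = ProjectOntoLightPlane(PixelWS, LightStarter, LightDir);
    float2 Key = PlaneCoord(OnPlane, LightStarter, LightDir);
    float2 UV = PlaneCoordToUV(Key, 1000.0);

    float3 StoredWS = ShadowKeyBuffer.SampleLevel(Samp, UV, 0).xyz;
    if (all(StoredWS == 0.0))
        return PixelWS; // cell is still black: no overlap, current pixel wins

    // Depth = signed distance from the light plane along the light direction;
    // the smaller depth is the point closer to the light, and it keeps the cell.
    float StoredDepth = dot(StoredWS - LightStarter, LightDir);
    float CurrentDepth = dot(PixelWS - LightStarter, LightDir);
    return (CurrentDepth < StoredDepth) ? PixelWS : StoredWS;
}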
The only difference is that we are going to set the current pixel to:
1. initialized black;
2. +1 if we are running over it;
3. +1 if it is overlapped and closer to the camera (smaller);
4. -1 if it is overlapped and farther from the camera (bigger).
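This is my reading of those four rules as a tiny sketch; bOverlapped and bCloser would come from the distance test above, and the exact meaning of ‘closer’ here is my assumption:

float MaskValue(bool bOverlapped, bool bCloser)
{
    float Mask = 0.0;                 // 1. initialized black
    Mask += 1.0;                      // 2. +1: we are running over this pixel
    if (bOverlapped)
        Mask += bCloser ? 1.0 : -1.0; // 3./4. result of the distance test
    return Mask;                      // 2 = lit, 1 = no overlap, 0 = shadowed
}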
We can’t change another pixel’s color while rendering an independent pixel, but we can push our own pixel’s value far enough to leave the previous one behind.
Once we have the lighting mask, the game is over: we have shadows now. We can blur the edges if the shadow is too hard.
Now the question is: can we write data into a custom buffer in the material editor? I see the SceneTexture node has lots of custom inputs. Can we use them?