
Armored Warfare: Armata Project is a free-to-play online tank action game developed by Allods Team, a MY.GAMES studio. Although the game is built on CryEngine, a fairly popular engine with a good real-time renderer, we had to modify a lot and build a lot from scratch for our game. In this article I want to talk about what chromatic aberration is and how we implemented it for the first-person view.

What is chromatic aberration?

Chromatic aberration is a lens defect in which not all colors arrive at the same point. It happens because the refractive index of the medium depends on the wavelength of light (see dispersion). For example, this is what a lens that does not suffer from chromatic aberration looks like:

ITKarma picture

And here is a defective lens:

ITKarma picture

By the way, the situation above is called longitudinal (or axial) chromatic aberration. It occurs when different wavelengths fail to converge at the same point in the focal plane after passing through the lens. The defect is then visible across the entire picture:

ITKarma picture

In the picture above you can see the purple and green fringes caused by the defect. Can't see them? What about in this picture?

ITKarma picture

There is also lateral (or transverse) chromatic aberration. It occurs when light enters the lens at an angle; as a result, different wavelengths reach the focal plane at different points. Here is a diagram to make it clear:

ITKarma picture

You can already see from the diagram that we end up with a full decomposition of light from red to violet. Unlike the longitudinal kind, lateral chromatic aberration never shows up in the center of the image, only toward the edges. To show what I mean, here is another picture from the Internet:

ITKarma picture
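Before leaving the theory, the wavelength dependence above can be made quantitative with a tiny sketch using Cauchy's empirical equation n(λ) = A + B/λ². The coefficients below are ballpark textbook values for a BK7-like crown glass, chosen purely for illustration:

```python
# Sketch: why colors focus at different points. The refractive index
# depends on wavelength (dispersion); Cauchy's empirical equation
# n(lambda) = A + B / lambda^2 models it. A and B here are ballpark
# values for a BK7-like crown glass, not exact material data.

A = 1.5046           # dimensionless
B = 0.00420          # in micrometers^2

def refractive_index(wavelength_nm):
    """Cauchy approximation; wavelength given in nanometers."""
    wavelength_um = wavelength_nm / 1000.0
    return A + B / wavelength_um ** 2

# Violet light (380 nm) is refracted more strongly than red light
# (780 nm), so a simple lens focuses it closer to the lens -- the root
# cause of longitudinal chromatic aberration.
print(refractive_index(380) > refractive_index(780))  # True
```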

Well, since we are done with theory, let's get to the point.

Lateral chromatic aberration taking into account the decomposition of light

I will start by answering the question many of you probably have in mind: "Does CryEngine really not have chromatic aberration implemented already?" It does. But it is applied at the post-processing stage, in the same shader as sharpening, and the algorithm looks like this (link to code):

```hlsl
screenColor.r = shScreenTex.SampleLevel( shPointClampSampler, (IN.baseTC.xy - 0.5) * (1 + 2 * psParams[0].x) + 0.5, 0.0f ).r;
screenColor.b = shScreenTex.SampleLevel( shPointClampSampler, (IN.baseTC.xy - 0.5) * (1 - 2 * psParams[0].x) + 0.5, 0.0f ).b;
```

Which basically works. But ours is a game about tanks: we need this effect only in first-person view, and purely for beauty, which means everything near the center must stay in focus (hello, lateral aberration). So the existing implementation did not suit us, if only because its effect was visible across the entire image.

This is what the aberration itself looked like (attention to the left side):

ITKarma picture

And this is how it looked with the parameters cranked up:

ITKarma picture

Therefore, we set ourselves the following goals:

  1. Implement lateral chromatic aberration so that everything near the sight stays in focus, and the sides, if they do not show the characteristic color fringing, are at least blurred.
  2. Sample the texture by multiplying the RGB channels by coefficients corresponding to specific wavelengths. I have not talked about this yet, so this item may not be completely clear for now, but we will go through it in full detail later.

To get started, consider the general mechanism and code for creating lateral chromatic aberration.

```hlsl
half distanceStrength = pow(length(IN.baseTC - 0.5), falloff);
half2 direction = normalize(IN.baseTC.xy - 0.5);
half2 velocity = direction * blur * distanceStrength;
```

So, first a radial mask is built that encodes the distance from the center of the screen, then the direction away from the center is computed, and the two are multiplied together with `blur`. `blur` and `falloff` are parameters passed in from outside that simply act as tuning factors for the aberration. The `sampleCount` parameter is also passed in from outside; it controls not only the number of samples but, in effect, also the step between sampling points, since

```hlsl
half2 offsetDecrement = velocity * stepMultiplier / half(sampleCount);
```
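Taken together, the two snippets above translate to Python as follows. This is a sketch: the function name `offset_decrement` and the default values for `blur`, `falloff`, `step_multiplier` and `sample_count` are illustrative, not the game's actual settings.

```python
# Python transcription of the HLSL mask/step computation above.
# base_tc is the pixel's texture coordinate in [0, 1]^2; blur, falloff
# and step_multiplier mirror the externally supplied shader parameters
# (default values here are illustrative).
import math

def offset_decrement(base_tc, blur=0.01, falloff=2.0,
                     step_multiplier=1.0, sample_count=3):
    dx, dy = base_tc[0] - 0.5, base_tc[1] - 0.5
    dist = math.hypot(dx, dy)
    if dist == 0.0:                       # screen center: no aberration
        return (0.0, 0.0)
    distance_strength = dist ** falloff   # radial mask
    direction = (dx / dist, dy / dist)    # unit vector away from center
    velocity = (direction[0] * blur * distance_strength,
                direction[1] * blur * distance_strength)
    return (velocity[0] * step_multiplier / sample_count,
            velocity[1] * step_multiplier / sample_count)

# The step vanishes at the center and grows toward the image corners.
print(offset_decrement((0.5, 0.5)), offset_decrement((1.0, 1.0)))
```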

Now we just have to step `sampleCount` times from the given point in the texture, shifting by `offsetDecrement` each time, multiplying the channels by the corresponding wavelength weights, and finally dividing by the sum of those weights. Well, it's time to talk about the second item of our global goal.

The visible spectrum of light lies in the wavelength range from 380 nm (violet) to 780 nm (red). And, lo and behold, a wavelength can be converted to an RGB triple. In Python, the code that does this magic looks like this:

```python
def get_color(waveLength):
    if waveLength >= 380 and waveLength < 440:
        red = -(waveLength - 440.0) / (440.0 - 380.0)
        green = 0.0
        blue = 1.0
    elif waveLength >= 440 and waveLength < 490:
        red = 0.0
        green = (waveLength - 440.0) / (490.0 - 440.0)
        blue = 1.0
    elif waveLength >= 490 and waveLength < 510:
        red = 0.0
        green = 1.0
        blue = -(waveLength - 510.0) / (510.0 - 490.0)
    elif waveLength >= 510 and waveLength < 580:
        red = (waveLength - 510.0) / (580.0 - 510.0)
        green = 1.0
        blue = 0.0
    elif waveLength >= 580 and waveLength < 645:
        red = 1.0
        green = -(waveLength - 645.0) / (645.0 - 580.0)
        blue = 0.0
    elif waveLength >= 645 and waveLength < 781:
        red = 1.0
        green = 0.0
        blue = 0.0
    else:
        red = 0.0
        green = 0.0
        blue = 0.0

    factor = 0.0
    if waveLength >= 380 and waveLength < 420:
        factor = 0.3 + 0.7 * (waveLength - 380.0) / (420.0 - 380.0)
    elif waveLength >= 420 and waveLength < 701:
        factor = 1.0
    elif waveLength >= 701 and waveLength < 781:
        factor = 0.3 + 0.7 * (780.0 - waveLength) / (780.0 - 700.0)

    gamma = 0.80
    R = (red * factor) ** gamma if red > 0 else 0
    G = (green * factor) ** gamma if green > 0 else 0
    B = (blue * factor) ** gamma if blue > 0 else 0
    return R, G, B
```

As a result, we get the following color distribution:

ITKarma picture

In short, the graph shows how much of each color a wave of a specific length contains. The y-axis gives exactly the weights I mentioned earlier. Now we can fully implement the algorithm, taking into account what we agreed on above:

```hlsl
half3 accumulator = (half3) 0;
half2 offset = (half2) 0;
half3 WeightSum = (half3) 0;
half3 Weight = (half3) 0;
half3 color;
half waveLength;

for (int i = 0; i < sampleCount; i++)
{
    waveLength = lerp(startWaveLength, endWaveLength, (half)(i) / (sampleCount - 1.0));
    Weight.r = GetRedWeight(waveLength);
    Weight.g = GetGreenWeight(waveLength);
    Weight.b = GetBlueWeight(waveLength);

    offset -= offsetDecrement;
    color = tex2Dlod(baseMap, half4(IN.baseTC + offset, 0, 0)).rgb;

    accumulator.rgb += color.rgb * Weight.rgb;
    WeightSum.rgb += Weight.rgb;
}

OUT.Color = half4(accumulator.rgb / WeightSum.rgb, 1.0);
```

That is, the idea is that the larger `sampleCount` is, the smaller the step between our sampling points and the finer we disperse the light (we take more distinct wavelengths into account).
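To see the whole loop working outside the engine, here is a small CPU sketch of the same accumulation run on a 1-D scanline. It is a simplified stand-in: the triangle weight function below replaces the fitted polynomials, and all names and values are illustrative.

```python
# A minimal CPU sketch of the accumulation loop, run on a 1-D scanline
# instead of a 2-D texture. The weight function is a deliberately
# simplified triangle ramp (blue -> green -> red), NOT the shader's
# fitted Get*Weight polynomials; the loop itself mirrors the HLSL:
# march sample_count times, accumulate weighted samples, normalize by
# the per-channel weight sums.

def weights(t):
    """t in [0, 1] along the sampled spectrum; returns (r, g, b)."""
    return (t, 1.0 - abs(2.0 * t - 1.0), 1.0 - t)

def sample(scanline, x):
    """Nearest-neighbour texture fetch with edge clamping, x in [0, 1]."""
    i = int(round(x * (len(scanline) - 1)))
    return scanline[min(max(i, 0), len(scanline) - 1)]

def aberrate_pixel(scanline, base_tc, offset_decrement, sample_count=5):
    accumulator = [0.0, 0.0, 0.0]
    weight_sum = [0.0, 0.0, 0.0]
    offset = 0.0
    for i in range(sample_count):
        t = i / (sample_count - 1.0)      # lerp position in the spectrum
        w = weights(t)
        offset -= offset_decrement        # step away from the pixel
        color = sample(scanline, base_tc + offset)
        for c in range(3):
            accumulator[c] += color[c] * w[c]
            weight_sum[c] += w[c]
    return tuple(accumulator[c] / weight_sum[c] for c in range(3))

# A white stripe on black: near the stripe's edge the channels separate,
# producing a colored fringe instead of a clean black-to-white step.
scanline = [(0.0, 0.0, 0.0)] * 8 + [(1.0, 1.0, 1.0)] * 8
print(aberrate_pixel(scanline, base_tc=0.4, offset_decrement=-0.05))
```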

If it is still not clear, let's look at a specific example, namely our first attempt, and I will explain what to take for `startWaveLength` and `endWaveLength`, and how the `GetRedWeight`, `GetGreenWeight` and `GetBlueWeight` functions are implemented.

Approximation of the entire range of the visible spectrum

So, from the graph above we know the approximate RGB values for each wavelength. For example, for a wavelength of 380 nm (violet) we read off roughly RGB(0.4, 0, 0.4) (see the same graph). These are exactly the values we take as the weights I mentioned earlier.

Now let's try to replace the exact color computation with a fourth-degree polynomial so that it is cheaper (we are not Pixar, we are a game studio: the cheaper the computation, the better). This fourth-degree polynomial should approximate the curves obtained above. To build it I used the NumPy/SciPy stack:

```python
wave_arange = numpy.arange(380, 780, 0.001)
red_poly = numpy.polynomial.Polynomial.fit(wave_arange, red, 4)
```

As a result we obtain the following (I split it into 3 separate graphs, one per channel, to make it easier to compare with the exact values):

ITKarma picture

ITKarma picture

ITKarma picture
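The fitting step can be reproduced end to end. The sketch below rebuilds the red-channel weights from the `get_color` listing above and fits a fourth-degree polynomial with `numpy.polynomial.Polynomial.fit`; the variable names are illustrative, and the exact error figure depends on the sampling step.

```python
# Sketch of the curve-fitting step. We regenerate the red channel of the
# wavelength-to-RGB conversion (same piecewise ramps, intensity factor
# and gamma as the Python listing above) and fit a degree-4 polynomial.
import numpy

def red_weight(wl):
    if 380 <= wl < 440:
        red = -(wl - 440.0) / (440.0 - 380.0)
    elif 510 <= wl < 580:
        red = (wl - 510.0) / (580.0 - 510.0)
    elif 580 <= wl < 781:
        red = 1.0
    else:
        red = 0.0                                  # 440-510 nm and out of range
    if 380 <= wl < 420:
        factor = 0.3 + 0.7 * (wl - 380.0) / (420.0 - 380.0)
    elif 420 <= wl < 701:
        factor = 1.0
    else:
        factor = 0.3 + 0.7 * (780.0 - wl) / (780.0 - 700.0)
    return (red * factor) ** 0.8 if red > 0 else 0.0

wave_arange = numpy.arange(380.0, 780.0, 1.0)
red_weights = numpy.array([red_weight(wl) for wl in wave_arange])
# Polynomial.fit maps the x range onto [-1, 1] internally -- the same
# kind of remapped variable that shows up inside the shader functions.
red_poly = numpy.polynomial.Polynomial.fit(wave_arange, red_weights, 4)

# A single degree-4 polynomial struggles with this bumpy curve,
# especially near the edges of the range -- which is why the range is
# narrowed later in the article.
max_error = float(numpy.abs(red_poly(wave_arange) - red_weights).max())
print(len(red_poly.coef), round(max_error, 3))
```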

In order for the values not to leave the interval [0, 1], we use the `saturate` function. For red, for example, we get the following function:

```hlsl
half GetRedWeight(half x)
{
    return saturate(0.8004883122689207
                    + 1.3673160565954385 * (-2.9000047500568042 + 0.005000012500149485 * x)
                    - 1.244631137356407 * pow(-2.9000047500568042 + 0.005000012500149485 * x, 2)
                    - 1.6053230172845554 * pow(-2.9000047500568042 + 0.005000012500149485 * x, 3)
                    + 1.055933936470091 * pow(-2.9000047500568042 + 0.005000012500149485 * x, 4));
}
```
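As a sanity check, the shader function above can be transcribed to Python, with HLSL's `saturate` written out as a clamp. Note how the inner expression maps the 380-780 nm range onto roughly [-1, 1]:

```python
# The shader's GetRedWeight, transcribed to Python for checking.
# saturate(x) is HLSL's clamp to [0, 1].

def saturate(x):
    return min(max(x, 0.0), 1.0)

def get_red_weight(x):
    # t maps the wavelength range [380, 780] nm onto roughly [-1, 1]
    t = -2.9000047500568042 + 0.005000012500149485 * x
    return saturate(0.8004883122689207
                    + 1.3673160565954385 * t
                    - 1.244631137356407 * t ** 2
                    - 1.6053230172845554 * t ** 3
                    + 1.055933936470091 * t ** 4)

# Noticeable weight at the red end of the spectrum, near zero in the
# blue region around 440 nm.
print(round(get_red_weight(780), 3), round(get_red_weight(440), 3))
```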

The parameters `startWaveLength` and `endWaveLength` that I have not yet specified are, in this case, 780 nm and 380 nm, respectively. The practical result with `sampleCount = 3` is the following (look at the edges of the picture):

ITKarma picture

If you crank up the values and increase `sampleCount` to 400, everything gets better:

ITKarma picture

Unfortunately, we have a real-time renderer that cannot afford 400 samples in one shader (more like 3-4). So we slightly narrowed the wavelength range.

Approximation by part of the visible spectrum range

Let's take a range such that it still ends in pure red and pure blue. We also drop the red tail on the left, since it skews the resulting polynomial too strongly. We end up with the distribution on the segment [440, 670]:

ITKarma picture

There is also no need to interpolate over the entire segment; now we only need a polynomial for the section where the value actually changes. For red, for example, this is the segment [510, 580], where the weight goes from 0 to 1. Here a second-degree polynomial is enough, and its result can likewise be clamped to [0, 1] by the `saturate` function. For all three colors, with `saturate` applied, we get the following result:

ITKarma picture

As a result, for red, for example, we get the following polynomial:

```hlsl
half GetRedWeight(half x)
{
    return saturate(0.5764348105166407
                    + 0.4761860550080825 * (-15.571636738012254 + 0.0285718367412005 * x)
                    - 0.06265740390367036 * pow(-15.571636738012254 + 0.0285718367412005 * x, 2));
}
```
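Transcribed to Python the same way, the narrowed-range quadratic can be checked against the exact gamma-corrected ramp ((x - 510) / 70) ** 0.8 that the red weight follows on [510, 580] in the wavelength-to-RGB code:

```python
# The narrowed-range GetRedWeight, transcribed to Python for checking.

def saturate(x):
    return min(max(x, 0.0), 1.0)

def get_red_weight(x):
    # t maps the fitted segment [510, 580] nm onto roughly [-1, 1]
    t = -15.571636738012254 + 0.0285718367412005 * x
    return saturate(0.5764348105166407
                    + 0.4761860550080825 * t
                    - 0.06265740390367036 * t ** 2)

# At the segment midpoint (545 nm) the quadratic should sit close to
# the exact gamma-corrected ramp value.
reference = ((545.0 - 510.0) / 70.0) ** 0.8
print(round(get_red_weight(545), 3), round(reference, 3))
```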

But in practice, with `sampleCount = 3`:

ITKarma picture

In this case, with the settings cranked up, we get roughly the same result as when sampling the entire range of the visible spectrum:

ITKarma picture

Thus, with second-degree polynomials we got a good result in the wavelength range from 440 nm to 670 nm.


Besides replacing the exact computation with polynomials, you can also optimize the shader by relying on the very mechanism our lateral chromatic aberration is built on: skip the computation wherever the total offset never leaves the current pixel. In that region we would just resample the same pixel over and over and get it back unchanged.

It looks something like this:

```hlsl
bool isNotAberrated = abs(offsetDecrement.x * g_VS_ScreenSize.x) < 1.0
                   && abs(offsetDecrement.y * g_VS_ScreenSize.y) < 1.0;
if (isNotAberrated)
{
    OUT.Color.rgb = tex2Dlod(baseMap, half4(IN.baseTC, 0, 0)).rgb;
    return OUT;
}
```
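The same early-out test in Python (the screen resolution here is illustrative): multiplying the per-step UV offset by the viewport size gives the step in pixels, and below one pixel every sample would land on the same texel.

```python
# Python version of the shader's early-out check. screen_size plays the
# role of g_VS_ScreenSize.xy (viewport resolution in pixels); the
# 1920x1080 value below is just an example.

def is_not_aberrated(offset_decrement, screen_size):
    return (abs(offset_decrement[0] * screen_size[0]) < 1.0 and
            abs(offset_decrement[1] * screen_size[1]) < 1.0)

screen = (1920.0, 1080.0)
print(is_not_aberrated((0.0001, 0.0001), screen))  # sub-pixel step: skip
print(is_not_aberrated((0.01, 0.0), screen))       # ~19-pixel step: aberrate
```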

The optimization is small, but we are very proud of it.


The lateral chromatic aberration itself looks very cool, and in the center the defect does not get in the way of the sight. The idea of decomposing light into wavelength weights is a very interesting experiment, and it can produce a completely different picture if your engine or game can afford more than three samples. In our case we decided not to bother inventing a different algorithm, since even with the optimizations we cannot afford many samples, and the difference between, say, 3 and 5 samples is barely visible. You can experiment with the described method yourself and see the results.