I've wanted to do a proper analysis of the engine behind The Legend of Zelda: Breath of the Wild for a long time, but I've never had the time. Now that the Switch has a video-capture function, I figured it was a great time to revisit the game and share my thoughts through clips I uploaded to Twitter. I'll start with a summary of my findings, then detail each technical feature at the end of the article to keep things easy to follow. I'll also try to avoid repetition: outlets like Digital Foundry have already analyzed some characteristics of the engine, and I won't cover those here. The purpose of this post is to expose more people to technical achievements of the game that others haven't bothered to investigate. With that said, here is a summary of the engine's features:

• Global Illumination / Radiosity
• Local Reflections (calculated along the Fresnel term)
• Physically-based Rendering
• Emissive Materials / Area Lights
• Screen Space Ambient Occlusion
• Dynamic Wind Simulation System
• Real-time Cloud Formation (affected by wind)
• Rayleigh / Mie Scattering
• Full Volumetric Lighting
• Bokeh DOF and approximation of the Circle of Confusion
• Sky Occlusion and Dynamic Shadow Volumes
• Aperture-based Lens Flares
• Subsurface Scattering
• Dynamically Localized Lightning Illumination
• Per-pixel Sky Irradiance
• Fog Inscattering
• Particle Lights
• Puddle Formation and Evaporation

Global Illumination / Radiosity

First of all, I want to point out that all so-called real-time global illumination systems are faked in one way or another.
Radiosity, then, is an overall approximation of how light bounces between surfaces, transferring color information from one surface to another in the process. The more bounces of reflected light that are calculated, the more accurate the transfer of light energy and color. Breath of the Wild's engine uses light probes to gather color information from the surfaces near each probe in the environment; there is no full simulation of bounced light down to the base colors of a given region. The exact algorithm Breath of the Wild uses to compute this isn't clear, but my guess is spherical harmonics or something similar, based on how the colors are averaged and how the light-energy transfer is localized. Unlike in Super Mario Odyssey, the light transfer in Breath of the Wild is not binary but gradated. The bounce-lighting information also appears to be integrated into the LOD system at the same level of the rendering pipeline, which makes it extremely efficient.
Observation tip: Notice how the rocky cliffs pick up green tones from the grass as the camera approaches the area.
At first, I assumed spherical harmonic probes might be placed throughout the environment to collect color samples, because Link seems to update to the local base color as he moves through the environment. After further testing, however, I now believe those base-color bounces were simply due to a lack of color variation in the environment. When I tested the global illumination in an area with many adjacent colored surfaces, how the system operates became clear. Notice how color is transferred onto every surface of Link facing the red wall when he approaches it. The same goes for the green wall opposite the red wall (although the effect is weaker there, because the probe is closer to the red wall and the red wall's color bounces more strongly). In fact, at any moment this happens in all directions: the floor transmits its color upward, and any ceiling or colored surface just above Link's head transmits its color downward. The probe samples and transmits colors dynamically (we can assume this stands in for bounced light), picking up new colors and re-sampling as Link moves. Eventually the result stops changing, because the sample closest to the probe carries the dominant color regardless of further color shifts. The process is tidy but very local and fast: the probe has a limited sampling range and applies its results to materials in world space. Because of this efficiency, the probe can simulate the effect of many bounces, but only the area closest to the probe looks accurate.
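To make the probe idea concrete, here is a minimal sketch of how a light probe could accumulate directional color samples into first-order spherical harmonics and then hand back a bounce tint for any surface direction. This is my own illustration of the general technique I suspect is in play, not Nintendo's actual code; every name and constant here is an assumption.

```python
import math

# First-order (L1) spherical-harmonic basis: 4 coefficients per color channel.
SH_C0 = 0.282095  # Y_0^0
SH_C1 = 0.488603  # Y_1^-1, Y_1^0, Y_1^1

def sh_basis(d):
    """Evaluate the four L1 SH basis functions for a unit direction d = (x, y, z)."""
    x, y, z = d
    return (SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x)

def project_samples(samples):
    """Accumulate (direction, rgb) environment samples into 4 SH coefficients
    per channel, Monte Carlo weighted over the sphere."""
    coeffs = [[0.0, 0.0, 0.0] for _ in range(4)]
    weight = 4.0 * math.pi / max(len(samples), 1)
    for d, rgb in samples:
        basis = sh_basis(d)
        for i in range(4):
            for c in range(3):
                coeffs[i][c] += rgb[c] * basis[i] * weight
    return coeffs

def evaluate(coeffs, d):
    """Reconstruct the bounce color arriving from direction d (clamped at zero,
    since L1 SH can ring slightly negative)."""
    basis = sh_basis(d)
    return tuple(max(sum(coeffs[i][c] * basis[i] for i in range(4)), 0.0)
                 for c in range(3))

# A red wall toward +x and a green wall toward -x, as in the test area above.
probe = project_samples([((1, 0, 0), (0.8, 0.1, 0.1)),
                         ((-1, 0, 0), (0.1, 0.6, 0.1))])
# Surfaces facing the red wall are tinted red; the opposite side picks up green.
```

Note how a handful of coefficients reproduces the observed behavior: the tint facing each wall is dominated by that wall's color, and a stronger or closer sample simply dominates the average, just like the red wall does in the GIFs.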
This is a very important discovery. (Other materials are "dyed" red near the red wall; they are likewise "dyed" green near the green wall.) The global illumination approximates multiple bounces: a light probe at Link's head samples the colors of most surrounding materials, and each sampled color is then bounced back in the opposite direction. Interestingly, the intensity appears to be driven both by which surface is closest to the probe and by the intensity of the bounced light. This may not be obvious out in the open, but the global illumination looks great wherever there are several adjacent surfaces.

Local Reflections

An area that had bothered me ever since I started analyzing the game is local reflections. There were so many apparent inconsistencies that my early theories kept falling apart. Now I can say with confidence that I've solved the mystery of how local reflections work. It is clearly a three-pronged approach, applied depending on specific circumstances.

Specular Lighting

Sunlight, skylight, lightning, and point light sources all fall into this category. At first I thought the same was true of the shrines and towers (since they are self-illuminating, I assumed they were area light sources), but this was ruled out once I saw the very revealing artifacts the shrines and towers exhibit. Not all emissive materials can illuminate the environment, and the shrines and towers belong to those that cannot.

Aperture Mapping

This term may sound new to you, and it may well be. Based on the game's text dump, the Breath of the Wild developers appear to have built their own take on Unreal Engine 4's 2D scene-capture reflections. The environment is reflected this way: a virtual camera above Link's head (the "aperture", specifically) has a relatively small field of view.
So as Link moves, the reflections (displayed in real time) shift within their own space until the aperture captures the environment again. You can see this behavior traced out in the video below.

Screen Space Reflections

Only materials that look laminated use this model, and they are limited to the shrines. A value in the gloss map tells the engine to use screen-space reflections exclusively for these materials. They reflect everything on screen and can be seen at grazing angles on any such material. However, these materials also use aperture mapping to reflect the environment, which was one source of my confusion: the incongruity of these reflections led me to wrong hypotheses about materials outside the shrines. Fortunately, that question is now cleared up.

Observation tip: Compare Link's reflection with that of the blue light. Link must be on screen for his reflection to appear, but the blue light does not need to be on screen for its reflection to appear. (Screen-space reflection + specular highlights.) The mystery of local reflections, solved! (The walls facing the camera do not reflect, unlike the side walls.) The shrine materials have an extra layer of gloss and reflection, but they also use the same reflection model as exterior materials. No wonder it was so confusing! Glossy materials capture a reflection of everything on screen (screen-space reflection). Non-glossy materials (almost all exterior materials) capture 2D scene-capture reflections using nearly the same technique Unreal Engine 4 uses for ambient reflection capture.
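The three-pronged scheme described above can be summarized as a tiny dispatch: every material gets aperture-mapped reflections, and a gloss-map value additionally opts shrine materials into screen-space reflections. This is purely my illustration of the logic I'm inferring; the threshold value and names are made up.

```python
GLOSS_SSR_THRESHOLD = 0.8  # hypothetical cutoff stored in the gloss map

def reflection_models(gloss):
    """Return which reflection passes a material would receive under the
    three-pronged scheme inferred above (specular highlights from global
    lights apply on top of either path)."""
    models = ["aperture_mapping"]      # every reflective material gets this
    if gloss >= GLOSS_SSR_THRESHOLD:
        models.append("screen_space")  # laminate-like shrine materials only
    return models

shrine_floor = reflection_models(0.9)  # both passes: hence my earlier confusion
lake_water = reflection_models(0.2)    # aperture mapping only
```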
Essentially, the virtual capture camera (with its own field of view) sits directly above Link's head, always facing the same horizon direction as the main camera regardless of Link's orientation (which limits off-screen reflections). The captured image is then fed into reflective materials, as if broadcasting a live feed onto a television. This means the image stream is projected in real time at whatever frame rate the game runs (30 fps), so different elements of the material can update without waiting for a new capture. However, the actual capture is refreshed at a much lower rate (4 to 5 fps). You can see this whenever the capture camera moves from its absolute position: before the reflection is refreshed, the currently captured image inside a material (e.g., water) slides in the direction of the camera's movement in real time (30 fps). Once the material receives the updated capture, the reflection is corrected. This correction interval is what lets us measure the capture update rate on a material (4 to 5 fps). (The reflection of the bridge column lags slightly.) As you can see here, the stale reflection still follows Link's movement smoothly; there is no cut. The reflection is then corrected when the new capture arrives. This works differently from a reflection map, which only updates its reflection when the map itself is updated. At this point the captured reflection is clearly stale, but its position still updates at 30 fps. You can see the capture camera's field of view in the following GIFs (since there is no reflected color on the material beyond the edge of the capture camera's view). It also makes sense that all non-emissive materials only show Fresnel reflections: with this reflection technique, those are the only angles that work!
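The decoupled cadence described above — the stored capture refreshed at 4–5 fps while its on-screen position is reprojected every frame at 30 fps — can be sketched as a small update loop. The class, field names, and interval constant are all my assumptions for illustration, not the game's actual implementation.

```python
from dataclasses import dataclass

CAPTURE_INTERVAL = 6  # ~5 capture refreshes/sec inside a 30 fps game loop (my estimate)

@dataclass
class ReflectionCapture:
    texture_frame: int = -1  # game frame at which the stored capture was taken
    camera_yaw: float = 0.0  # camera orientation baked into the stored capture

    def tick(self, frame, main_camera_yaw):
        """Per-frame update: refresh the capture only every Nth frame;
        otherwise shift the stale image by how far the camera has turned."""
        if frame % CAPTURE_INTERVAL == 0:
            # Expensive path: re-render the scene from above Link's head.
            self.texture_frame = frame
            self.camera_yaw = main_camera_yaw
            offset = 0.0  # freshly corrected, no drift
        else:
            # Cheap path: reproject the stale capture. This is why reflections
            # drift smoothly for a few frames and then "snap" correct.
            offset = main_camera_yaw - self.camera_yaw
        return offset
```

Running this loop reproduces the observed behavior: between refreshes the reflection's offset grows with camera movement, and on every sixth frame it snaps back to zero when the new capture lands.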
I came across this arch and realized it was the ideal setup for measuring the capture camera's field of view. Let's do some basic trigonometry: with the arch's half-width w and the camera's distance d at the moment the reflection clips out, the horizontal field of view is roughly 2·atan(w/d). I estimate it at about 115°. The arch's reflection leaves the capture before Link passes under it, so we know it is definitely not a 180° field of view; otherwise the arch's reflection would not produce a visual error like this. You can also see that when the camera is a few feet from the arch and perpendicular to it, the reflection is skewed in proportion to the field of view, which lets us gauge its width relative to the captured scene. I want to stress that this is only a rough estimate, so I may be off by 10 degrees or so, but a much wider field of view is impossible given these angles. At minimum, we have a working estimate.

Physically-based Rendering

Before anyone asks: no, it does not mean "materials that look physically correct". It is simply a way of building a 3D rendering pipeline in which every material (textured surface) interacts with light in its own way, changing the light's behavior. That is what happens in the real world, which is why it's called physically-based rendering. Different materials cause light to behave differently, which is how we visually distinguish surfaces. Traditionally, rendering pipelines relied on the artist's understanding of how light interacts with different real-world materials, with texture maps authored from that understanding. As a result, there were many inconsistencies between textured surfaces and their real-world counterparts (which is understandable, since we can't expect an artist to have encyclopedic knowledge of everything in the real world).
With PBR, the fundamentals of light physics are an integral part of the pipeline, and every textured surface is classified by unique properties that drive how light behaves on it. This lets surfaces be placed under different lighting conditions and camera angles while the light's interaction with them adapts dynamically; artists no longer have to pre-define that interaction as in traditional workflows. Everything is automatic. Because of PBR's effectiveness, developers have wanted to build games where all materials have unique, light-affecting qualities. Breath of the Wild applies its PBR with an artistic touch, so you might not even notice the engine is built on such a pipeline, since the textures are not meant to look realistic. It is clear, however, from the BRDF (bidirectional reflectance distribution function) applied to materials that the engine qualifies as PBR. For every dynamic light, the specular highlights (where the light source itself is mirrored on a reflective surface) and the specular reflectivity are generated dynamically, based on the angle of incidence (the angle of the incoming light relative to the surface normal) and the refractive index of the material (how much the material "bends" and reflects light on contact). In a traditional pipeline, there is not much difference between the specular highlights assigned to wood and to metal. In this game, specular reflections depend entirely on the material the light is interacting with. Another key sign that Breath of the Wild uses PBR is the Fresnel (the "s" is silent) reflection on all materials. Most games using traditional pipelines don't even bother with Fresnel reflection; it is better handled by simply using PBR.
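Fresnel behavior in real-time PBR is commonly modeled with Schlick's approximation, so here is a minimal sketch of it. I'm not claiming Breath of the Wild uses this exact formula; the F0 values are typical textbook numbers, not measurements from the game.

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the normal
               (1.0 = looking straight at the surface, 0.0 = fully grazing).
    f0: reflectance at normal incidence (typically ~0.04 for dielectrics
        like water, much higher for metals)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

water_head_on = schlick_fresnel(1.0, 0.04)  # looking straight down: ~4%
water_grazing = schlick_fresnel(0.0, 0.04)  # grazing angle: reflectance hits 100%
```

This is exactly the behavior described in the next section: every material climbs toward full reflectivity at grazing angles, while F0 (and the material's roughness, which this sketch omits) determines how the reflection reads head-on.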
As I explained earlier when discussing local reflections, Fresnel reflection becomes visible at grazing angles (where the incoming light is almost parallel to the surface the observer/camera is looking at). According to the Fresnel equations, all materials approach 100% reflectivity at grazing angles, but how that reflectivity reads depends on the roughness of the material. This is how programmers can distinguish reflectivity from the refractive index. Some materials scatter light in all directions (diffuse materials): even at 100% reflectivity, all of the light may leave the surface, but not in the same direction, so the light is evenly spread and you see no specular reflection (no mirror image of the surroundings). Other materials reflect incoming light only along the mirror direction (specular materials), so you only see the reflection at matching angles; at grazing angles, nearly all of the light is reflected that way. Diffuse and specular reflectivity: a material's reflectivity is not always 100% even at a grazing angle, which is why no material shows a perfect specular reflection at grazing angles, even in the real world. The clarity of the Fresnel reflection varies with the material producing it.

Observation tip: Notice how the green light on the barrel's wood looks the same from all angles, while the same green light shifts in the reflection on the metal hoops (the metal rings around the barrel).

Emissive Materials / Area Lights

This one is easy. Emissive materials provide a unique light source that illuminates the environment in the same shape as the material itself. They are not point lights radiating in all directions, nor simple directional lights shining one way. It is important to note that only the global sources (sun / moon / lightning) cast shadows.
However, the bidirectional reflectance distribution function still applies to every light source in the game.

Observation tip: Notice the shape of the light projected by the sword. It matches the shape of the sword itself, while the intensity of the light depends on the distance between the sword and the illuminated surface.

Screen Space Ambient Occlusion

In the real world, light bouncing around the environment produces a certain amount of "ambient light" that tints everything and is completely diffuse. If a shadow is the product of an object blocking direct sunlight, then ambient occlusion can be thought of as the product of geometry blocking ambient light in the environment. The scheme used in Breath of the Wild is SSAO (screen-space ambient occlusion), so called because it computes the occlusion in screen space; it is view-dependent, so surfaces only receive the effect relative to the camera's viewpoint.

Observation tip: From the front, look for the dark, noisy shadowing in the crevices of the walls. The same noise pattern traces Link's silhouette from this angle.

Dynamic Wind Simulation System

This one surprised me; I did not expect it to be so robust. Essentially, the physics system is tied into a wind simulation system. It is fully dynamic and affects different objects according to their weight. The most visible subjects are the grass and the procedurally generated clouds.

Observation tip: Look closely and you can see how the directional flow of the grass and the clouds matches the direction of the wind.

Real-time Cloud Formation

This game does not use a traditional skybox. The clouds are procedurally generated according to parameters set by the engine. They cast shadows in real time, and they receive lighting information based on the position of the sun in the sky.
As far as I can tell, the clouds are treated as actual materials in the game. They are not volumetric clouds, so you don't see light shafts breaking through them or anything like that, but they are not a skybox either. They are also shaped by the wind system.

Observation tip: Notice how the cloud particles in the sky cluster randomly.

Rayleigh / Mie Scattering

In the real world, when light reaches the Earth's atmosphere it is scattered by air molecules, producing the blue sky, because the shorter blue wavelengths scatter more readily than other colors. As the sun approaches the horizon, however, its light must pass through a larger slice of the atmosphere, so most of the blue light has scattered away before it reaches the viewer's eye, leaving the longer orange and red wavelengths visible. Breath of the Wild approximates this mathematically (I had spotted the algorithm in the text-dump code earlier this year!). Apparently the same algorithm also accounts for the Mie scattering that gives the sky its haze. To be honest, if I hadn't seen the code in the text dump, I would never have guessed they simulate this phenomenon in-game; it is an easy enough effect to fake. After watching the sky's reflection in the water, though, it made sense: the scattered light bounces into the environment in real time, and a simple skybox would make that impossible.

Observation tip: Notice how the different shades of orange and red in the sky cast the same colors onto the environment. Although it isn't obvious in the GIF, the scattered skylight also tints the environment and the water surface in other colors, depending on how the light is scattered.

Observation tip: Notice how the color of the snow changes at sunset.

Observation tip: At the start of this GIF, the water shows at least five distinct reflections: the shrine (blue), the hill (green), the flag (black outline), the sky (orange), and the sun (pink).
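The wavelength dependence behind Rayleigh scattering is simple enough to sketch: scattering strength falls off as the fourth power of wavelength, which is exactly why blue skies turn orange at the horizon. This is the textbook relation, not the game's actual code; the reference wavelength is my arbitrary choice.

```python
def rayleigh_relative(wavelength_nm, reference_nm=550.0):
    """Relative Rayleigh scattering strength: intensity scales as 1/lambda^4,
    normalized here against green light at 550 nm (my chosen reference)."""
    return (reference_nm / wavelength_nm) ** 4

blue = rayleigh_relative(450.0)  # blue scatters over twice as strongly as green
red = rayleigh_relative(650.0)   # red scatters only about half as strongly
```

The asymmetry is what any Rayleigh approximation, including whatever Breath of the Wild ships, has to capture: with the sun low, the long atmospheric path strips the strongly scattered blue out of the direct beam, leaving red and orange.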