
Improving Visuals: Adding Life to Still Scenery Using Shaders

Posted in Game Development

Intro

This article is about improving the visuals of Raft Challenge by applying GPU shaders to its still scenery. It explains the algorithms and the potential pitfalls of using GLSL in SpriteKit. The reader should have basic experience writing fragment shaders (covered in the previous article).

 

The problem

 

 

After the game entered the beta stage, we received feedback from various people. We frequently heard that the graphics were good but static, which in the long run led to boredom. My instant reaction was: "They said it's static? Then we'll add some wind to move the whole thing!" After that, we thought more about the problem. Objects as huge as whole trees cannot be animated frame by frame, because that would lead to memory problems. We considered adding small animated objects like animals, but that would complicate the scene graph even further and would have an unknown performance impact.

The solution I came up with was to animate the whole forest using fragment shaders. I wanted to create a wind effect. The idea was to apply a horizontal distortion to the sprite's texture, with strength proportional to the distance from the trunks' base. That strength also changes over time and is influenced by the scene's "depth". Other pros of this solution:

  • easy integration (as simple as filling in an existing object's properties)
  • performance
  • huge flexibility

 

Here’s the source (GLSL):
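
A minimal sketch of that shader, matching the numbered notes below — the constants, including the trunks' base position, are illustrative placeholders rather than the game's actual values, and attrDepth stands for the per-layer depth attribute described later:

    void main() {
        float trunksBase = 0.35;                                   // 1. vertical position of the trunks' bases (texture-specific)
        float dist = v_tex_coord.y - trunksBase;                   // 2. distance from the base; less than 1.0, can be negative
        float maxDivergence = dist * 0.02;                         // 3. max divergence (magic number tweaked by trial and error)
        float factor = sin(u_time * 1.8 + attrDepth * 0.7);        // 4. changing strength and direction of the wind
        vec2 delta = vec2(maxDivergence * factor, 0.0);            // 5. X gets the offset, Y stays at 0
        gl_FragColor = texture2D(u_texture, v_tex_coord + delta);  // 6. sample the texture at the shifted position
    }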

 

 

  1. This float holds the vertical position of all the trunks' bases. This value is specific to our texture.
  2. We calculate the distance between the current sampling point and the above value. Note that this value is less than 1.0 and can be negative.
  3. We calculate the maximum divergence. The magic number at the end was tweaked through trial and error.
  4. We calculate the changing strength and direction of the wind. The sine function is a good foundation since it returns predictable values (-1 to 1) and is continuous. The latter means that we can feed it any garbage as an argument and it will still work. "The garbage" in this case is the current time plus the "depth" of the current sprite (the concept of depth will be described later). Magic numbers are added to shape the animation.
  5. The delta vector is created. The max divergence multiplied by the factor goes into the X component, while Y is left at 0.
  6. Here's the most important trick. This line takes the color from a specific point in the texture and outputs it to the screen. By adding delta to our current position (v_tex_coord), we alter the point from which the sampler extracts the color value.

 

Result:

 

 

Note that the reflections on the water are also moving. That's because the trees and their reflections are part of the same sprite/texture. No sorcery here.

 

Improving fog

Is there anything else that we can do? Well, if we can't invent anything new, we can always improve something existing. Our designer once said that trees further away should have a solid color to merge better with the fog.

The above image is almost self-explanatory. Earlier, I mentioned the "depth". Every layer of the forest has an attribute (attrDepth) that represents the distance between the mountains (0.0) and the viewer (6.0). Let's tweak this fog!
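
Again a sketch rather than the exact listing: the stage boundaries and the two mountain colors below are placeholders — only the overall staging follows the numbered notes, with attrDepth running from 0.0 at the mountains to 6.0 at the viewer:

    void main() {
        vec4 texColor = texture2D(u_texture, v_tex_coord);
        float alpha = texColor.a;                        // 1. extract alpha from the texture

        vec3 lightMountains = vec3(0.78, 0.84, 0.90);    // placeholder colors
        vec3 darkMountains  = vec3(0.42, 0.50, 0.58);

        vec3 color;
        if (attrDepth <= 1.0) {
            // 2. the far stage: solid "Light Mountains" color, emerging through increasing alpha
            color = lightMountains;
            alpha *= attrDepth;
        } else if (attrDepth <= 3.0) {
            // 3. the medium distance: shift towards "Dark Mountains"
            color = mix(lightMountains, darkMountains, (attrDepth - 1.0) / 2.0);
        } else {
            // 4. the close distance: mix "Dark Mountains" with the native texture color
            color = mix(darkMountains, texColor.rgb, (attrDepth - 3.0) / 3.0);
        }

        gl_FragColor = vec4(color, alpha);               // 5. output with the alpha from the beginning
    }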

 

 

The code above is pretty straightforward, so I'll focus only on the most important things:

  1. Extract alpha from the texture.
  2. The far stage. When the forest is as far away as possible, it has the "Light Mountains" color and 0 alpha. As it gets closer, it emerges through increasing alpha, up to depth == 1.0.
  3. The medium distance. The color shifts towards "Dark Mountains" as the sprite gets closer to the viewer.
  4. The close distance. The color is a mix between "Dark Mountains" and the native texture color. Naturally, the closer it gets, the more normal it looks.
  5. Passing the final color to the output, using the alpha extracted at the beginning.

 

Again, the result:

 

 

Combining both effects

The thing I like best about shaders is their flexibility. It's not only possible to merge both effects without sacrificing anything, it's even recommended to do so: merging shaders decreases the number of draw calls, which increases the framerate.
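
A sketch of the merged shader, reusing the placeholder constants from the two listings above — the wind offset is applied first, then the fog staging operates on the displaced sample:

    void main() {
        // wind: shift the sampling point horizontally
        float dist = v_tex_coord.y - 0.35;
        float factor = sin(u_time * 1.8 + attrDepth * 0.7);
        vec2 delta = vec2(dist * 0.02 * factor, 0.0);
        vec4 texColor = texture2D(u_texture, v_tex_coord + delta);
        float alpha = texColor.a;

        // fog: the same staging as before, applied to the displaced sample
        vec3 lightMountains = vec3(0.78, 0.84, 0.90);
        vec3 darkMountains  = vec3(0.42, 0.50, 0.58);
        vec3 color;
        if (attrDepth <= 1.0) {
            color = lightMountains;
            alpha *= attrDepth;
        } else if (attrDepth <= 3.0) {
            color = mix(lightMountains, darkMountains, (attrDepth - 1.0) / 2.0);
        } else {
            color = mix(darkMountains, texColor.rgb, (attrDepth - 3.0) / 3.0);
        }
        gl_FragColor = vec4(color, alpha);
    }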

 

 

The final result:

 

 

Pitfalls

There’s no rose without a thorn.

  • Using shaders on multiple big sprites with an alpha channel may cause a visible framerate drop.
  • The same GPU may deliver 60 fps on an iPhone but only 20 fps on an iPad, which has more pixels to fill. Test your code frequently on different devices, especially iPads with Retina displays.
  • There is no reliable way to estimate a device's performance from code. Just run your game on multiple physical devices and whitelist those that are capable of running the shaders with decent performance. To distinguish devices, you can use UIDevice-Hardware.m.
  • Does your partially transparent texture lose color and become gray? Google "premultiplied alpha"! (There's a sketch of the fix right after this list.)
  • Beware of using SKTextureAtlases if you're altering the coordinates as in the wind example. During atlas generation, Xcode may rotate and move some textures, and it's impossible to detect such an anomaly from code (or at least I don't know how).
    • For some sprites, you may receive a texture with swapped X and Y coordinates!
    • You may accidentally warp to a completely different subtexture!
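
For the premultiplied-alpha pitfall, here's a minimal sketch of the usual fix. It assumes SpriteKit hands the shader premultiplied texels (rgb already multiplied by alpha); the un-premultiply/re-premultiply dance is the illustrative part:

    void main() {
        vec4 texColor = texture2D(u_texture, v_tex_coord);

        // un-premultiply before manipulating the color...
        vec3 straightColor = texColor.a > 0.0 ? texColor.rgb / texColor.a : vec3(0.0);

        // ...do your color math on straightColor here...

        // ...and premultiply again before writing the output,
        // otherwise semi-transparent areas turn gray/dark
        gl_FragColor = vec4(straightColor * texColor.a, texColor.a);
    }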

 

Summary

We've learned how to use fragment shaders to create wind and fog. When writing your own GLSL code, you'll surely produce many display artifacts. Some of them are annoying, some are hilarious, but keep in mind that some of them may have the potential to become a feature!

 
