screenshots

motion blur and shadowed grass in action


threading in directx

enabling thread-safety in directx weighs in at about a 7-8% performance loss. But if resources such as vertex buffers have to be created on other threads while the engine is rendering, there's no way around it.
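For reference, thread-safety in DX9 is requested via a behavior flag at device creation. A minimal fragment (assumes a `d3d9` interface and window handle already exist; not the engine's actual code):

```cpp
// Requires d3d9.h / the DirectX SDK. D3DCREATE_MULTITHREADED makes the
// runtime take a critical section around API calls -- this is where the
// ~7-8% goes.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed   = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

IDirect3DDevice9* device = nullptr;
HRESULT hr = d3d9->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    hwnd,
    D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
    &pp,
    &device);
```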

Put it down as a sacrifice to the graphics gods, swallow hard twice and move on.

Phony…

  • generate grass on the CPU, adapted to the local terrain
  • generate grass on the GPU, no adaptation necessary
  • geometry shader: particle system to simulate gravity, generating the leaf mesh on the terrain surface

  • first: finish the scenegraph manager, implement a basic terrain tile class (flat, no heightmap)
  • add shader support to the dx9 renderer
  • start the dx10 engine, test geometry shaders, texture arrays, …

implementation time

a first output of the new engine architecture

Currently, the engine only renders one type of mesh with one texture, but as long as the objects remain static (nested or not, doesn't matter), the architecture is able to draw about 3k of them on an old x800 at approximately 20fps (debug):


Since I can't (yet) animate that many objects simultaneously, I have to invest some more time into optimizing the internal vector and matrix classes. If the number of objects is reduced to about 1k, the engine can handle updating all their matrices and still run at a decent framerate (~15-20fps, release).

make some noise

An outline of a simple zoomable heightmap generation algorithm:

  • create a 256×256 noise map
  • upon zooming in, create a new one as soon as the zoom factor reaches (1.5x iteration level)
  • sample it at the respective coordinates to make it serve as the base for the new texture
  • use global coordinates (relative to the sphere) to create new texture iteration
  • iteration++
  • when zooming further in towards (2x iteration level), blend the new texture over the old one to hide transition artifacts
  • when zooming out again, blend it out between (2x iteration level) and (1.5x iteration level)
  • discard the texture upon reaching (1.5x iteration level)

using this approach, we will only have 256×256 heightmaps in graphics memory, albeit n of them.

i.e. if the target planet has a circumference (edge size) of 40'000km (Earth) and the maximum zoom level is a 1m edge size, we end up with approx. 25 textures.

One approach would be to keep only the minimum number of textures in memory (thus limiting the view distance), discarding the rest and re-rendering them on demand.

The same algorithm can be applied to movement parallel to the ground: if the camera moves further than (0.25x the edge width of the current iteration), a new texture at the same iteration level but with new base coordinates is computed and blended over the old one between (0.25x edge width) and (0.33x edge width).