SIGGRAPH 2009 Highlights

Light Propagation Volumes in CryEngine 3
Anton Kaplanyan, Crytek

Anton Kaplanyan presented Crytek’s research into real-time single-bounce diffuse indirect illumination.  The current state of the art in real-time global illumination is derived from the work of Marc Stamminger and Carsten Dachsbacher.  They introduced the idea of rendering reflective shadow maps from light sources (shadow maps that store reflected light color and direction at each pixel) and then using that information to introduce virtual point lights (VPLs) that approximate bounced light.  This approach still requires a renderer capable of handling tens of thousands of point lights.  Deferred techniques can handle this kind of load only at barely interactive frame rates on the fastest hardware.  Various people, most notably Chris Wyman, have implemented solutions such as VPL clustering and multi-resolution rendering in order to boost the performance of this technique.  The best I’ve seen from this line of research is single-bounce, single-light global illumination consuming about 20 ms on a high-end GPU.  Crytek’s goal was to reduce that to 3.3 ms on current consoles!
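To make the RSM-to-VPL step concrete, here is a minimal C++ sketch of how virtual point lights might be derived from a reflective shadow map.  The texel layout, struct names, and stride-based subsampling are my own illustrative choices, not anything presented in the talk.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical RSM texel layout: world position, surface normal, and
// reflected flux, in the spirit of Dachsbacher and Stamminger's paper.
struct RsmTexel {
    float position[3];
    float normal[3];
    float flux[3];      // light color * surface albedo at this texel
};

// A virtual point light that approximates one bounce of indirect light.
struct VirtualPointLight {
    float position[3];
    float direction[3]; // emits into the hemisphere around the surface normal
    float intensity[3];
};

// Sketch: subsample the RSM and promote each selected texel to a VPL.
// 'stride' controls how many of the tens of thousands of texels survive;
// intensity is scaled up to roughly compensate for the texels we skip.
std::vector<VirtualPointLight> ExtractVpls(const std::vector<RsmTexel>& rsm,
                                           std::size_t stride)
{
    std::vector<VirtualPointLight> vpls;
    for (std::size_t i = 0; i < rsm.size(); i += stride) {
        const RsmTexel& t = rsm[i];
        VirtualPointLight vpl;
        for (int c = 0; c < 3; ++c) {
            vpl.position[c]  = t.position[c];
            vpl.direction[c] = t.normal[c];
            vpl.intensity[c] = t.flux[c] * static_cast<float>(stride);
        }
        vpls.push_back(vpl);
    }
    return vpls;
}
```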

Crytek’s research eventually led them to light propagation volumes, a close relative of irradiance volumes.  Their motivation is clear: apply large numbers of point lights at a fixed cost.  The light propagation volumes are represented as cascaded volume textures covering the view frustum at multiple resolutions.  They store a single four-component spherical harmonic per texel, and they render point lights into the volumes as single-texel points.  They then apply what amounts to a custom blur over the volume texture to simulate light propagation and give their point lights a non-zero radius.  Finally, they apply the light volume as you would any single light in a deferred renderer.
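To make that pipeline a bit more concrete, here is a rough CPU-side sketch of the injection and propagation steps.  The four-coefficient SH evaluation is standard; the propagation kernel is a deliberately naive stand-in for Crytek’s custom blur, and every name, constant, and data layout here is my own illustrative choice rather than anything from the talk.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Four-coefficient (bands 0 and 1) spherical harmonic.  A real implementation
// stores one of these per color channel; a single channel keeps the sketch short.
using Sh4 = std::array<float, 4>;

// Evaluate the first two SH bands for a unit direction (x, y, z).
Sh4 ShEvaluate(float x, float y, float z)
{
    return { 0.282095f, 0.488603f * y, 0.488603f * z, 0.488603f * x };
}

// Injection step: splat one point light into its grid cell as a directional
// lobe scaled by its intensity.  'grid' is a flattened volume of size dim^3.
void InjectPointLight(std::vector<Sh4>& grid, std::size_t cellIndex,
                      float dirX, float dirY, float dirZ, float intensity)
{
    const Sh4 lobe = ShEvaluate(dirX, dirY, dirZ);
    for (int i = 0; i < 4; ++i)
        grid[cellIndex][i] += lobe[i] * intensity;
}

// Propagation step: a very rough stand-in for the custom blur the talk
// describes -- each cell bleeds a fraction of its coefficients into its six
// axial neighbours.  Repeating this a few times widens the point lights.
// Energy conservation and directional transfer are ignored in this toy version.
void Propagate(std::vector<Sh4>& grid, int dim, float bleed)
{
    std::vector<Sh4> next = grid;
    auto idx = [dim](int x, int y, int z) { return (z * dim + y) * dim + x; };
    const int offsets[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };
    for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x)
                for (const auto& o : offsets) {
                    int nx = x + o[0], ny = y + o[1], nz = z + o[2];
                    if (nx < 0 || ny < 0 || nz < 0 || nx >= dim || ny >= dim || nz >= dim)
                        continue;
                    for (int i = 0; i < 4; ++i)
                        next[idx(nx, ny, nz)][i] += grid[idx(x, y, z)][i] * bleed;
                }
    grid = next;
}
```

In the real technique the volumes are cascaded and the propagation is directional, but the overall shape is the same: inject, propagate a few iterations, then sample the volume during the deferred lighting pass.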

They’ve provided tons of documentation, so I won’t bother going into any more of the details.  This is the most exciting and promising direction of research into real-time lighting I’ve seen in years.  If you read one paper from SIGGRAPH, let this be the one!

Graphics Engine Postmortem from LittleBigPlanet
Alex Evans, Media Molecule

As you may know, I’m a gushing fanboy when it comes to the team at Media Molecule.  These guys are alien geniuses and everything they touch turns to gold.  I was really excited to see the talk by Alex Evans and it absolutely did not disappoint!

Perhaps the most impressive thing about Media Molecule’s work on LittleBigPlanet is the large set of problems they chose not to solve.  From early in development they established a number of key limitations on their game design.  The world would be viewed through a constrained camera angle and it would have very low depth-complexity from that perspective.  Also, the game would be PlayStation exclusive and rely heavily on the Cell processor.

In return for these constraints, the engine would push the boundaries of geometric, lighting, and material complexity.  Everything would be dynamic.  Physics simulation would happen at the vertex level (everything is cloth!), lights would be plentiful and dynamic, and users would be able to build and texture objects in game virtually without limitations.

Throughout his talk, Alex repeatedly cautioned against implementing the techniques he was presenting.  He seemed to do it mostly out of modesty, but let’s face it: if you’re like most of us, trying to build an extensible, multipurpose, cross-platform engine to accommodate the current demand for multiplayer, story-driven, detail-rich, open-world action-RPG driving games, these techniques ain’t for you!

Hopefully the course notes will come out soon with more details, but to tide you over here are some quick bullets on LittleBigPlanet rendering:

  • Everything is cloth.  The SPUs simulate and collide every vertex every frame, allowing an entire world of soft-body interactions.  The GPU just sees a big polygon soup separated by materials.  (A rough sketch of this kind of per-vertex update follows the list.)
  • Alex implemented dynamic irradiance volumes not unlike Crytek’s stuff.  However, his approach differed in three key ways.  First, he was representing higher frequency direct lighting in his irradiance volumes rather than low frequency bounce lighting.  Second, rather than storing second order spherical harmonics, Alex stored first order spherical harmonics and inferred light direction by taking multiple samples from the volume.  Third, rather than using cascaded volumes, he mapped his volumes in projective space.  The combination of these differences led to distortion in the shape of point lights in the volume and caused unacceptable lighting artifacts. He ultimately abandoned irradiance volumes in favor of regular deferred lighting.
  • Since he fell back on deferred lighting, he needed a solution for lighting translucent objects.  Given the world constraints, he decided to allow only a single layer of translucency.  This was achieved by using MSAA sample masks to restrict translucent pixels to one sample of an MSAA buffer.  This is very similar to the stippled alpha approach used in Inferred Lighting (coming up).
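As referenced in the first bullet, here is a minimal sketch of the kind of per-vertex, Verlet-style update that “everything is cloth” implies.  It is only an illustration; Media Molecule’s SPU code surely does far more (distance constraints between connected vertices, object-to-object collision, batching for the SPUs), and every name here is my own.

```cpp
#include <vector>

// One simulated vertex: current position plus the position from the previous
// frame, which is all Verlet integration needs to imply a velocity.
struct Vertex {
    float position[3];
    float previous[3];
};

// Integrate every vertex and apply a trivial ground-plane collision.
void SimulateVertices(std::vector<Vertex>& vertices, float dt)
{
    const float gravity = -9.8f; // y is up in this sketch
    for (Vertex& v : vertices) {
        for (int c = 0; c < 3; ++c) {
            // Verlet: the implicit velocity is (position - previous).
            float velocity = v.position[c] - v.previous[c];
            float accel = (c == 1) ? gravity : 0.0f;
            v.previous[c] = v.position[c];
            v.position[c] += velocity + accel * dt * dt;
        }
        // Trivial collision: keep every vertex above the ground plane y = 0.
        if (v.position[1] < 0.0f)
            v.position[1] = 0.0f;
    }
}
```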

Inferred Lighting: Fast dynamic lighting and shadows for opaque and translucent objects
Scott Kircher, Volition, Inc.
Alan Lawrance, Volition, Inc.

Scott Kircher and Alan Lawrance presented a technique they call Inferred Lighting.  I must admit, I was too quick to dismiss this paper when it first came out.  I’m glad I sat through the talk because there are quite a few clever ideas in there.  Inferred lighting is basically deferred lighting with two interesting and complementary extensions.

The first extension is doing the deferred lighting at quarter resolution.  This reduces the cost of the G-Buffer generation and deferred lighting passes, but it introduces artifacts at edges where there is a sharp discontinuity in lighting between adjacent pixels.  To solve this problem, inferred lighting introduces two new attributes to the deferred lighting G-Buffer: an object ID and a normal group ID.  These two 8-bit IDs are packed together into a single 16-bit ID and stored with linear depth and a two-component surface normal representation as the G-Buffer.  Scott and Alan refer to the linear depth and 16-bit ID as the discontinuity sensitive filter (DSF) buffer and use it during the shading pass to discard spurious samples from the irradiance buffer.
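Here is a small sketch of what that DSF test might look like during the shading pass, assuming the packing described above.  The depth threshold, the single-channel irradiance, and folding the irradiance into the same struct as the DSF data are illustrative simplifications, not the paper’s exact formulation.

```cpp
#include <cstdint>
#include <cmath>

// One candidate sample for the upsampling step.  In the actual technique the
// DSF data (linear depth + packed ID) and the irradiance live in separate
// buffers; they are folded into one struct here purely for brevity.
struct IrradianceSample {
    float    linearDepth;
    uint16_t id;          // (objectId << 8) | normalGroupId
    float    irradiance;  // single channel to keep the sketch short
};

// Pack the two 8-bit IDs into the single 16-bit value described in the talk.
inline uint16_t PackDsfId(uint8_t objectId, uint8_t normalGroupId)
{
    return static_cast<uint16_t>((objectId << 8) | normalGroupId);
}

// Sketch of the discontinuity sensitive filter: keep only those
// quarter-resolution samples whose packed ID matches the full-resolution
// pixel being shaded and whose depth is close enough, so lighting does not
// bleed across object or normal-group edges.  The depth threshold is an
// arbitrary illustrative value.
float FilterIrradiance(const IrradianceSample candidates[4],
                       uint16_t pixelId, float pixelLinearDepth)
{
    float sum = 0.0f;
    int count = 0;
    for (int i = 0; i < 4; ++i) {
        const bool sameSurface = (candidates[i].id == pixelId);
        const bool closeDepth =
            std::fabs(candidates[i].linearDepth - pixelLinearDepth) < 0.1f;
        if (sameSurface && closeDepth) {
            sum += candidates[i].irradiance;
            ++count;
        }
    }
    // If every candidate was rejected there is no good answer -- exactly the
    // grass-blade failure case mentioned below.
    return count > 0 ? sum / count : candidates[0].irradiance;
}
```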

The second extension is using alpha stippling to allow a limited number of deferred-lit translucent layers.  Translucent objects are written into the G-Buffer with a 2×2 stipple pattern, allowing up to three layers of translucency.  Each layer of translucency is lit at one quarter the resolution of the already quarter-resolution irradiance buffer, and each overlapping layer of translucency reduces the lighting resolution of the opaque pixels in its quad by 25%.  Since the shading pass is already designed to accommodate a reduced-resolution irradiance buffer, it selects the best irradiance values for each fragment and overlapping opaque and translucent objects are lit appropriately.  It is important to note that unlike other stippled alpha techniques, inferred lighting doesn’t limit the total amount of translucency in a scene; it merely limits the number of overlapping layers at each pixel.
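For illustration, a tiny sketch of one possible 2×2 stipple assignment.  The particular mapping of quad positions to layers is an assumption of mine, not necessarily Volition’s.

```cpp
#include <cstdint>

// Within each 2x2 quad, one pixel position is kept for the opaque surface and
// the other three may each hold one translucent layer, which is how up to
// three overlapping layers can share the same G-Buffer.
constexpr uint32_t kMaxTranslucentLayers = 3;

// Returns true if a translucent fragment belonging to 'layer' (1..3) is
// allowed to write the G-Buffer texel at (x, y); position 0 stays opaque.
bool StippleAccepts(uint32_t x, uint32_t y, uint32_t layer)
{
    const uint32_t quadPos = (y & 1u) * 2u + (x & 1u); // 0..3 inside the quad
    return layer >= 1 && layer <= kMaxTranslucentLayers && quadPos == layer;
}
```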

I’m pretty intrigued by inferred lighting and its future possibilities, but the issues that bothered me when I first read the paper remain.

  • I’ve implemented deferred lighting at low resolutions before, and the quality difference between full and quarter resolution lighting was very noticeable.  Imagine cutting the dimensions of every normal map in your game in half and you’ll know what I mean.
  • Not every visible pixel has a compatible representation in a low-resolution irradiance buffer.  Pixel-thin objects like grass may end up discarding every candidate sample from the irradiance buffer, creating unresolvable lighting artifacts.
  • Transparent objects lit with inferred rendering can’t smoothly fade into or out of a scene.  The transition from opaque to translucent or invisible to translucent will result in a pop as lighting resolution drops.  I can imagine lots of great uses for inferred-lit translucency, but fading effects and LODs in and out aren’t among them.

PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing
Connelly Barnes, Princeton University

This was a pretty interesting presentation of a new algorithm for finding approximate nearest-neighbor matches between image patches.  It sounds pretty dry, but nearest-neighbor matching is at the heart of a lot of highly desired image editing operations like image retargeting, image shuffling, and image completion.  The cool thing about this paper was that it showed how sometimes inferior techniques can achieve superior results if, unlike their competition, they can be applied at interactive frame rates.  The PatchMatch algorithm runs 20 to 100 times faster than previous nearest-neighbor matching algorithms, and even though it is only an approximation, its speed has enabled interactive tools with more user guidance and truly stunning results.
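The core loop of PatchMatch is simple enough to sketch.  This single-channel version keeps only the three essential pieces -- random initialization, propagation from already-matched neighbors, and random search over a shrinking radius -- and omits refinements such as alternating the scan order between iterations.  The struct names, patch size, and iteration count are my own illustrative choices.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Nearest-neighbour field: for each pixel of image A, an offset (dx, dy) to
// its current best-matching patch in image B, plus that match's cost.
struct Nnf { std::vector<int> dx, dy; std::vector<float> cost; };

// Sum of squared differences between the patch around (ax, ay) in A and the
// patch around (bx, by) in B; both images are w*h grayscale buffers.
static float PatchCost(const std::vector<float>& a, const std::vector<float>& b,
                       int w, int h, int ax, int ay, int bx, int by, int half)
{
    float cost = 0.0f;
    for (int j = -half; j <= half; ++j)
        for (int i = -half; i <= half; ++i) {
            int axi = std::clamp(ax + i, 0, w - 1), ayj = std::clamp(ay + j, 0, h - 1);
            int bxi = std::clamp(bx + i, 0, w - 1), byj = std::clamp(by + j, 0, h - 1);
            float d = a[ayj * w + axi] - b[byj * w + bxi];
            cost += d * d;
        }
    return cost;
}

Nnf PatchMatch(const std::vector<float>& a, const std::vector<float>& b,
               int w, int h, int half = 3, int iterations = 5)
{
    Nnf nnf{ std::vector<int>(w * h), std::vector<int>(w * h),
             std::vector<float>(w * h) };
    // Random initialization of the field.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int idx = y * w + x;
            int bx = std::rand() % w, by = std::rand() % h;
            nnf.dx[idx] = bx - x; nnf.dy[idx] = by - y;
            nnf.cost[idx] = PatchCost(a, b, w, h, x, y, bx, by, half);
        }
    // Keep a candidate offset if it beats the current best for this pixel.
    auto tryOffset = [&](int x, int y, int dx, int dy) {
        int bx = std::clamp(x + dx, 0, w - 1), by = std::clamp(y + dy, 0, h - 1);
        float c = PatchCost(a, b, w, h, x, y, bx, by, half);
        int idx = y * w + x;
        if (c < nnf.cost[idx]) { nnf.dx[idx] = bx - x; nnf.dy[idx] = by - y; nnf.cost[idx] = c; }
    };
    for (int it = 0; it < iterations; ++it)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // Propagation: adopt the left and upper neighbours' offsets.
                if (x > 0) tryOffset(x, y, nnf.dx[y * w + x - 1], nnf.dy[y * w + x - 1]);
                if (y > 0) tryOffset(x, y, nnf.dx[(y - 1) * w + x], nnf.dy[(y - 1) * w + x]);
                // Random search: perturb the current best within a radius
                // that shrinks exponentially.
                for (int r = std::max(w, h); r >= 1; r /= 2) {
                    int dx = nnf.dx[y * w + x] + (std::rand() % (2 * r + 1)) - r;
                    int dy = nnf.dy[y * w + x] + (std::rand() % (2 * r + 1)) - r;
                    tryOffset(x, y, dx, dy);
                }
            }
    return nnf;
}
```

A handful of iterations like this converging on a good field fast enough for interactive use is what makes the editing tools in the paper possible.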

id tech 5 Challenges
J.M.P. van Waveren, id Software

This talk was divided into a review of the virtual texturing implementation in id tech 5 and an overview of the job system used to parallelize non-GPU processing.  Much of the virtual texturing content has been covered before, but this was the first live presentation I’ve been able to attend, so it was still a treat.  I’m on the fence about whether virtual texturing is a neat trick suited to a small set of applications or the foundation for all future development in real-time graphics.  Carmack’s involvement guarantees that virtual texturing will be an important and successful milestone in computer graphics, but it doesn’t guarantee that it will survive to the next generation.  After all, how many people are still using stencil shadows?

The most interesting thing about id’s job framework is that jobs must accept a full frame of latency.  This severely limits the kind of processing that can be pushed off into asynchronous jobs, but it greatly simplifies the problem of extracting maximum parallelism from the hardware.  It is hard for synchronization to be a bottleneck when it simply isn’t an option.  Anyway, despite this limitation id has managed to push non-player collision detection, animation blending, AI obstacle avoidance, virtual texturing, transparency processing, cloth simulation, water surface modeling, and detail model generation into jobs.  Most of those are pretty obviously latency tolerant, but I’m impressed they have a latency-tolerant implementation of most of their physics simulation.
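Here is a toy sketch of what accepting a full frame of latency can look like, using std::async rather than anything id-specific: work kicked during frame N is only waited on at the frame boundary, so nothing inside the frame ever blocks on a job.

```cpp
#include <functional>
#include <future>
#include <utility>
#include <vector>

// Illustrative only: jobs kicked during a frame are collected at the frame
// boundary, and their results are consumed by the following frame.
class FrameLatentJobs {
public:
    void Kick(std::function<void()> job)
    {
        inFlight_.push_back(std::async(std::launch::async, std::move(job)));
    }

    // Called once at the frame boundary: wait for everything kicked during
    // the frame that just ended.  Because each job had the whole frame to
    // run, these waits are expected to return almost immediately.
    void WaitForLastFrame()
    {
        for (auto& f : inFlight_) f.wait();
        inFlight_.clear();
    }

private:
    std::vector<std::future<void>> inFlight_;
};
```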

How does id tech 5 handle camera cuts with virtual texturing and latent job completion?  I hope it handles them better than the Halo and Unreal 3 engines do.

A Real-time Micropolygon Rendering Pipeline
Kayvon Fatahalian, Stanford University

In this talk Kayvon Fatahalian made a case that REYES-style micropolygon rendering is a possible, and in fact practical, future for real-time graphics.  This talk was nicely grounded in reality.  The pros and cons of every technique were well covered and realistic extensions to current GPU architectures were presented to tackle the problems introduced by micropolygons.

The first part of the talk tackled the problem of tessellation, namely implementing split-dice adaptive tessellation.   Kayvon presented a new algorithm, DiagSplit tessellation, that can be implemented in hardware with the addition of one new fixed-function unit to the DirectX 11 GPU pipeline.

In the second part of the talk, Kayvon discussed rasterization of micropolygons.   Current methods for rasterizing polygons are inefficient when dealing with pixel-sized polygons because GPUs must shade at least 4 samples per polygon to generate derivatives.   Kayvon looked at possible modifications to the rasterization unit and weighed the pros and cons of shading before or after rasterization.
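A small illustration of the derivative issue: because screen-space derivatives (used, for example, to pick texture mip levels) come from finite differences across a 2×2 quad, all four lanes must be shaded even when a micropolygon covers only one of them.  The struct and function names below are my own.

```cpp
// One shaded attribute value per pixel of a 2x2 quad, indexed [y][x].
struct ShadedQuad {
    float attribute[2][2];
};

// Finite-difference derivatives across the quad; this is why at least four
// samples are shaded per polygon, no matter how small the polygon is.
float Ddx(const ShadedQuad& q) { return q.attribute[0][1] - q.attribute[0][0]; }
float Ddy(const ShadedQuad& q) { return q.attribute[1][0] - q.attribute[0][0]; }
```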

With almost everyone else looking at ways of dismantling the structured graphics pipeline and moving to a purely programmable model, it was refreshing to hear a talk on how to improve the existing pipeline to accommodate faster, higher quality graphics.   I feel a big part of the success of computer graphics over the past decade has been the fact that we’ve standardized on a common pipeline.  Even with hardware capable of unlimited flexibility, most developers will still need a common language for expressing their requirements and sharing their solutions.

Visual Perception of 3D Shape
Roland Fleming, MPI for Biological Cybernetics
Manish Singh, Rutgers – New Brunswick

This course covered research into how the human brain transforms visual sensory input into a 3-D representation of the world.   The instructors presented a number of research studies which have sought to test hypotheses about what internal representations the human brain uses for modeling the world.   This line of research is actually very relevant to real-time computer graphics.   We employ a lot of approximations to convey a sense of 3-D to the human brain, so the more we know about what cues the brain cares about, the more effectively we can trick it into seeing what we want it to see.
