Tuesday, June 28, 2011

Stunning animation rendered with Octane Render

Just saw this very impressive video on the Octane forum today and thought it deserved its own post (the level of realism is otherworldly):


The whole scene is 3D. Rendertime was two minutes per frame on average on a single GTX 480, so eight of these GPUs rendering simultaneously would bring that down to just 15 seconds per frame (rendertimes in Octane scale almost linearly with the number of GPUs), which is completely nuts considering the quality.
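The back-of-the-envelope scaling math can be sketched in a few lines. This is just my own illustration of the "almost linear" claim; the `scaling_efficiency` knob is an assumption of mine, not a number from Octane.

```python
def frame_time(single_gpu_seconds, num_gpus, scaling_efficiency=1.0):
    """Estimate per-frame rendertime under (near-)linear multi-GPU scaling.

    scaling_efficiency = 1.0 models perfectly linear scaling; Octane's
    scaling is reported as *almost* linear, so values slightly below
    1.0 are more realistic.
    """
    effective_gpus = 1 + (num_gpus - 1) * scaling_efficiency
    return single_gpu_seconds / effective_gpus

# Two minutes per frame on a single GTX 480:
print(frame_time(120, 1))   # 120.0 seconds
print(frame_time(120, 8))   # 15.0 seconds with ideal linear scaling
```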

Update: there's a new video on YouTube showing that Octane can produce a real-time rendered, low-resolution preview animation of an outdoor scene at a rendertime of only 55 milliseconds per frame!

Monday, June 27, 2011

Accelerating Path Tracing by Eye Path Reprojection

Just stumbled upon this upcoming paper about interactive GPU path tracing from Niklas Henrich. The abstract sounds very promising:
"Recently, path tracing has gained interest for real-time global illumination since it allows to simulate an unbiased result of the rendering equation. However, path tracing is still too slow for real-time applications and shows noise when displayed at interactive frame rates. The radiance of a pixel is computed by tracing a path, starting from the eye and connecting each point on the path with the light source. While conventional path tracing uses the information of a path for a single pixel only, we demonstrate how to distribute the intermediate results along the path to other pixels in the image. We show that this reprojection of an eye path can be implemented efficiently on graphics hardware with only a small overhead. This results in an overall improvement of the whole image since the number of paths per pixel increases. The method is especially useful for many indirections, which is often circumvented to save computation time. Furthermore, instead of improving the quality of the rendering, our method is able to increase the rendering speed by reusing the reprojected paths instead of tracing new paths while maintaining the same quality."

Hopefully, the results from "Accelerating path tracing by re-using paths" (a 2002 paper by Bekaert et al., who reported a 9x speed-up for CPU path tracing) can be replicated on the GPU.
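The core idea of the abstract (one eye path feeding samples to many pixels instead of one) can be illustrated with a toy bookkeeping sketch. This is not the paper's algorithm: here the path vertices are splatted to random pixels, whereas the real method reprojects each vertex geometrically to the pixel it is visible from.

```python
import random

def simulate(num_pixels, paths_per_pixel, path_length, reproject):
    """Count how many radiance samples each pixel accumulates.

    Plain path tracing: one path contributes to one pixel only.
    With reprojection, every intermediate vertex on the path is
    also splatted to another pixel (picked at random in this toy;
    the real method reprojects the vertex geometrically).
    """
    samples = [0] * num_pixels
    for pixel in range(num_pixels):
        for _ in range(paths_per_pixel):
            samples[pixel] += 1          # the path's own pixel
            if reproject:
                for _ in range(path_length - 1):
                    samples[random.randrange(num_pixels)] += 1
    return samples

random.seed(1)
plain = simulate(64, 4, 5, reproject=False)
shared = simulate(64, 4, 5, reproject=True)
print(sum(plain) / len(plain))    # 4.0 samples per pixel
print(sum(shared) / len(shared))  # 20.0 samples per pixel on average
```

The point of the sketch: the same number of traced paths yields five times as many accumulated samples per pixel when intermediate vertices are reused, which is where the claimed quality/speed gain comes from.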

Sparse voxel octree with real-time global illumination and dynamic geometry

Cyril Crassin has posted a very nice video on his blog at http://blog.icare3d.org/2011/06/interactive-indirect-illumination-and.html showing the latest developments in his sparse voxel octree research which will be presented at Siggraph 2011 in Vancouver.

Major improvements over previously published results (http://artis.imag.fr/Membres/Cyril.Crassin/) include real-time indirect lighting (for diffuse and glossy materials) and support for fully dynamic voxel objects (through fast mesh voxelization and real-time updates of the voxel octree). There has been earlier research on animated sparse voxel octrees using rasterization (see http://bautembach.de/wordpress/?page_id=7, http://www.youtube.com/watch?v=Tl6PE_n6zTk, http://www.youtube.com/watch?v=Hnvr0hxyDvk and http://www.youtube.com/watch?v=gNZtx3ijjpo, none of which look as detailed as Jon Olick's raycasted static sparse voxel octree tech from Siggraph 08), but this is the first time it's being done with ray casting (cone tracing, to be precise).

The global illumination algorithm resembles photon mapping: instead of photon tracing, the scene is rasterized from the perspective of the light source and radiance is stored in the octree, followed by filtering in screen-space and a final gathering step using approximate cone tracing. More details: http://artis.imag.fr/Publications/2011/CNSGE11a/GIVoxels_Siggraph_Talk.pdf
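The "approximate cone tracing" in the final gathering step boils down to sampling coarser octree levels as the cone widens. Here's a minimal sketch of that level-selection math; the function name, the 30-degree cone and the 1-unit leaf voxels are my own illustrative choices, not values from the talk.

```python
import math

def cone_mip_level(distance, aperture, leaf_voxel_size):
    """Pick the octree level whose voxel footprint matches the cone's
    diameter at this distance: a wider cone footprint maps to a coarser
    (pre-filtered) octree level, which is what makes glossy and diffuse
    gathering cheap. Real implementations quantize or interpolate this."""
    diameter = max(2.0 * distance * math.tan(aperture / 2.0), leaf_voxel_size)
    return math.log2(diameter / leaf_voxel_size)

# Marching a 30-degree cone through a grid with 1-unit leaf voxels:
for d in (1.0, 4.0, 16.0):
    print(round(cone_mip_level(d, math.radians(30), 1.0), 2))
```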

Tuesday, June 21, 2011

HPG paper and video of "Improving SIMD Efficiency for Parallel Monte Carlo Light Transport on the GPU"!

Another post on the work of Dietger van Antwerpen, but that's because his research on GPU path tracing is so damn amazing and groundbreaking. I can't stop imagining the possibilities once this is implemented in a real-time path tracer for games. His HPG 2011 paper "Improving SIMD Efficiency for Parallel Monte Carlo Light Transport on the GPU" and an accompanying video are available here: http://graphics.tudelft.nl/~dietger/HPG2011/index.html.

The video shows a side-by-side comparison between standard path tracing (PT), bidirectional path tracing (BDPT) and Metropolis light transport (MLT) in a number of scenes with complex lighting. The difference in efficiency between MLT/BDPT and regular path tracing is huge. For example, the kitchen scene (lit indirectly by light passing through a lens, essentially a caustic) converges extremely slowly with PT and remains mostly pitch black, because the probability of a random path hitting a small light source behind a lens is very low. BDPT (paths started simultaneously from the camera and the light source, "meeting each other halfway") and MLT (paths started randomly, but once an important light-contributing path is found, nearby paths are explored) both do a much better job and show a recognizable scene in a matter of milliseconds. All these algorithms run completely on the GPU with almost zero CPU load. The end of the video shows a flooded Cornell box scene rendered with MLT at interactive rates, where the caustic light pattern on the floor converges very quickly (something that would take a regular path tracer a very long time). The last page of the HPG 2011 paper contains a very interesting comparison between a fully converged reference image and PT, BDPT and MLT images after just 30 seconds on a GTX 480.
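The MLT behaviour described above (once an important path is found, explore nearby paths) is just the Metropolis acceptance rule applied to light paths. A tiny 1D Metropolis sampler makes the mechanism concrete; the "caustic-like" target function below is my own toy, not a light transport integrand.

```python
import random

def metropolis(f, x0, mutate, n, seed=0):
    """Tiny 1D Metropolis sampler: a mutation of the current sample is
    accepted with probability min(1, f(y)/f(x)), so in equilibrium the
    chain spends time proportional to f -- the idea MLT applies to
    whole light paths instead of scalar samples."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n):
        y = mutate(x, rng)
        fy = f(y)
        if fx == 0 or rng.random() < min(1.0, fy / fx):
            x, fx = y, fy
        samples.append(x)
    return samples

# A "caustic-like" target: almost all contribution sits in a narrow peak,
# like the light source hiding behind the lens in the kitchen scene.
f = lambda x: 1.0 if 0.48 < x < 0.52 else 0.01
mutate = lambda x, rng: min(1.0, max(0.0, x + rng.uniform(-0.05, 0.05)))
samples = metropolis(f, 0.5, mutate, 20000)
inside = sum(1 for s in samples if 0.48 < s < 0.52) / len(samples)
print(inside)  # the chain spends most of its time inside the peak
```

Uniform random sampling would land inside that peak only 4% of the time, which is exactly why plain PT stays black so long on such scenes.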

A recurring criticism of using the GPU for rendering is that for scenes with complex lighting and materials the CPU outperforms the GPU, because the CPU can use smarter, more efficient algorithms (like MLT and BDPT) while the GPU is only efficient at regular, "dumb" path tracing. This paper proves that is no longer true: not only can GPUs run more efficient rendering algorithms like BDPT and MLT, they can do it an order of magnitude faster than the CPU! The table comparing performance between GPU and CPU rendering (GTX 480 vs Core i7 920) shows that, depending on the scene, PT is 10-18x, BDPT 8-15x and MLT 9-15x faster on the GPU than on the CPU!

It's amazing to see that the still very young field of physically based GPU rendering has made such tremendous advancements in just a couple of months (thanks to guys like Dietger among others). I can't wait to see where this field is going to head during the next year with better hardware (Nvidia Kepler, AMD Graphics Core Next) and even more optimized and efficient algorithms. It really is mind-boggling...

Wednesday, June 15, 2011

Dietger van Antwerpen's thesis "Unbiased physically based rendering on the GPU" available!

The thesis of Dietger van Antwerpen, co-developer of the Brigade path tracer, who pioneered bidirectional path tracing (BDPT), energy redistribution path tracing (ERPT) and Metropolis light transport (MLT) in CUDA, is finally available (since yesterday) at http://repository.tudelft.nl/view/ir/uuid%3A4a5be464-dc52-4bd0-9ede-faefdaff8be6/

It's 180 pages, and this is just a master's thesis. Nuts! :)

Update on CentiLeo, the out-of-core GPU ray tracer

CentiLeo, the awesome out-of-core interactive GPU path tracer for massive models (like the 400 million polygon Boeing 777 model) that I've blogged about in this post, is going to be presented this summer at Siggraph:


According to the video on youtube "CentiLeo implementation uses CUDA and is based on Kirill Garanzha's PhD research in Keldysh Institute of Applied Mathematics, Russian Academy of Sciences."

As mentioned in the last post, Kirill Garanzha will also present a paper entitled "Simpler and Faster HLBVH with Work Queues" at High Performance Graphics 2011, which aims to make real-time ray tracing of highly dynamic scenes possible by fully rebuilding the BVH from scratch in real-time (instead of refitting it, which significantly degrades ray traversal performance). Combined with the out-of-core path tracing tech from CentiLeo, this could be very compelling and make photoreal animations of highly dynamic multi-million polygon scenes render very fast on the GPU.
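The rebuild-versus-refit trade-off is easy to see in a minimal sketch. Refitting only recomputes bounding boxes bottom-up while keeping the tree topology fixed, so moved primitives can leave badly overlapping boxes that rays must traverse. The 1D "AABBs" (min, max tuples) and dict-based node layout below are purely illustrative.

```python
def refit(nodes, root):
    """Bottom-up BVH refit: recompute each internal node's bounds from
    its children after geometry moves. Cheap, but the tree topology is
    frozen, so the moved leaf below ends up overlapping its sibling;
    a full per-frame rebuild (as HLBVH does) restores traversal quality."""
    left, right = nodes[root].get("children", (None, None))
    if left is None:                       # leaf: bounds come from its primitive
        return nodes[root]["bounds"]
    lmin, lmax = refit(nodes, left)
    rmin, rmax = refit(nodes, right)
    nodes[root]["bounds"] = (min(lmin, rmin), max(lmax, rmax))
    return nodes[root]["bounds"]

nodes = {
    0: {"children": (1, 2)},
    1: {"bounds": (0.0, 1.0)},   # primitive A
    2: {"bounds": (4.0, 5.0)},   # primitive B
}
nodes[1]["bounds"] = (3.0, 4.5)  # primitive A moves: the leaves now overlap
print(refit(nodes, 0))           # (3.0, 5.0) -- root box stays valid
```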

Update: This week, Garanzha also presented a paper at the Computer Graphics International 2011 conference on a novel way to build high-quality BVHs on the GPU (quality measured by GPU path tracing on a GTX 480), titled "Grid-based SAH BVH construction on a GPU" (behind a paywall). The BVH construction is faster than the original HLBVH implementation.


Monday, June 6, 2011

HPG 2011 paper list online

The Real-Time Rendering blog just blogged that Kesen Huang has put the list of papers for the High Performance Graphics 2011 symposium online at http://kesen.realtimerendering.com/hpg2011Papers.htm. HPG 2011 takes place in Vancouver (home of the Canucks ;) on Aug 5-7.

These are some of the more juicy titles on the subject of ray tracing and path tracing that immediately catch the attention:
Simpler and Faster HLBVH with Work Queues (Kirill Garanzha, Jacopo Pantaleoni, David McAllister): I've been waiting an eternity for Pantaleoni's HLBVH code to be open sourced, so I hope the code from this new paper will get published eventually.
Improving SIMD Efficiency for Parallel Monte Carlo Light Transport on the GPU (Dietger van Antwerpen): this paper is from the co-developer of the real-time path tracer Brigade, who also implemented Metropolis light transport and ERPT on the GPU, so it should be really interesting.
Active Thread Compaction for GPU Path Tracing (Ingo Wald): from the guy who pioneered real-time CPU ray tracing.

Some other attention-grabbing titles:
Real-Time Diffuse Global Illumination Using Radiance Hints (Georgios Papaioannou)
VoxelPipe: A Programmable Pipeline for 3D Voxelization (Jacopo Pantaleoni)
High-Performance Software Rasterization on GPUs (Samuli Laine, Tero Karras)
MSBVH: An Efficient Acceleration Data Structure for Ray Traced Motion Blur (Leonhard Gruenschloss, Martin Stich, Sehera Nawaz, Alexander Keller)
The program looks very promising so far, especially for real-time and interactive ray tracing and path tracing.


The paper by McGuire et al. on the Alchemy screen-space ambient obscurance algorithm contains a pretty neat idea which could prove very useful for real-time path tracing: adapt the number of samples per pixel to the distance from the camera, essentially a kind of LOD scheme for the spp count, as shown in a picture in the paper.
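A distance-based spp budget could be sketched like this. The decay curve, the base budget of 16 spp and the falloff constant are invented for illustration; they are not values from the McGuire et al. paper.

```python
def samples_per_pixel(depth, base_spp=16, min_spp=1, falloff=10.0):
    """Toy distance-based LOD for sampling effort: spend the full sample
    budget on nearby pixels and progressively fewer samples as depth
    from the camera grows, never dropping below a floor of min_spp."""
    spp = base_spp / (1.0 + depth / falloff)
    return max(min_spp, int(round(spp)))

# Nearby pixels get the full budget, distant ones almost none:
for depth in (0.0, 10.0, 50.0, 200.0):
    print(depth, samples_per_pixel(depth))
```

The appeal for real-time path tracing is that distant pixels, whose noise is less visible, stop consuming the same ray budget as pixels right in front of the camera.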