Monday, August 17, 2009

Meet: Abs & pics -- High Performance Graphics 2009

Update: my back-of-the-envelope calculation below was a few orders of magnitude off! Three, to be exact.

***

Hey all,

For the last two weeks, we've been skimming the High Performance Graphics conference proceedings (the conference is the merger of Graphics Hardware and Interactive Ray Tracing). You can find the proceedings online here. (If you don't have NCSU/ACM access, you can find most of the content here, along with many of the talk slides.) There were two keynotes at the event; one in particular was given by Epic Games cofounder Tim Sweeney, responsible for the Unreal engines and games. The authors of the book Real-Time Rendering blogged the whole event.

Two weeks ago we began looking over the papers and discussed a few in a bit of depth:
  • A Parallel Algorithm for Construction of Uniform Grids, Kalojanov and Slusallek. A GPU-based method for sorting 3D geometry into a uniform grid in real time (a rough sketch of the general binning pattern follows this list).
  • Scaling of 3D Game Engine Workloads on Modern Multi-GPU Systems, Monfort and Grossman. Studying methods for synchronizing systems using multiple GPUs.
  • Embedded Function Composition, Whitted, Kajiya, Ruf and Bittner. As displays grow in size and resolution, input bandwidths cannot continue to grow with the number of pixels. The authors tackle this problem by embedding processors in displays, enabling the use of higher level primitives in communication with displays.
  • Efficient Depth Peeling via Bucket Sort, Liu, Huang, Liu, and Wu. Depth peeling is a hardware technique for sorting fragments by depth, normally extracting one layer per rendering pass; this paper describes a bucket-sort approach that avoids the need for multiple passes.
  • Data-Parallel Rasterization of Micropolygons With Defocus and Motion Blur, Fatahalian, Luong, Boulos, Akeley, Mark and Hanrahan. The future of interactive rendering will involve film techniques. This paper describes hardware methods for REYES-like rendering.
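
To make the grid construction paper a little more concrete, here is a minimal CPU sketch of the count / prefix-sum / scatter pattern that data-parallel uniform-grid builders are usually built around. To be clear, this is not Kalojanov and Slusallek's GPU implementation, just an illustration of the idea, and all of the names in it (build_uniform_grid, cell_range, and so on) are mine.

# A minimal CPU sketch of the count / prefix-sum / scatter pattern used by
# data-parallel uniform-grid builders. Not the paper's GPU code; names and
# parameters here are illustrative assumptions.

from itertools import product

def cell_range(aabb, scene_min, cell_size, grid_res):
    """Return the inclusive range of grid cells overlapped by an AABB."""
    lo = [max(0, min(grid_res[i] - 1, int((aabb[0][i] - scene_min[i]) / cell_size[i])))
          for i in range(3)]
    hi = [max(0, min(grid_res[i] - 1, int((aabb[1][i] - scene_min[i]) / cell_size[i])))
          for i in range(3)]
    return lo, hi

def build_uniform_grid(aabbs, scene_min, cell_size, grid_res):
    n_cells = grid_res[0] * grid_res[1] * grid_res[2]

    # Pass 1: count how many primitive references land in each cell
    # (on a GPU this would be one atomic increment per overlapped cell).
    counts = [0] * n_cells
    for aabb in aabbs:
        lo, hi = cell_range(aabb, scene_min, cell_size, grid_res)
        for x, y, z in product(range(lo[0], hi[0] + 1),
                               range(lo[1], hi[1] + 1),
                               range(lo[2], hi[2] + 1)):
            counts[(z * grid_res[1] + y) * grid_res[0] + x] += 1

    # Pass 2: exclusive prefix sum gives each cell's offset into one flat array.
    offsets = [0] * (n_cells + 1)
    for i in range(n_cells):
        offsets[i + 1] = offsets[i] + counts[i]

    # Pass 3: scatter primitive indices into the flat reference array.
    refs = [0] * offsets[n_cells]
    cursor = list(offsets[:n_cells])
    for prim_id, aabb in enumerate(aabbs):
        lo, hi = cell_range(aabb, scene_min, cell_size, grid_res)
        for x, y, z in product(range(lo[0], hi[0] + 1),
                               range(lo[1], hi[1] + 1),
                               range(lo[2], hi[2] + 1)):
            c = (z * grid_res[1] + y) * grid_res[0] + x
            refs[cursor[c]] = prim_id
            cursor[c] += 1

    return offsets, refs  # cell i owns refs[offsets[i]:offsets[i+1]]

# Tiny usage example: two boxes in a 2x1x1 grid over [0,2]x[0,1]x[0,1].
if __name__ == "__main__":
    aabbs = [((0.1, 0.1, 0.1), (0.9, 0.9, 0.9)),
             ((1.1, 0.1, 0.1), (1.9, 0.9, 0.9))]
    offsets, refs = build_uniform_grid(aabbs, (0, 0, 0), (1, 1, 1), (2, 1, 1))
    print(offsets, refs)  # -> [0, 1, 2] [0, 1]

On a GPU, each of the three passes maps to a kernel launched over all primitives (or all cells), with the per-cell counters maintained via atomics and the prefix sum done with a parallel scan.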
Last week we continued our discussion and spent most of the time on Tim Sweeney's keynote about the future of interactive rendering. He made several interesting points:
  • The GPU shader programming model is limited and will not scale
  • Interactive graphics will use more techniques from film
  • It will require much more parallelism
  • In software, this will require high level, functional programming and a new style of vectorization
  • In hardware, this will require 4 TFLOPS of compute and 4 TB/s of bandwidth!
To give you an idea of what that last figure means: that is roughly 16,000 one-megapixel textures read per frame. I have to ask, are pixels the right primitive to be pushing around at these bandwidths?
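
For the curious, here is the arithmetic behind that number; note the 60 frames-per-second and 4-bytes-per-pixel (RGBA8) assumptions are mine, not from the talk.

# Back-of-the-envelope check of the bandwidth figure.
# Assumptions (mine, not from the keynote): 60 frames per second and
# 4 bytes per pixel for a one-megapixel texture.
bandwidth_bytes_per_s = 4e12                    # 4 TB/s
fps = 60
bytes_per_frame = bandwidth_bytes_per_s / fps   # ~66.7 GB per frame
texture_bytes = 1_000_000 * 4                   # one 1M-pixel RGBA8 texture = 4 MB
textures_per_frame = bytes_per_frame / texture_bytes
print(f"{textures_per_frame:,.0f} one-megapixel textures per frame")  # ~16,667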

Next week we'll finish our discussion of this event.

Best,

Ben.
