Monday, May 19, 2008

Parthenon: round two

Hi folks,

We never got to Parthenon last time -- I forget why! So let's try again as we restart for the summer. I believe Dave has scanned the full paper and can make it available electronically. He'll put it on the lab's internal wiki because it's copyrighted -- if you don't have access, please email him at davecrist@mac.com.

Best,

Ben.

***

Hi folks,

Today we'll discuss Parthenon rendering, a GPU-based method for accelerating global illumination. The full paper isn't online, I'm afraid -- it's a chapter in GPU Gems 2. But you can find some of the content here:

http://www.bee-www.com/parthenon/

... and the full content will be available for our perusal at the meeting.

Best,

Ben.

1 comment:

  1. Hi folks,

Dave, Alejandro, and I talked about Parthenon yesterday. Essentially, the technique uses a large number of parallel projections of the scene (on the order of 1K), with depth peeling to handle occlusions, to sample the space of directions each surface point "sees" for the "final gather" during rendering.

    Doing this on the GPU greatly accelerates offline global illumination.
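
    To make sure we understood the buffer-building step, here's a rough CPU-side sketch (Python/numpy) of how we think it works. The real renderer rasterizes the actual scene on the GPU and keeps multiple depth-peeled layers per direction; this stand-in uses a point-cloud "scene", a single software z-buffer per direction, and names/parameters (build_direction_buffers, RES, N_DIRS, extent) that are ours, not the paper's.

    import numpy as np

    RES = 64        # buffer resolution per direction (our choice)
    N_DIRS = 128    # number of global parallel projections (paper uses ~1K)

    def sample_sphere(n, rng):
        """Roughly uniform directions on the unit sphere."""
        z = rng.uniform(-1.0, 1.0, n)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        r = np.sqrt(1.0 - z * z)
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    def build_direction_buffers(points, radiance, dirs, extent=1.0):
        """For each direction d, orthographically project the points onto a
        plane perpendicular to d and keep, per texel, the radiance of the
        nearest point along d (a software z-buffer). The real method keeps
        several depth-peeled layers here; we keep only the first."""
        buffers = np.zeros((len(dirs), RES, RES, 3))
        for k, d in enumerate(dirs):
            # Orthonormal frame (u, v, d) for the projection plane.
            up = np.array([0.0, 1.0, 0.0]) if abs(d[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
            u = np.cross(up, d); u /= np.linalg.norm(u)
            v = np.cross(d, u)
            su, sv, depth = points @ u, points @ v, points @ d
            ix = np.clip(((su / extent) * 0.5 + 0.5) * (RES - 1), 0, RES - 1).astype(int)
            iy = np.clip(((sv / extent) * 0.5 + 0.5) * (RES - 1), 0, RES - 1).astype(int)
            zbuf = np.full((RES, RES), np.inf)
            for i in range(len(points)):          # naive per-point z-test
                if depth[i] < zbuf[iy[i], ix[i]]:
                    zbuf[iy[i], ix[i]] = depth[i]
                    buffers[k, iy[i], ix[i]] = radiance[i]
        return buffers

    rng = np.random.default_rng(0)
    dirs = sample_sphere(N_DIRS, rng)
    points = rng.uniform(-1.0, 1.0, (5000, 3))    # stand-in scene geometry
    radiance = rng.uniform(0.0, 1.0, (5000, 3))   # stand-in first-pass radiance
    buffers = build_direction_buffers(points, radiance, dirs)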

    The paper was a bit hard to parse (the author isn't a native English speaker), so we were a bit rushed at the end.

    A couple open questions:

    * The author kept stating that the method is meant for the second pass, with the first pass spreading photons around the environment. It's not clear why one couldn't simply render the environment using hardware for the first pass.

    * At first glance, anyway, Parthenon only adds one bounce of indirect illumination. The author talks about more bounces, I think, but we didn't have time to delve into that.

    * Of course, when a point looks up what's "seen" in a certain direction, it will in general not find exactly the intersection point along that direction in the buffers Parthenon generates. I assume there's some filtering/interpolation happening? (See the sketch after this list.)
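
    On that last point, here's what we imagine the lookup might look like, continuing the sketch above: snap the query direction to the nearest of the precomputed projection directions, then read that direction's buffer with bilinear filtering. The paper may well do something smarter (e.g. use the depth-peeled layers to find the first hit in front of the point); lookup_radiance and the filtering choice here are just our guesses.

    def lookup_radiance(point, w, dirs, buffers, extent=1.0):
        """Approximate radiance arriving at `point` from direction `w`,
        read from the precomputed buffers (see sketch above)."""
        k = int(np.argmax(dirs @ w))              # nearest sampled direction
        d = dirs[k]
        up = np.array([0.0, 1.0, 0.0]) if abs(d[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(up, d); u /= np.linalg.norm(u)
        v = np.cross(d, u)
        # Project the query point into that direction's buffer. Note: we read
        # whatever is nearest along d, ignoring whether it lies in front of or
        # behind the query point -- the real method's depth peeling presumably
        # resolves this; we drop it for simplicity.
        x = np.clip(((point @ u) / extent * 0.5 + 0.5) * (RES - 1), 0, RES - 1)
        y = np.clip(((point @ v) / extent * 0.5 + 0.5) * (RES - 1), 0, RES - 1)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, RES - 1), min(y0 + 1, RES - 1)
        fx, fy = x - x0, y - y0
        buf = buffers[k]
        # Bilinear filtering of the four neighboring texels.
        top = (1 - fx) * buf[y0, x0] + fx * buf[y0, x1]
        bot = (1 - fx) * buf[y1, x0] + fx * buf[y1, x1]
        return (1 - fy) * top + fy * bot

    # Example: a one-bounce gather at a point with normal n, averaging
    # cosine-weighted contributions over the global directions facing n.
    p = np.array([0.0, 0.0, 0.0])
    n = np.array([0.0, 0.0, 1.0])
    cosines = dirs @ n
    mask = cosines > 0.0
    gathered = sum(c * lookup_radiance(p, d, dirs, buffers)
                   for d, c in zip(dirs[mask], cosines[mask])) / max(int(mask.sum()), 1)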

    Ben.
