NVIDIA Turing GPUs with built-in ray-tracing


Burt Radnolds

They've been incorporating GPUs into Monte-Carlo planning for years now. Look up GPUMCD, among others.

I was aware that GPUs had been incorporated into TPS applications; however, this was being pitched as a major breakthrough in GPU tech, specifically the use of "ray-tracing," which has implications for rendering light (ahem, photons). It seemed that if it really was such a major breakthrough in GPU tech, and it truly had to do with modeling photons, it could significantly improve calculation times.
 
Peanut gallery comment from a physicist who has programmed GPUs:

In regular ray-tracing applications, photon paths are more deterministic: you point a photon in direction A, it reflects off the surface and goes to position B. Do this 100 times for the same direction and you get the same result. MV photons are different; some will scatter one way, some will scatter the other way. You won't be able to apply the same technique directly to MV treatment.
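To make that concrete, here's a toy sketch (my own illustration, not code from any planning system): specular reflection is a pure function of the incoming direction, so 100 identical rays give one identical answer, whereas an MV photon's next direction has to be sampled from a distribution, so 100 histories give roughly 100 different answers.

```python
# Toy illustration of deterministic vs. stochastic photon transport.
# Not from any TPS; function names and the uniform-angle "scatter" are
# stand-ins (a real code would sample an actual cross-section).
import math
import random

def reflect(direction, normal):
    """Deterministic specular reflection: same input, same output, every time."""
    dot = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2 * dot * n for d, n in zip(direction, normal))

def scatter(rng):
    """Stochastic scatter: the outgoing direction is drawn from a distribution."""
    theta = rng.uniform(0, math.pi)
    phi = rng.uniform(0, 2 * math.pi)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

rng = random.Random()
d, n = (0.0, 0.0, -1.0), (0.0, 0.0, 1.0)
print(len({reflect(d, n) for _ in range(100)}))   # 1 unique outcome
print(len({scatter(rng) for _ in range(100)}))    # ~100 unique outcomes
```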

With that said, the enabling technology for ray tracing on GPUs is AI and deep learning. These technologies could be applied to deep-learn treatment planning and the outcome of Monte Carlo. This does not require new GPUs, just new software to be built and trained -- and trust me, it can be built. This is all very possible. But it won't be as accurate, because the ground truth will be Monte Carlo. Is that an acceptable compromise? For ray tracing, absolutely, because it's good enough that the user can't really see the difference. For treatment planning, it's less clear: a 2% error that would be visually invisible is still one we might want to mitigate. QC is going to be a problem. AI is a black box. Even if we see <1% error in 100 cases, how do we know that the 101st patient isn't going to have some really weird anatomy that generates a 10% error?
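In case it isn't obvious what "deep-learning the outcome of Monte Carlo" would look like in practice, here is a minimal PyTorch sketch on fake 1D data (entirely my own toy; the shapes, the tiny network, and the synthetic "dose profile" are all made up for illustration). The point is only that the network's ceiling is the Monte Carlo data it's trained on, and nothing in the training loop bounds the error on an out-of-distribution patient.

```python
# Toy sketch: regress cheap, noisy Monte Carlo dose estimates onto a
# converged "ground truth". Everything here (1D profiles, the small MLP,
# the synthetic Gaussian dose curve) is invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

x = torch.linspace(0, 1, 64)
truth = torch.exp(-((x - 0.4) ** 2) / 0.02)       # stand-in for a converged MC dose profile
noisy = truth + 0.05 * torch.randn(1000, 64)      # 1000 cheap, low-history "runs"
target = truth.expand(1000, 64)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(noisy), target)
    loss.backward()
    opt.step()

print(f"training loss: {loss.item():.5f}")
# The QC problem from above: this number says nothing about the error on a
# patient whose anatomy looks nothing like what the model was trained on.
```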

An example of this tradeoff between accuracy and speed is the Folding@Home software that was popular many years ago. This software ran protein folding in the background (very compute-intensive) and would use GPUs if available. The problem is that consumer GPUs have RAM that is sensitive to cosmic rays ... you know, the ones that give you background radiation. Does this matter for movies or video games? If one pixel is off for 1/30 of a second, you will never notice. But for a complex computation, there might be a 1 in 10,000 chance that a variable gets messed up. So the designers of this software decided to get around this by folding each protein configuration *twice*, to ensure that a cosmic ray didn't mess things up.
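The "fold it twice" idea itself is trivial to express; something like the sketch below captures it (my own toy, not Folding@Home's actual work-unit validation, and the function names are hypothetical): run the same deterministic work unit twice and refuse to trust a result that doesn't reproduce.

```python
# Toy sketch of redundant computation as a defence against silent memory
# corruption. Purely illustrative; not Folding@Home's real scheme.
import random

def fold_work_unit(seed):
    # Stand-in for an expensive, deterministic computation on one work unit.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000))

def checked_fold(seed):
    first = fold_work_unit(seed)
    second = fold_work_unit(seed)   # repeat the identical work unit
    if first != second:
        # A non-reproducible result suggests hardware corruption (e.g. a
        # flipped bit in memory); better to reject it than silently trust it.
        raise RuntimeError("results disagree; discard and rerun this work unit")
    return first

print(checked_fold(42))
```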
 