
High Performance Graphics 2020

July 13-16, hosted online

Program

We are pleased to announce the invited speaker program for High-Performance Graphics 2020, which begins next Monday at 9:00am PDT.

Our goal was to create a program that combines talks on the state of the art in high-performance rendering with talks on how advanced rendering systems are poised to enable new applications: scene understanding, generating training data for machine learning, and photorealistic, across-the-globe human interaction in AR/VR.

HPG 2020 will be taking place online from July 13th to July 16th, with all talks streamed live to the public on Twitch. (Talks will also be recorded for later viewing on YouTube.)

On MONDAY, JULY 13th, Chris Wyman of NVIDIA will talk about how ray tracing algorithms are changing to achieve real-time path tracing on the GPU. Holger Gruen of Intel will discuss end-to-end system level challenges that arise when ray tracing complex, real-world game scenes.

On TUESDAY, JULY 14th, Yaser Sheikh of Facebook Reality Labs will describe how the latest human capture and real-time neural rendering techniques are poised to enable photorealistic 3D videoconferencing in AR/VR.

On WEDNESDAY, JULY 15th, Wenzel Jakob of EPFL will talk about the rapidly advancing field of differentiable rendering, which is enabling new applications in inverse rendering and scene understanding. Matt Pharr of NVIDIA will talk about his experiences porting PBRT to the GPU, while keeping much of the easy-to-understand C++ codebase completely intact!

On THURSDAY, JULY 16th, Manolis Savva of Simon Fraser University will talk about how, in the future, most rendered images will be consumed not by human eyeballs but by machine learning algorithms training intelligent agents, and why there is a need to render images at tens of thousands of frames per second!

See our Frequently Asked Questions about how to view and participate in this year’s online HPG.

Full Program

All times are given in US Pacific Daylight Time (UTC-7).

Monday, July 13

9:00-9:10 HPG 2020 Opening
9:10-10:10 Keynote: Chris Wyman (Principal Research Scientist, NVIDIA)

Reframing Light Transport for Real-Time

monday_wyman.pdf slides

10:10-10:25 Break
10:25-10:45 Towards Fully Ray-Traced Games: Addressing System-Level Challenges (Holger Gruen, Intel)

monday_gruen.pdf slides

10:45-12:00 Technical Papers: High-Performance Rendering

Tuesday, July 14

9:00-10:00 Keynote: Yaser Sheikh (Director, Facebook Reality Labs Pittsburgh)

Photorealistic Telepresence
tues_sheikh.pdf slides

10:00-10:15 Break
10:15-10:20 Posters fast forward
10:20-10:45 Virtual social mixer for premium registrants (on Zoom)
10:45-12:00 Technical Papers: Image-Based Computing

12:00-12:20 Zoom Conversations with Poster Authors

  • Evaluation of Graphics-based General Purpose Computation Solutions for Safety Critical Systems: An Avionics Case Study
    (Marc Benito, Matina Maria Trompouki, Leonidas Kosmidis, Juan David Garcia, Sergio Carretero, Ken Wenger)
    01_benito_SCS.pdf poster, 01_benito_SCS_abstract.pdf abstract
    Poster Breakout Meeting: A
  • Euclid NIR GPU: Embedded GPU-accelerated Near-Infrared Image Processing for On-board Space Systems.
    (Ivan Rodriguez, Leonidas Kosmidis)
    02_rodriguez_euclid_NIR.pdf poster, 02_rodriguez_euclid_NIR_abstract.pdf abstract
    Poster Breakout Meeting: B
  • Fast Eye-Adaptation for High Performance Mobile Applications. (Morteza Mostajab, Theodor Mader)
    03_mostajab_fast_eye_adaptation.pdf poster, 03_mostajab_fast_eye_adaptation_abstract.pdf abstract
    Poster Breakout Meeting: C

Wednesday, July 15

9:00-10:00 Keynote: Wenzel Jakob (Assistant Professor, EPFL)

Differentiable Simulation of Light: Why it is Important, and What Makes it Hard!
wed_jakob.pdf slides

10:00-10:15 Break
10:15-10:45 Invited talk: Matt Pharr (Distinguished Research Scientist, NVIDIA)
Porting PBRT to the GPU While Preserving its Soul
wed_pharr.pdf slides
10:45-12:00 Technical Papers: Rendering Thin or Transparent Objects

12:00-12:20 Zoom Conversations with Poster Authors

  • Improved Triangle Encoding for Cached Adaptive Tessellation
    (Linus Horvàth, Bernhard Kerbl, Michael Wimmer)
    05_horvath_triangle_encoding.pdf poster, 05_horvath_triangle_encoding_abstract.pdf abstract
    Poster Breakout Meeting: A
  • Iterative GPU Occlusion Culling with BVH
    (Gi Beom Lee, Sungkil Lee)
    04_lee_iterative_occlusion_culling.pdf poster, 04_lee_iterative_occlusion_culling_abstract.pdf abstract
    Poster Breakout Meeting: B
  • Ray-casting inspired visualisation pipeline for multi-scale heterogeneous objects
    (Evgeniya Malikova)
    student competition submission
    Poster Breakout Meeting: C

Thursday, July 16

8:00-9:00 HPG Town Hall (on Zoom only)
9:00-9:10 Short Talks from HPG Sponsors
9:10-10:10 Keynote: Manolis Savva (Assistant Professor, Simon Fraser University)
3D Graphics System Challenges for Simulation: Lessons from AI Habitat
thur_savva.pdf slides
10:10-10:25 Break
10:25-11:40 Technical Papers: Hardware Architectures and Space Partitioning

11:40-12:00 HPG Closing (announcement of the Best Paper and Test of Time awards)
12:00-TBD After party for premium registrants (on Zoom)

Keynote: Chris Wyman (Principal Research Scientist, NVIDIA)
Reframing Light Transport for Real-Time

monday_wyman.pdf slides

Abstract

As real-time ray tracing becomes ubiquitous, researchers and engineers get to define its uses. We’re still early in adoption, with many applications relying on pre-existing ray tracing algorithms designed decades ago under wildly different system constraints. But ask yourself: would you use the same sorting or tree building algorithm on the CPU and the GPU? If not, why use the same lighting algorithms and data structures? The emergence of ray tracing hardware presents a tremendous opportunity for immediately impactful research defining how to efficiently ray and path trace under a streaming, parallel programming model. This talk briefly reviews important constraints of real-time rendering and presents existence proofs that reframing light transport for real-time is feasible. I also highlight key takeaways from our recent research on spatiotemporal importance resampling, which shows one concrete way to rethink lighting algorithms.
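
The spatiotemporal importance resampling mentioned above builds on weighted reservoir sampling, which lets a pixel pick one candidate light sample from a stream, and merge its pick with a neighbour's, in constant time and memory. The sketch below illustrates only that generic building block under simplified assumptions; the full technique's target PDFs, unbiased contribution weights, and visibility reuse are omitted, and the class and function names are hypothetical.

```python
import random

class Reservoir:
    """Weighted reservoir: selects one candidate from a stream with
    probability proportional to its weight, in O(1) memory."""
    def __init__(self):
        self.sample = None   # currently selected candidate
        self.wsum = 0.0      # running sum of candidate weights
        self.m = 0           # number of candidates seen so far

    def update(self, candidate, weight, rng=random.random):
        self.wsum += weight
        self.m += 1
        # Replace the kept sample with probability weight / wsum.
        if weight > 0 and rng() < weight / self.wsum:
            self.sample = candidate

def merge(a, b, rng=random.random):
    """Combine two reservoirs (e.g. a pixel's reservoir with a spatial or
    temporal neighbour's) without revisiting their candidate streams."""
    out = Reservoir()
    out.update(a.sample, a.wsum, rng)
    out.update(b.sample, b.wsum, rng)
    out.m = a.m + b.m
    return out
```

The key property for real-time use is that `merge` costs the same whether each input reservoir has seen ten candidates or ten thousand, which is what makes reuse across frames and neighbouring pixels affordable.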

Bio

Chris Wyman is a Principal Research Scientist in NVIDIA’s real-time rendering research group, where he works on a variety of problems including lighting, shadowing, global illumination, BRDFs, sampling, filtering, denoising, and antialiasing, as well as how to efficiently build GPU algorithms and data structures to solve them. Prior to NVIDIA, he was an Associate Professor at the University of Iowa; he holds a PhD from the University of Utah and a BS from the University of Minnesota.

Towards Fully Ray-Traced Games: Addressing System-Level Challenges (Holger Gruen, Intel)

monday_gruen.pdf slides

Abstract

Hardware-accelerated real-time ray tracing can solve long-standing problems in real-time rendering that have been challenging to address with rasterization-based approaches. However, real-time ray tracing can be limited in practical scenarios found in modern games, for example by complex dynamic content such as procedurally animated foliage or destructible scenes, highly detailed compressed or tessellated geometry, incoherent shading requests, and large amounts of mid-traversal “shading” needed for alpha-textured geometry. We tackle the limitations of current programming models and hardware in these scenarios and discuss potential solutions for the future.

Bio

Holger Gruen ventured into 3D graphics over 27 years ago, writing fast CPU software rasterizers. He has since worked for middleware companies, game studios, and a military simulation company, and spent 13 years in developer technology roles at GPU IHVs. He now works as a principal engineer in the XPU Technology and Research group at Intel.

Keynote: Yaser Sheikh (Director, Facebook Reality Labs Pittsburgh)
Photorealistic Telepresence

tues_sheikh.pdf slides

Abstract

Telepresence has the potential to bring billions of people into artificial reality (AR/MR/VR). It is the next step in the evolution of telecommunication, from telegraphy to telephony to videoconferencing. In this talk, I will describe early steps taken at FRL Pittsburgh towards achieving photorealistic telepresence: real-time social interactions in AR/VR with avatars that look like you, move like you, and sound like you. If successful, photorealistic telepresence will motivate the concurrent development of the next generation of algorithms and computing platforms for computer vision and computer graphics. In particular, I will introduce codec avatars: the use of neural networks to unify the computer vision (inference) and computer graphics (rendering) problems in signal transmission and reception. The creation of codec avatars requires capture systems of unprecedented 3D sensing resolution, which I will also describe.

Bio

Yaser Sheikh directs the Facebook Reality Lab in Pittsburgh, devoted to achieving photorealistic social interactions in augmented reality (AR) and virtual reality (VR). He is an associate professor (on leave) at the Robotics Institute, Carnegie Mellon University. His research broadly focuses on machine perception and rendering of social behavior, spanning sub-disciplines in computer vision, computer graphics, and machine learning. He is specifically interested in precisely measuring and modeling the full spectrum of social behavior. His research has been featured by various news and media outlets including The New York Times, BBC, CBS, WIRED, and The Verge. With colleagues and students, he has won the Hillman Fellowship (2004), Honda Initiation Award (2010), Popular Science’s “Best of What’s New” Award (2014), as well as several conference best paper and demo awards.

Keynote: Wenzel Jakob (Assistant Professor, EPFL)
Differentiable Simulation of Light: Why it is Important, and What Makes it Hard!

wed_jakob.pdf slides

Abstract

Progress on differentiable rendering over the last two years has been remarkable, making these methods a serious contender for solving truly hard inverse problems in computer graphics and beyond. However, a number of key challenges arise that often make differentiable rendering very difficult to use in practice. In this talk, I will give an intuition of what works, what doesn’t, and what it will take to elevate differentiable rendering to a trusted and efficient component of the practitioner’s toolbox. I will also showcase a new project, hot off the press, that addresses one of the major problems faced by algorithms in this area today.
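
The inverse problems mentioned above follow a common pattern: render an image from scene parameters, compare it to a reference, and push gradients of the error back through the renderer to update the parameters. The following toy sketch shows that loop on a deliberately trivial one-pixel "renderer" with a hand-derived gradient; it is an illustration of the idea only, not how Mitsuba or any real differentiable renderer works, and all names in it are hypothetical.

```python
def render(albedo, light=2.0):
    # Toy "renderer": one Lambertian pixel lit by a single light.
    return albedo * light

def dloss_dalbedo(albedo, target, light=2.0):
    # Derivative of the squared-error loss, obtained by differentiating
    # through the renderer: d/da (a*L - t)^2 = 2*(a*L - t)*L
    return 2.0 * (render(albedo, light) - target) * light

def invert(target, albedo=0.0, lr=0.1, steps=100):
    """Gradient descent on the unknown albedo until the rendered
    pixel matches the target observation."""
    for _ in range(steps):
        albedo -= lr * dloss_dalbedo(albedo, target)
    return albedo
```

Real renderers replace the hand-written derivative with automatic differentiation, and the hard part (a theme of the talk) is that visibility changes and Monte Carlo estimation make those derivatives far less well-behaved than in this smooth toy case.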

Bio

Wenzel Jakob is an assistant professor at EPFL’s School of Computer and Communication Sciences, and is leading the Realistic Graphics Lab (https://rgl.epfl.ch/). His research interests revolve around inverse graphics, material appearance modeling and physically based rendering algorithms. Wenzel is the recipient of the ACM SIGGRAPH Significant Researcher award and the Eurographics Young Researcher Award. He is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of “Physically Based Rendering: From Theory To Implementation”. (http://pbrt.org/)

Invited talk: Matt Pharr (Distinguished Research Scientist, NVIDIA)
Porting PBRT to the GPU While Preserving its Soul

wed_pharr.pdf slides

Abstract

The primary goals of PBRT, the ray tracer described in the book *Physically Based Rendering*, are pedagogical: we have tried to demonstrate how to implement a state-of-the-art rendering system from start to finish. There has always been a tension in that goal—on one hand, we would like to discuss all of the details of making a renderer run at peak performance; on the other, the code must remain clean enough that the algorithms remain understandable. Thus, PBRT is multi-threaded, but doesn’t use CPU SIMD instructions, for example. In this talk I’ll discuss some recent work in bringing PBRT to the GPU while maintaining its simplicity. The end result is a system with minimal modifications that runs on both CPU and GPU, with substantially higher performance on the latter architecture.

Bio

Matt Pharr is a Distinguished Research Scientist at NVIDIA where he works on ray-tracing and real-time rendering. He is an author of the book Physically Based Rendering, for which he and the co-authors were awarded a Scientific and Technical Academy Award in 2014 for the book’s impact on the film industry.

Keynote: Manolis Savva (Assistant Professor, Simon Fraser University)
3D Graphics System Challenges for Simulation: Lessons from AI Habitat

thur_savva.pdf slides

Abstract

Computer graphics systems are increasingly being used to power 3D simulation infrastructure for developing and evaluating computer vision, robotics, and machine learning methods. The confluence of recent advances in 3D data acquisition and scalable machine learning algorithms has enabled a flourishing of simulation platforms that focus on large-scale learning from realistic 3D environments. These efforts have demonstrated the power of 3D graphics systems to accelerate work in adjacent research areas. At the same time, this use case differs dramatically from interactive systems that produce visuals for human consumption, for which many of the underlying graphics systems were originally designed. This mismatch can lead to inefficiencies and performance bottlenecks which emerge particularly when the 3D graphics producer systems are tightly coupled with machine learning consumer systems. In this talk, I will discuss recent work on the AI Habitat platform, connect it with more general trends in 3D simulation for machine learning and related fields, and finally describe newly emerging challenges for high-performance graphics systems in this domain.
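
The producer/consumer coupling described above can be sketched in miniature: when each hand-off to the learner carries fixed overhead, passing batches of rendered frames rather than single frames amortizes that cost. This is a toy illustration of the general idea only, not AI Habitat's architecture, and the names (`env_step`, `learner_update`, `run_episode`) are hypothetical.

```python
def run_episode(env_step, learner_update, num_frames, batch_size):
    """Drive a simulator (producer) and a learner (consumer),
    handing frames over in batches to amortize per-call overhead."""
    batch = []
    updates = 0
    for i in range(num_frames):
        batch.append(env_step(i))      # render one observation
        if len(batch) == batch_size:   # hand over a full batch
            learner_update(batch)
            updates += 1
            batch = []
    if batch:                          # flush any trailing partial batch
        learner_update(batch)
        updates += 1
    return updates
```

At the tens-of-thousands-of-frames-per-second rates discussed in the talk, this kind of batching, and deeper coupling such as keeping rendered frames on the GPU next to the learner, is what separates a usable simulation platform from one bottlenecked on hand-offs.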

Bio

Manolis Savva is an Assistant Professor of Computing Science at Simon Fraser University. His research focuses on analysis, organization and generation of 3D content through a human-centric lens of “common sense” semantics. The methods that he works on are stepping stones towards holistic 3D scene understanding revolving around people, with applications in computer graphics, computer vision, and robotics. Prior to his current position he was a visiting researcher at Facebook AI Research and a postdoctoral research associate at the Princeton University Computer Graphics and Vision Labs. He received his Ph.D. from Stanford University, under the supervision of Pat Hanrahan. His undergraduate degree was in Physics and Computer Science at Cornell University.