Student Competition
Warning: The code in this competition runs in the browser and is intentionally computationally costly; it may crash your browser or computer, especially on low-powered machines or if you have many other tabs open.
Entries
- Akshay Jindal (entry) — 🏆 Honorable Mention
- Description: My Shadertoy demo builds upon the base path tracer provided for the HPG Student Competition 2022. It integrates the SVGF denoising algorithm [1] (implementation adapted from [2]) into the base path tracer. Filtering was done at 4 scales (1, 2, 4, 8), achieving a final average frame rate of 170 FPS compared to 320 FPS for the provided baseline (RTX 3080, Microsoft Edge). The temporal reprojection code was simplified by assuming a slow camera velocity. NUM_SAMPLES was set to 2, NUM_BOUNCES to 6, and Exposure was increased to 4 to reduce the FLIP mean error. The SVGF parameters were left unchanged. A sketch of the edge-aware filter step is given after the references below.
References:
[1] Schied, Christoph, et al. "Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination." Proceedings of High-Performance Graphics. 2017. 1-12.
[2] Dzhoganov, A., 2020. Real-time GI Copy. [online] Shadertoy.com. Available at: https://www.shadertoy.com/view/tlXfRX [Accessed 25 June 2022].
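For illustration, one iteration of an SVGF-style edge-aware à-trous filter, as described in this entry, might look like the GLSL sketch below. The buffer layout, function names, and weight constants are assumptions made for this sketch, and it omits SVGF's variance-guided luminance weight, so it is not the entry's actual code.

```glsl
// Minimal sketch of one edge-aware à-trous filter iteration in the spirit of SVGF.
// Buffer layout, channel names, and weight constants are illustrative assumptions.
vec3 atrousFilter(sampler2D colorTex, sampler2D normalDepthTex, ivec2 p, int stepSize) {
    const float kernel[3] = float[3](3.0 / 8.0, 1.0 / 4.0, 1.0 / 16.0); // 5-tap B-spline weights
    const float sigmaN = 32.0;  // normal edge-stopping strength (assumed)
    const float sigmaZ = 1.0;   // depth edge-stopping strength (assumed)

    vec4 centerND = texelFetch(normalDepthTex, p, 0);   // xyz = normal, w = depth
    vec3 sum = vec3(0.0);
    float wSum = 0.0;

    for (int dy = -2; dy <= 2; ++dy) {
        for (int dx = -2; dx <= 2; ++dx) {
            ivec2 q = p + ivec2(dx, dy) * stepSize;      // dilated footprint: 1, 2, 4, 8
            vec4 nd = texelFetch(normalDepthTex, q, 0);

            // Edge-stopping weights: keep contributions from similar normals and depths.
            float wN = pow(max(dot(centerND.xyz, nd.xyz), 0.0), sigmaN);
            float wZ = exp(-abs(centerND.w - nd.w) / sigmaZ);
            float h  = kernel[abs(dx)] * kernel[abs(dy)];
            float w  = h * wN * wZ;

            sum  += texelFetch(colorTex, q, 0).rgb * w;
            wSum += w;
        }
    }
    return sum / max(wSum, 1e-4);
}
```

Calling a pass like this four times with stepSize = 1, 2, 4, 8 corresponds to the four filter scales mentioned in the description.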
- Mats Busse (entry)
- The improvements come from an anti-aliasing technique that applies a Gaussian blur across pixels sharing the same normal (see the sketch below). The light sampling function also assigns each light source a different probability depending on its distance, angle, and visibility. On top of that, I sacrificed bounces for samples.
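A normal-guided blur of this kind might look like the following GLSL sketch, which averages a small Gaussian-weighted neighborhood but only accepts neighbors whose stored normal (nearly) matches the center pixel's. The texture names and the 0.99 matching threshold are illustrative assumptions, not the entry's code.

```glsl
// Minimal sketch of an anti-aliasing blur restricted to pixels with the same normal.
// colorTex and normalTex are assumed G-buffer-style inputs; the threshold is arbitrary.
vec3 normalMatchedBlur(sampler2D colorTex, sampler2D normalTex, ivec2 p) {
    const float w[3] = float[3](0.25, 0.5, 0.25);   // separable 3x3 Gaussian weights
    vec3 centerN = texelFetch(normalTex, p, 0).xyz;

    vec3 sum = vec3(0.0);
    float wSum = 0.0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            ivec2 q = p + ivec2(dx, dy);
            vec3 n = texelFetch(normalTex, q, 0).xyz;
            // Only blur across pixels whose normal matches the center's.
            if (dot(n, centerN) > 0.99) {
                float g = w[dx + 1] * w[dy + 1];
                sum  += texelFetch(colorTex, q, 0).rgb * g;
                wSum += g;
            }
        }
    }
    return sum / max(wSum, 1e-4);
}
```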
- Ishaan Shah (entry)
- Authors:
Ishaan Shah (ishaan.shah@research.iiit.ac.in)
Rahul Goel (rahul.goel@research.iiit.ac.in)
Chandradeep Pokhariya (chandradeep.pokhariya@research.iiit.ac.in)
Description of improvements made:
- While choosing which light to sample, we only choose lights that are on the correct side of the shading point, i.e. dot(cube_pos.xyz - shade_point, normal) > 0.
- We choose the light to be sampled with a probability proportional to I/r² (see the sketch after this list).
- While choosing the face from which to sample, we check which octant the shading point lies in with respect to the light cube's coordinate frame. We only sample from the 3 faces that are visible from the shading point.
- We sample from the BRDF using the VNDF technique described in https://jcgt.org/published/0007/04/01/.
- We perform intersection tests in two stages: first we intersect with bounding boxes surrounding the letters, and only if that intersection succeeds do we intersect with the individual components.
- We apply Russian roulette to terminate early the paths that won't contribute much to the final image.
- We use more samples for Next Event Estimation, i.e. from each point we cast 2 light rays instead of 1.
- We use the blue noise texture to generate random numbers as it is more efficient and distributes the error better.
- We focused on keeping the renderer unbiased.
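A light-selection routine combining the first two points above might look like the GLSL sketch below: lights on the wrong side of the shading normal get zero weight, and the remaining ones are drawn with probability proportional to intensity over squared distance. The array layout, names, and single-random-number CDF walk are illustrative assumptions rather than the entry's actual code.

```glsl
// Minimal sketch of one-sided light culling plus selection proportional to I/r^2.
// lightPos, lightIntensity, and NUM_LIGHTS are hypothetical placeholders.
const int NUM_LIGHTS = 4;

int pickLight(vec3 shadePoint, vec3 normal, vec3 lightPos[NUM_LIGHTS],
              float lightIntensity[NUM_LIGHTS], float u, out float pdf) {
    float weight[NUM_LIGHTS];
    float total = 0.0;
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        vec3 toLight = lightPos[i] - shadePoint;
        // Cull lights on the wrong side of the shading point.
        bool visibleSide = dot(toLight, normal) > 0.0;
        // Weight the remaining lights by intensity over squared distance.
        weight[i] = visibleSide ? lightIntensity[i] / dot(toLight, toLight) : 0.0;
        total += weight[i];
    }
    if (total <= 0.0) { pdf = 0.0; return -1; }   // no light can contribute

    // Draw from the discrete distribution with a single uniform random number u.
    float cdf = 0.0;
    for (int i = 0; i < NUM_LIGHTS; ++i) {
        cdf += weight[i] / total;
        if (u <= cdf) { pdf = weight[i] / total; return i; }
    }
    pdf = weight[NUM_LIGHTS - 1] / total;
    return NUM_LIGHTS - 1;
}
```

Returning the selection probability lets the caller divide it out of the next-event-estimation contribution, consistent with the goal of keeping the renderer unbiased.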
- Arthur Pereira Vala Firmino (entry) — 🏆 Honorable Mention
- Rewrote the ray-scene intersection, with geometry represented as precomputed 4x4 matrices (the inverses of their model-to-world transform matrices); see the sketch after this list.
- Light sampling also uses pre-computed 4x4 matrices instead of computing rotation matrices and offsets.
- Used a linear congruential generator as the pseudo-random number generator.
- Limited ray depth to 4 to increase number of samples per frame.
- Implemented temporal re-projection and accumulation to improve quality.
- Implemented an image filter very similar to SVGF, with a few differences: 4 instead of 5 filter levels (due to a ShaderToy limitation); a 7x7 kernel instead of 5x5 (to increase the pixel footprint); a material-based edge-stopping function; and temporal accumulation at each filter level (except the final one) to improve stability.
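Representing each object by the precomputed inverse of its model-to-world matrix lets the intersection routine transform the ray once and then test against a canonical shape. The GLSL sketch below shows this for a unit box; the function name and conventions are assumptions for illustration, not the entry's actual code.

```glsl
// Minimal sketch of intersecting a unit box whose placement is baked into a single
// precomputed world-to-object matrix (the inverse of its model-to-world transform).
float intersectTransformedBox(mat4 worldToObject, vec3 ro, vec3 rd) {
    // Transform the ray into the box's local frame with one matrix multiply each.
    vec3 o = (worldToObject * vec4(ro, 1.0)).xyz;
    vec3 d = (worldToObject * vec4(rd, 0.0)).xyz;

    // Standard slab test against the canonical box [-1, 1]^3.
    vec3 invD = 1.0 / d;
    vec3 t0 = (vec3(-1.0) - o) * invD;
    vec3 t1 = (vec3( 1.0) - o) * invD;
    vec3 tMin3 = min(t0, t1);
    vec3 tMax3 = max(t0, t1);
    float tNear = max(max(tMin3.x, tMin3.y), tMin3.z);
    float tFar  = min(min(tMax3.x, tMax3.y), tMax3.z);

    // Because the direction is transformed but not renormalized, the returned t
    // parameterizes the original world-space ray. Returns -1.0 on a miss.
    return (tNear <= tFar && tFar > 0.0) ? tNear : -1.0;
}
```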
- Georges Grondin (entry)
- Refactored intersection code:
- Grouped sets of primitives into a few larger AABBs
- Added a dedicated shadow-testing function that returns early as soon as it finds any intersection
- For each primitive group, moved computations outside the loop where possible
- Replaced the hash function used in random number generation with a faster one borrowed from https://jcgt.org/published/0009/03/02/ (see the sketch after this list)
- Replaced materials assignment branching statement with a lookup in an array of hardcoded values
- Refactored the lookat vector computation (making use of swizzling and eliminating an unnecessary transpose)
- Replaced light pdf function with hardcoded value
- Compute rotation matrices only once per frame instead of every sample
- Refactored BRDF computation to slightly reduce number of operations
- Various other very minor optimizations
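For the hash replacement, a commonly quoted PCG-style hash in the spirit of those evaluated in the linked paper is sketched below; it is reproduced from memory as an illustration and may not be the exact variant this entry uses.

```glsl
// Minimal sketch of a fast integer hash driving random number generation.
// A widely used PCG-family variant; constants are the commonly quoted ones.
uint pcgHash(uint v) {
    uint state = v * 747796405u + 2891336453u;
    uint word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

// Map the hashed seed to a float in [0, 1), advancing the seed each call.
float rand(inout uint seed) {
    seed = pcgHash(seed);
    return float(seed) * (1.0 / 4294967296.0);
}
```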
- Simon Lucas (entry)
- For this shader, I mainly worked on light sampling.
- I start by selecting one of the four lights according to its intensity and its distance to the intersection point.
- Once a light is selected, I use the "BRDF Importance Sampling for Polygonal Lights" method by Christoph Peters (SIGGRAPH 2021) in combination with "Real-Time Polygonal-Light Shading with Linearly Transformed Cosines" (Eric Heitz, Jonathan Dupuy, Stephen Hill, and David Neubelt) to sample a light direction very efficiently.
- To make this method compatible with the shader, I treat the cube lights as polygonal lights: at each intersection point, I extract the silhouette of the cube light, which lets it be handled as a polygon.
- With this light sampling method, the results are already much less noisy, but still far from optimal.
- The next step was to accumulate samples over several frames. To do this, I blend the previous frame with the current frame using reprojection (see the sketch after this list). My implementation reduces the noise but tends to blur the frame.
- On top of that, I added a temporal anti-aliasing pass to filter out the remaining high frequencies.
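A reprojection-and-accumulation step of this kind might look like the GLSL sketch below. The previous-frame matrix, history buffer, and fixed blend factor are illustrative assumptions; the entry's actual reprojection details may differ.

```glsl
// Minimal sketch of accumulating samples over frames by reprojecting into the
// previous frame and blending with the stored history.
vec3 temporalAccumulate(sampler2D historyTex, mat4 prevViewProj,
                        vec3 worldPos, vec3 currentColor) {
    // Project this frame's hit point into the previous frame's clip space.
    vec4 prevClip = prevViewProj * vec4(worldPos, 1.0);
    vec2 prevUV = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

    // Fall back to the current sample if the reprojected point is off-screen.
    if (any(lessThan(prevUV, vec2(0.0))) || any(greaterThan(prevUV, vec2(1.0))))
        return currentColor;

    // Blend toward the history; a high history weight reduces noise but blurs motion.
    vec3 history = texture(historyTex, prevUV).rgb;
    return mix(currentColor, history, 0.9);
}
```

A fixed history weight like the 0.9 above illustrates the trade-off mentioned in the description: it strongly reduces noise but tends to blur the frame under motion.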
- David Headrick (entry)
- Though I've done things with rasterized graphics before, path tracing is new to me, which made this a very fun project. Here is a list of the things I tried to do to increase the performance of the path tracer:
- BVHs around numbers/letters
- Decreased the loop count in encrypt_tea (see the sketch after this list)
- Added luminance median filter
- Changed the number of light samples to 2 during NEE
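Judging by its name, encrypt_tea in the base shader is a TEA-style hash; a generic version with an adjustable round count is sketched below. The key constants are commonly used ones for this hash, and the sketch is not necessarily identical to the competition shader's function; the point is simply that lowering the round count trades hash quality for speed.

```glsl
// Minimal sketch of a TEA-based hash with an adjustable round count.
// Key constants follow commonly used values; treat this as an illustration only.
uvec2 encryptTea(uvec2 v, int rounds) {
    uint sum = 0u;
    for (int i = 0; i < rounds; ++i) {
        sum += 0x9E3779B9u;   // TEA's golden-ratio delta
        v.x += ((v.y << 4) + 0xA341316Cu) ^ (v.y + sum) ^ ((v.y >> 5) + 0xC8013EA4u);
        v.y += ((v.x << 4) + 0xAD90777Du) ^ (v.x + sum) ^ ((v.x >> 5) + 0x7E95761Eu);
    }
    return v;
}
```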
- Suzuran Takikawa (entry)
- Here are the optimizations I made:
- Improved the performance of the path tracer by adding bounding-box tests for each letter, so fewer intersection tests are required when the ray does not hit a letter's box.
- Improved performance of the path tracer by changing various small bits of the code to reduce branching, for example in the intersect_box function.
- Used a mix of temporal accumulation and temporal anti-aliasing (buffers B-D) to reduce noise (see the sketch below).
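One way to combine temporal accumulation with a TAA-style resolve is sketched below: the reprojected history is clamped to the current frame's local color neighborhood before blending, which suppresses ghosting while still averaging noise over time. The buffer names, the clamp, and the blend factor are assumptions for illustration, not this entry's actual buffer setup.

```glsl
// Minimal sketch of a TAA resolve with neighborhood clamping on top of
// temporal accumulation. currentTex/historyTex are assumed ping-pong buffers.
vec3 taaResolve(sampler2D currentTex, sampler2D historyTex, ivec2 p, vec2 uvPrev) {
    vec3 current = texelFetch(currentTex, p, 0).rgb;

    // Color bounds of the current frame around this pixel.
    vec3 nMin = current;
    vec3 nMax = current;
    for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx) {
        vec3 c = texelFetch(currentTex, p + ivec2(dx, dy), 0).rgb;
        nMin = min(nMin, c);
        nMax = max(nMax, c);
    }

    // Clamp the reprojected history into those bounds, then blend.
    vec3 history = clamp(texture(historyTex, uvPrev).rgb, nMin, nMax);
    return mix(current, history, 0.9);
}
```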
- Yu Chengzhong (entry) — 🏆 First place winner
- Short description: This is a real-time path tracing renderer for the HPG student competition 2022. It runs at 260 FPS on an RTX 2070 Super graphics card.
The optimizations I made:
- Precompute low-frequency radiance information in the CubeMap buffer during shader compilation.
- Restore high-frequency (usually view-dependent) information, such as glossy reflections, at runtime (see the sketch after this list).
- MSAA-like anti-aliasing to fix artifacts on high-luminance boundaries.
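Splitting shading into a cached low-frequency term and a runtime view-dependent term might look like the GLSL sketch below. The sampler argument (standing in for the precomputed CubeMap buffer), the LOD-bias-as-roughness trick, and the constant specular weight are all assumptions for illustration, not the entry's code.

```glsl
// Minimal sketch: low-frequency radiance cached in a cubemap, with the
// view-dependent glossy term reconstructed at runtime.
vec3 shadeCached(samplerCube radianceCache, vec3 N, vec3 V, vec3 albedo, float roughness) {
    // Low-frequency diffuse: cached irradiance looked up by surface normal.
    vec3 irradiance = texture(radianceCache, N).rgb;
    vec3 diffuse = albedo * irradiance;

    // High-frequency glossy: reflected-direction lookup; a higher LOD bias
    // crudely stands in for rougher reflections.
    vec3 R = reflect(-V, N);
    vec3 glossy = texture(radianceCache, R, roughness * 6.0).rgb;

    // Constant specular weight; a real shader would use a Fresnel/BRDF term here.
    return diffuse + 0.04 * glossy;
}
```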
Rules
We invited you to participate in our Shadertoy student competition. Entries from students at all levels were encouraged and were eligible for our awards (see below)!
We provided an unoptimized path tracer implementation on shadertoy.
The goal was to achieve the highest possible quality compared to a brute-force reference (100k samples per pixel) without a significant performance cost.
In other words, submissions could increase the render time by at most a factor of 2, but would be ranked according to their image quality as measured using FLIP.
Update (6/14/2022): We have updated the quality evaluation metric from SSIM to FLIP.
Here is a link to our framework, where you can get started.
Timeline
Submission deadline: June 27th, 12:00 PST
Winners will be announced during the HPG conference.
Submission
Submissions had to fork the framework on Shadertoy and include “HPG 2022 student competition” in the title. A link to the Shadertoy had to be sent to studentcompetition@highperformancegraphics.org with a short description of the optimizations performed. Visibility was to be set to “unlisted” before the competition; we encourage you to set it to “public” afterwards.
Please note that upon submission you authorized HPG to display your shader during the conference as well as host it on the HPG 2022 website and YouTube channel. We give proper attribution to the authors.
Eligibility
Anyone who was a student at the time the work was completed was welcome to participate. We encouraged submissions from underrepresented groups in our community — please also see the HPG 2022 Diversity and Inclusion Program.
Prizes
We have multiple GPUs from our sponsors as awards. More details to come!