Overview

Final Render

Inspiration

Born and raised in a family where my mother worked as a tailor, I (Minh) learned about the world through the smell of fabric and newly cut cloth. These sensations have never faded, quietly finding their way back to me through dreams. From this place of remembrance, Mengzhu and I set out to tell a story that exists somewhere between waking and dreaming. Love, Thread & Pixels unfolds as a dreamlike moment—where reality softens, clouds rest upon the floor, warm light drifts in through the window, and the devotion of a tailor's craft lingers in the air—stitched together through pixels to preserve a memory shaped by love and time.

Statistics

  • Hardware Intel Core i7-10750H
  • Resolution 3840 × 2160
  • SPP 32
  • Render Time 5 hours

Features

  • Alpha Masking

    Alpha masking allows rays to pass through masked regions with a probability based on the alpha channel. We mainly modified code in instance.cpp to allow the renderer to read the alpha value at the current UV texture coordinate and generate a random number to decide whether a ray passes through. The most time-consuming part was realizing that alpha masking also had to be handled for NEE, which required overriding the transmittance function. If a ray passes through the alpha mask, we compensate the contribution by multiplying it by 1 - alpha.

    Comparison: with vs. without alpha masking
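The core decision described above can be sketched as follows. `passesThrough` and `transmittance` are hypothetical stand-ins for our changes in instance.cpp, not the actual function names:

```cpp
#include <cassert>

// Stochastic alpha test: a ray passes through the surface with
// probability (1 - alpha), driven by a uniform random number u in [0, 1).
bool passesThrough(float alpha, float u) {
    return u >= alpha; // alpha = 1 -> always opaque, alpha = 0 -> always passes
}

// NEE transmittance through a single alpha-masked surface: instead of a
// random decision, attenuate the light deterministically by (1 - alpha).
float transmittance(float alpha) {
    return 1.0f - alpha;
}
```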
  • Normal Mapping

    Normal mapping is a simple yet effective technique for simulating fine surface details without using complex geometry. We found this to be a relatively straightforward task: in instance.cpp, we only needed to fetch the normal from the normal map, remap it from [0, 1] to the [-1, 1] range, and transform it into the surface's shading frame.

    Comparison: with vs. without normal mapping
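The remap-and-transform step can be sketched like this; the tangent/bitangent/normal frame and the function name are our own stand-ins, not lightwave's API:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Remap a normal-map texel from [0,1]^3 to [-1,1]^3 and transform it from
// tangent space to world space using the surface's tangent (t),
// bitangent (b), and geometric normal (n).
Vec3 shadingNormal(Vec3 texel, Vec3 t, Vec3 b, Vec3 n) {
    Vec3 m { 2*texel.x - 1, 2*texel.y - 1, 2*texel.z - 1 };
    Vec3 w {
        m.x*t.x + m.y*b.x + m.z*n.x,
        m.x*t.y + m.y*b.y + m.z*n.y,
        m.x*t.z + m.y*b.z + m.z*n.z,
    };
    float len = std::sqrt(w.x*w.x + w.y*w.y + w.z*w.z);
    return { w.x/len, w.y/len, w.z/len };
}
```

A "flat" texel (0.5, 0.5, 1) maps back to the unperturbed surface normal.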
  • Low-Discrepancy Sampling

    A Halton sampler generates evenly distributed, quasi-random points to reduce noise in rendering. samplers/primes.h provides radical inverse functions using different prime bases. The main code in samplers/halton.cpp handles generating low-discrepancy Halton sequences, seeding per pixel, supporting multiple dimensions, and scrambling sequences to reduce visual correlation. We consider this feature one of the most difficult topics to understand.

    Comparison: independent vs. Halton sampling
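The building block behind the radical inverse functions in samplers/primes.h can be written in a few lines; this is a minimal textbook version, not our scrambled implementation:

```cpp
#include <cassert>
#include <cmath>

// Radical inverse: mirror the base-b digits of i around the radix point.
// The d-th dimension of the Halton sequence uses the d-th prime as base.
double radicalInverse(unsigned i, unsigned base) {
    double inv = 1.0 / base, result = 0.0, factor = inv;
    while (i > 0) {
        result += (i % base) * factor; // append the next digit
        i /= base;
        factor *= inv;
    }
    return result;
}
```

For base 2 this yields the van der Corput sequence: 1/2, 1/4, 3/4, 1/8, …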
  • Spotlight

    Spotlights emit light in a cone, with edge falloff adding realism. In this feature, we simply calculate the falloff and attenuate the light based on distance and angle in lights/spot.cpp. We also added new code in lightwave_blender/light.py to support exporting spotlights.

    Render with spotlights
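A common angular falloff model can be sketched as below; the smoothstep penumbra and the parameter names are assumptions for illustration, and the exact curve in lights/spot.cpp may differ:

```cpp
#include <cassert>
#include <cmath>

// Angular falloff between an inner (full-intensity) and outer
// (zero-intensity) cone, expressed via cosines of the respective angles.
float spotFalloff(float cosTheta, float cosOuter, float cosInner) {
    if (cosTheta <= cosOuter) return 0.0f;   // outside the cone
    if (cosTheta >= cosInner) return 1.0f;   // inside the bright core
    float t = (cosTheta - cosOuter) / (cosInner - cosOuter);
    return t * t * (3 - 2 * t);              // smoothstep across the penumbra
}

// Combine the falloff with inverse-square distance attenuation.
float spotIntensity(float power, float cosTheta, float cosOuter,
                    float cosInner, float distance) {
    return power * spotFalloff(cosTheta, cosOuter, cosInner)
                 / (distance * distance);
}
```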
  • Bloom

    Bloom makes bright areas of a scene glow, creating a soft, light-bleeding effect that enhances visual appeal. In postprocesses/bloom.cpp, we simply apply a Gaussian blur to spread bright areas into neighboring pixels when the luminance exceeds a user-defined threshold.

    Comparison: with vs. without bloom
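The threshold-then-blur idea can be sketched in 1D; the real post-process in postprocesses/bloom.cpp works on 2D images, and this kernel is an illustrative assumption:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Bloom sketch: keep only luminance above a threshold, spread it with a
// small Gaussian kernel, then add the result back onto the original.
std::vector<float> bloom(const std::vector<float>& lum, float threshold) {
    const float kernel[5] = {0.06136f, 0.24477f, 0.38774f, 0.24477f, 0.06136f};
    std::vector<float> out(lum.size(), 0.0f);
    for (size_t i = 0; i < lum.size(); ++i) {
        float bright = lum[i] > threshold ? lum[i] : 0.0f; // bright pass
        for (int k = -2; k <= 2; ++k) {                    // Gaussian spread
            int j = int(i) + k;
            if (j >= 0 && j < int(lum.size()))
                out[j] += bright * kernel[k + 2];
        }
    }
    for (size_t i = 0; i < lum.size(); ++i) out[i] += lum[i];
    return out;
}
```

A single bright pixel leaks energy into its neighbors while dark pixels stay untouched by the bright pass.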
  • Denoising

    Denoising is a magic tool that cleans up noisy renders, turning grainy images into smooth, polished results. Our denoiser, implemented in postprocesses/denoising.cpp, uses Intel's OIDN and takes auxiliary albedo and normal channels extracted from the corresponding integrator (aov.cpp). For albedo, we added a getAlbedo function to every BSDF. The main technical challenge was linking the OIDN library; to save time, we simply copied the library into our build folder.

    Comparison: with vs. without denoising
  • Improved Environment Sampling

    This sampling scheme improves rendering efficiency by sampling directions based on the environment map's luminance. Most changes occur in lights/envmap.cpp. The main implementation challenge was constructing a 2D probability distribution from the environment map's luminance, weighted by sin θ.

    Comparison: with vs. without improved environment sampling
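The weighting step at the heart of the 2D distribution can be sketched as follows; building the marginal and conditional CDFs from these weights is omitted, and the function name is our own:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Per-texel sampling weights for an equirectangular environment map:
// luminance times sin(theta), where theta depends on the pixel row.
// The sin(theta) factor compensates for the horizontal stretching of
// rows near the poles in the equirectangular parameterization.
std::vector<float> buildWeights(const std::vector<float>& luminance,
                                int width, int height) {
    const float PI = 3.14159265358979f;
    std::vector<float> w(luminance.size());
    for (int y = 0; y < height; ++y) {
        float sinTheta = std::sin(PI * (y + 0.5f) / height);
        for (int x = 0; x < width; ++x)
            w[y * width + x] = luminance[y * width + x] * sinTheta;
    }
    return w;
}
```

Even for a uniform map, rows near the poles receive smaller weights than rows near the equator.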
  • Iridescence

    The iridescence BSDF simulates surfaces that change color depending on the viewing angle. The paper was difficult for us to understand, with equations that were hard to translate into code, so we used Unity's implementation as a reference and implemented the BSDF in bsdfs/iridescence.cpp.

    Render with the iridescence BSDF
  • Basic Area Light Sampling

    This sampling scheme randomly selects a point on a light's surface with probability proportional to its area. We provide sampleArea implementations for every shape in shapes/*.cpp. We also implemented rectangle and triangle lights, but for triangle meshes we currently select triangles uniformly at random, without accounting for area differences. Additionally, our implementation only supports uniform scaling; handling non-uniformly scaled geometry would require separate compensation factors for correct sampling. Area light samples are retrieved and processed in lights/area.cpp.

    Comparison: with vs. without area light sampling
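For a rectangle, uniform area sampling is a two-line affair. The sketch below assumes a unit rectangle spanning [-1, 1]² in its local frame under uniform scaling, which matches the restriction noted above; the struct and names are ours:

```cpp
#include <cassert>

struct AreaSample { float x, y, pdf; };

// Map two uniform random numbers (u1, u2) in [0,1) to a point on a
// uniformly scaled rectangle; the PDF of an area sample is 1 / area.
AreaSample sampleRectangle(float u1, float u2, float scale) {
    float area = (2 * scale) * (2 * scale);
    return { scale * (2 * u1 - 1), scale * (2 * u2 - 1), 1.0f / area };
}
```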
  • Improved Area Light Sampling

    Improved area light sampling selects points on the light's surface based on the distance and angle relative to the shading point. This means points that contribute more to the shading (closer or more directly visible) are more likely to be sampled. The main challenge comes from the complex mathematics behind it. As a result, we only implemented the improved scheme for shapes/sphere.cpp by overloading sampleArea to include an additional shading position parameter.

    Comparison: basic vs. improved area light sampling
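One standard form of this idea, which may differ in detail from our sphere.cpp code, samples directions uniformly inside the cone the sphere subtends from the shading point. Its solid-angle PDF looks like this:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// PDF (over solid angle) of sampling a sphere light uniformly within the
// cone it subtends from a shading point at distance distToCenter.
// Nearby spheres subtend wide cones (low PDF); distant ones narrow cones.
float sphereSamplePdf(float sphereRadius, float distToCenter) {
    float sin2 = (sphereRadius * sphereRadius) / (distToCenter * distToCenter);
    float cosThetaMax = std::sqrt(std::max(0.0f, 1.0f - sin2));
    const float PI = 3.14159265358979f;
    return 1.0f / (2.0f * PI * (1.0f - cosThetaMax)); // uniform in the cone
}
```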
  • Thin Lens

    This thin-lens camera simulates depth of field by focusing on a chosen plane while blurring objects at other depths. In cameras/thinlens.cpp, we simply sample rays through a finite lens aperture and shift their origins on the lens to simulate depth of field.

    Comparison: with vs. without the thin lens
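The origin-shifting step can be sketched in 2D (one lateral axis plus depth); the struct and names are our own, and the real camera works in 3D with a sampled disk aperture:

```cpp
#include <cassert>
#include <cmath>

struct Ray2D { float ox, oz, dx, dz; };

// Thin-lens sketch: move the ray origin to a point on the lens aperture
// and aim it at the spot where the original pinhole ray pierces the
// focal plane (z = focalDistance). Points on that plane stay sharp;
// everything else blurs across lens samples.
Ray2D thinLensRay(float pinholeDx, float pinholeDz,
                  float lensX, float focalDistance) {
    float t = focalDistance / pinholeDz;       // pinhole ray hits z = focalDistance
    float fx = pinholeDx * t;                  // ...at lateral position fx
    float dx = fx - lensX, dz = focalDistance; // re-aim from the lens point
    float len = std::sqrt(dx * dx + dz * dz);
    return { lensX, 0.0f, dx / len, dz / len };
}
```

Whatever lens point is chosen, the resulting ray passes through the same focal-plane point, which is exactly what keeps the focal plane sharp.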
  • Custom Bokeh Shapes

    Bokeh is the aesthetic shape of out-of-focus highlights in a rendered image, created by the shape of the camera's lens aperture. The idea is straightforward: in cameras/thinlens.cpp, if a bokeh texture is provided, we accept lens points based on the texture's luminance, so rays are more likely to pass through bright regions.

    Comparison: with vs. without custom bokeh
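The acceptance test amounts to one comparison. In this sketch the texture lookup is abstracted as a callable, and the function name is hypothetical:

```cpp
#include <cassert>
#include <functional>

// Rejection step for custom bokeh: accept a candidate lens point (u, v)
// with probability equal to the bokeh texture's luminance there.
// xi is a fresh uniform random number in [0, 1); rejected candidates are
// re-sampled, so bright texture regions shape the aperture.
bool acceptLensPoint(float u, float v, float xi,
                     const std::function<float(float, float)>& luminance) {
    return xi < luminance(u, v);
}
```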
  • MIS Path Tracer

    MIS is a robust integrator that combines the strengths of BSDF and NEE sampling, producing images with less noise. The main challenge in implementing it was retrieving the PDF from all BSDFs (bsdfs/*.cpp) and lights (lights/*.cpp). The sampling logic is handled in integrators/pathtracer_mis.cpp, where rays are sampled from both the BSDF and NEE and then combined using MIS weights.

    Comparison: raw BSDF sampling vs. NEE vs. the final MIS result
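The combination step uses an MIS weight per sample; the balance heuristic is one standard choice (the power heuristic is another), and we do not claim it is the exact one in pathtracer_mis.cpp:

```cpp
#include <cassert>
#include <cmath>

// Balance heuristic: weight for a sample drawn from strategy A when
// strategy B could also have produced it. Each strategy's contribution is
// multiplied by its weight, so the combined estimator avoids double
// counting while favoring whichever strategy has the higher PDF.
float balanceHeuristic(float pdfA, float pdfB) {
    if (pdfA + pdfB == 0.0f) return 0.0f;
    return pdfA / (pdfA + pdfB);
}
```

Note that the two weights for the same sample direction always sum to one.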
  • Heterogeneous Participating Media

    A heterogeneous volume is a volumetric medium whose properties (such as density) vary spatially. The first challenge we encountered was choosing a volume format. For simplicity, we used the grid-based Mitsuba .vol format and converted from VDB files using Mitsuba's converter. The second challenge was implementing the feature itself in shapes/grid.cpp, including reading the file, interpolating densities, and estimating transmittance using ratio tracking.

    Render with heterogeneous volumes
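Ratio tracking can be sketched as follows. The density callable stands in for the trilinear grid lookup, and the seed-based interface is an assumption for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <random>

// Ratio-tracking transmittance estimator: take exponential free-flight
// steps using a majorant of the density, and at each tentative collision
// multiply by (1 - density/majorant). The estimate is unbiased in
// expectation, unlike simple delta-tracking with a binary outcome.
float ratioTracking(const std::function<float(float)>& density,
                    float majorant, float tMax, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float t = 0.0f, transmittance = 1.0f;
    while (true) {
        t -= std::log(1.0f - uni(rng)) / majorant; // exponential step
        if (t >= tMax) break;
        transmittance *= 1.0f - density(t) / majorant;
    }
    return transmittance;
}
```

Two sanity checks: a vacuum (zero density) is fully transparent, and a medium at the majorant density along a very long path is fully opaque.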
  • Disney BSDF [Raw Image Viewer]

    Our ultimate headache: here comes the most challenging and time-consuming feature yet. The principled Disney BSDF is a versatile, physically-based shading model that combines multiple reflection types into a single, artist-friendly (but not developer-friendly) framework for realistic materials, and it's the only BSDF we used for our entry. We implemented it following this document in bsdfs/disney.cpp. The BSDF alone consists of five different lobes:
    - Diffuse: captures the base diffusive color of the surface.
    - Metallic: features major specular highlights.
    - Glass (Rough Dielectric): handles transmission.
    - Clearcoat: models the heavy tails of the specularity.
    - Sheen: addresses retroreflection.
    Each lobe comes with complex mathematics, where even small conditions must be handled carefully—for example, ensuring wi and wo are on the correct hemisphere (and not accidentally swapping them). Among them, the glass lobe was the most painful to implement, as it also requires handling transmission. To validate our implementation, we built an interactive viewer that allows us to test individual lobes with different parameter settings.
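As a taste of the mathematics involved, here is the GTR1 normal distribution Burley proposed for the clearcoat lobe, which produces the heavy specular tails mentioned above. This is the published formula, not a verbatim excerpt from our disney.cpp:

```cpp
#include <cassert>
#include <cmath>

// GTR1 (Generalized Trowbridge-Reitz, gamma = 1): the microfacet normal
// distribution of the Disney clearcoat lobe. cosThetaH is the cosine
// between the half-vector and the normal; alpha in (0, 1] is roughness.
// Its heavier-than-GGX tails give clearcoat its wide specular glow.
float gtr1(float cosThetaH, float alpha) {
    const float PI = 3.14159265358979f;
    if (alpha >= 1.0f) return 1.0f / PI; // limit case: uniform distribution
    float a2 = alpha * alpha;
    float t = 1.0f + (a2 - 1.0f) * cosThetaH * cosThetaH;
    return (a2 - 1.0f) / (PI * std::log(a2) * t);
}
```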

    Render with the Disney BSDF

Process

We began with a sketch of an interior scene, centered around a warm beam of light streaming through a window and guiding the viewer's gaze toward the main character: a sewing machine. To reinforce the theme, we filled the space with a variety of background elements, while surrounding household objects were given an antique character to evoke a sense of nostalgia.


(a) Our rough, childlike early sketch of the scene; (b) the base layout of our scene

Love
The challenge was how to visually convey love, and atmosphere plays a crucial role in shaping that feeling. We aimed to evoke warmth and nostalgia—the comforting sensation of returning to childhood memories. Color choice therefore became essential. To achieve this mood, we selected a warm golden-hour palette and implemented a 3000K blackbody light for the key illumination in textures/blackbody.cpp.
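The spectrum behind a 3000 K light follows Planck's law, which a blackbody texture like ours evaluates per wavelength. A minimal version of the formula (not our textures/blackbody.cpp code, which must also integrate against color-matching functions):

```cpp
#include <cassert>
#include <cmath>

// Planck's law: spectral radiance of a blackbody at temperature T (Kelvin)
// and wavelength lambda (meters). At 3000 K the peak sits in the near
// infrared (~966 nm by Wien's law), so the visible portion is dominated
// by its red end -- hence the warm, golden tint of the key light.
double planck(double lambda, double T) {
    const double h = 6.62607015e-34;  // Planck constant
    const double c = 2.99792458e8;    // speed of light
    const double kB = 1.380649e-23;   // Boltzmann constant
    return (2.0 * h * c * c) / std::pow(lambda, 5)
         / (std::exp(h * c / (lambda * kB * T)) - 1.0);
}
```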
The scene is primarily lit by a strong spotlight outside the window, acting as the key light, supported by softer area lights inside the room and a neutral gray environment map. We wanted the key light to be sharp and highly focused, casting strong, directional light onto the floor. However, increasing its intensity quickly led to overexposure across the scene. To address this, we introduced a per-light bounce limit using the lightBounce parameter. By setting the bounce limit of the main spotlight to zero, we ensured it contributes only direct illumination. To preserve realistic indirect lighting, we duplicated the key light with the same position and direction but much lower intensity and unlimited bounces. This allowed the spotlight to shape the scene without overwhelming it.
Comparison: key-light bounce limit of 0 vs. 1024
Finally, to recreate the “god ray” effect and enhance the emotional atmosphere, we placed a cube of homogeneous volume with a low density scale into the scene. This subtle volumetric scattering adds depth and gives the lighting a softer, more ethereal quality.
Material detail

Volumetric light beams entering through the window

Thread
This section highlights our central theme—“tailor”—through a diverse collection of fabric materials. Our scene features a wide range of textiles, from everyday materials such as cotton, denim, and wool to high-fashion fabrics like satin and velvet. Capturing the visual richness of these materials was essential to conveying craftsmanship and texture.
a) Cotton
b) Denim
c) Wool
d) Satin
e) Velvet
To handle this variety, we implemented the Disney BSDF, which closely mirrors Blender's Principled BSDF and allows for a one-to-one mapping of material parameters. This made it possible to represent different fabrics using intuitive controls—for example, rougher diffuse responses for cotton and wool, stronger specular highlights for satin, and softer sheen effects for velvet—within a single, unified shading model.


Pieces of fabric scattered throughout the scene

One major challenge was that these rich materials often relied on complex and deeply nested shading graphs in Blender, which our exporter could not fully support. As a result, a significant portion of our time was spent simplifying node graphs and baking textures into exportable formats. Despite these technical hurdles, working through Blender's limitations became an enjoyable learning experience, giving us hands-on exposure to cloth simulation, UV mapping, and material authoring workflows.
…and Pixels
With all the ingredients prepared, it was finally time to cook the main dish. Our initial concept was rooted in realism, but as the scene evolved, we began to question whether faithfully reproducing ordinary reality was truly necessary. Given the creative freedom of rendering, we chose instead to reshape it—introducing a fairytale-like atmosphere where clouds drift across the ground, an element borrowed more from dreams than the real world.
This artistic direction came with technical challenges. The scene contains many high-bounce elements, particularly volumes, which made rendering with a vanilla path tracer both slow and noisy. To address this, we combined a Halton sampler with a MIS path tracer, significantly reducing variance and improving convergence. To further clean up residual noise, we applied a denoiser using auxiliary albedo and normal channels. Because the image was already relatively stable, the denoiser preserved fine details while delivering a cleaner, noise-free result. We also applied a bloom effect to enhance the scene's aesthetic.
Comparison: noisy render vs. post-processed result
And finally, the moment we had all been waiting for—the dish is served. What emerges is a scene filled with joy, warmth, passion, and a touch of surrealism: Love, Thread, and Pixels.

Acknowledgments

We would like to thank all the lecturers, teaching assistants, and tutors who organized and supported this wonderful course. In particular, we are grateful to Pascal for sharing valuable tips on creating visually appealing scenes, our tutor Nils for guiding us during tutorials, Aron for providing feedback on our idea, and Ben for helping us debug the competition code. Finally, we also want to acknowledge ourselves (Minh and Mengzhu) for our hard work and dedication throughout the challenge.

References

Literature

Assets