Delio Vicini
deliovicini.bsky.social
Senior research scientist @ Google
What an amazing release 👏 so many new and immensely useful features! 🤩
Dr.Jit + Mitsuba just added support for fused neural networks, hash grids, and function freezing to eliminate tracing overheads. This significantly accelerates optimization and real-time workloads, and enables custom Instant NGP and neural material/radiosity/path guiding projects. What will you do with it?
August 7, 2025 at 11:22 AM
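The hash grids mentioned above follow the Instant NGP recipe: trainable feature tables indexed by a spatial hash of grid coordinates, with linear interpolation between the surrounding corners. Here is a minimal single-level 2D sketch in NumPy — illustrative only; the names, table size, and resolution are assumptions for this example, and this is not Dr.Jit's API:

```python
import numpy as np

# Hash constants in the spirit of Instant NGP's spatial hash.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(ix, iy, table_size):
    # Map integer 2D grid coordinates to a slot in the feature table.
    h = (ix.astype(np.uint64) * PRIMES[0]) ^ (iy.astype(np.uint64) * PRIMES[1])
    return (h % np.uint64(table_size)).astype(np.int64)

def hash_grid_encode(pos, table, resolution):
    # pos: (N, 2) points in [0, 1)^2; table: (T, F) trainable features.
    table_size = table.shape[0]
    scaled = pos * resolution
    i0 = np.floor(scaled).astype(np.int64)
    w = scaled - i0                      # bilinear interpolation weights
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            idx = hash_coords(i0[:, 0] + dx, i0[:, 1] + dy, table_size)
            wc = (w[:, 0] if dx else 1 - w[:, 0]) * \
                 (w[:, 1] if dy else 1 - w[:, 1])
            out = out + wc[:, None] * table[idx]
    return out                           # (N, F) interpolated features

# Toy usage: a 1024-entry table of 2D features at grid resolution 16.
rng = np.random.default_rng(0)
table = rng.standard_normal((1 << 10, 2))   # trainable in a real system
features = hash_grid_encode(rng.random((5, 2)), table, 16.0)
```

In a real pipeline the table entries are optimized by gradient descent together with a small MLP that consumes the concatenated per-level features.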
Reposted by Delio Vicini
Rendering nerds! Check out our latest work, "Vector-Valued Monte Carlo Integration Using Ratio Control Variates", which just received the Best Paper Award at SIGGRAPH 2025. The paper presents a method that reduces the variance of a wide range of rendering and differentiable rendering tasks at negligible cost.
June 14, 2025 at 5:26 PM
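For intuition, a toy 1D version of the ratio control variate idea — a didactic sketch with made-up integrands, not the paper's vector-valued estimator: given a control g whose integral G is known in closed form, estimate F = ∫f by scaling G with the ratio of sampled sums, which pays off when f is nearly proportional to g:

```python
import numpy as np

def plain_mc(f, x):
    # Standard Monte Carlo estimate of F = int_0^1 f(x) dx.
    return f(x).mean()

def ratio_cv(f, g, G, x):
    # Ratio control variate: F ~= G * sum(f(x_i)) / sum(g(x_i)),
    # where G = int_0^1 g(x) dx is known analytically.
    return G * f(x).sum() / g(x).sum()

# Integrand and a cheap, strongly correlated control (both invented).
f = lambda x: 2.0 * x + 0.05 * np.sin(40.0 * x)
g = lambda x: 2.0 * x + 1e-3
G = 1.0 + 1e-3                        # exact integral of g over [0, 1]

rng = np.random.default_rng(0)
results = []
for _ in range(200):                  # compare both estimators per trial
    x = rng.random(64)
    results.append((plain_mc(f, x), ratio_cv(f, g, G, x)))
std_plain, std_ratio = np.std(results, axis=0)
```

Because the dominant term of f cancels in the ratio, the ratio estimator's standard deviation here is an order of magnitude below plain Monte Carlo, while both converge to the same integral.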
Reposted by Delio Vicini
The latest development version of Dr.Jit now provides built-in support for evaluating and training MLPs (including fusing them into rendering workloads). They compile to efficient Tensor Core operations via NVIDIA's Cooperative Vector extension. Details: drjit.readthedocs.io/en/latest/nn...
June 1, 2025 at 2:04 AM
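What gets fused is, at its core, a chain of matrix multiplies and activations evaluated per sample. A plain NumPy sketch of that forward pass — illustrative only; Dr.Jit compiles the equivalent computation to Tensor Core operations rather than calling NumPy, and the linked docs describe the actual API:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    # Evaluate an MLP: ReLU hidden layers, linear output layer.
    # x: (N, d_in); weights[k]: (d_k, d_{k+1}); biases[k]: (d_{k+1},)
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)
    return x @ weights[-1] + biases[-1]

# A tiny 2 -> 16 -> 3 network with random parameters (made up here;
# in practice these would be trained).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 16)), rng.standard_normal((16, 3))]
biases = [np.zeros(16), np.zeros(3)]
out = mlp_forward(rng.random((8, 2)), weights, biases)
```

Fusing means this whole chain runs inside the renderer's megakernel per sample, instead of materializing activations in memory between layers.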
Reposted by Delio Vicini
🚀 The source code for our #SIGGRAPH2025 paper "Practical Inverse Rendering of Textured and Translucent Appearance" is now available!
🔗 GitHub: github.com/google/pract...
GitHub - google/practical-inverse-rendering-of-textured-and-translucent-appearance: SIGGRAPH 2025 "Practical Inverse Rendering Of Textured And Translucent Appearance"
May 16, 2025 at 10:47 AM
Excited to finally share the amazing work Philippe did with our team at Google!
Inverse rendering has become a standard tool for 3D reconstruction problems. However, recovering high-frequency appearance textures is challenging. In our SIGGRAPH 2025 paper, we propose several techniques to robustly reconstruct complex appearances (e.g., human skin). 1/n
May 12, 2025 at 5:08 PM
3D Gaussian splatting relies on depth-sorting of splats, which is costly and prone to artifacts (e.g., "popping"). In our latest work, "StochasticSplats", we replace sorted alpha blending with stochastic transparency, an unbiased Monte Carlo estimator from the real-time rendering literature.
April 7, 2025 at 7:57 AM
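The core idea fits in a few lines: keep each splat independently with probability equal to its opacity and shade the nearest kept one; the expectation of that estimator equals sorted alpha compositing, so no global sort is needed. A toy single-pixel sketch with made-up splat data — not the StochasticSplats implementation:

```python
import numpy as np

def sorted_alpha_blend(depth, alpha, color):
    # Reference: front-to-back alpha compositing after depth sorting.
    order = np.argsort(depth)
    result, transmittance = 0.0, 1.0
    for i in order:
        result += transmittance * alpha[i] * color[i]
        transmittance *= 1.0 - alpha[i]
    return result

def stochastic_transparency(depth, alpha, color, rng, n_samples):
    # Each sample keeps every splat independently with probability
    # alpha and shades the nearest kept one -- no global sort. The
    # estimator is unbiased: its expectation is the sorted blend.
    total = 0.0
    for _ in range(n_samples):
        kept = rng.random(len(alpha)) < alpha
        if kept.any():
            total += color[np.flatnonzero(kept)[np.argmin(depth[kept])]]
    return total / n_samples

# Three overlapping splats at one pixel (scalar "colors" for brevity).
depth = np.array([0.3, 0.1, 0.7])
alpha = np.array([0.5, 0.25, 0.9])
color = np.array([1.0, 0.2, 0.6])
ref = sorted_alpha_blend(depth, alpha, color)
est = stochastic_transparency(depth, alpha, color,
                              np.random.default_rng(1), 20000)
```

The stochastic estimate converges to the sorted reference as the sample count grows, trading a small amount of noise for order independence.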
Reposted by Delio Vicini
(2/2)

🏆 Our invited speakers for February to April 2025 include:

Dorian Chan, @axelparis.bsky.social, Zhen Xu, Ezgi Ozyilkan, Zhaocheng Liu, @deliovicini.bsky.social, Qi Guo, @niladridutt.bsky.social, Akshat Dave, Ethan Tseng, Ziyang Chen.

👉 For more details:
complightlab.com/outreach
Outreach - Computational Light Laboratory at University College London
January 18, 2025 at 3:44 PM
Reposted by Delio Vicini
We are excited to present a SIGGRAPH Asia paper exploring a new application of inverse rendering to Tomographic Volumetric Additive Manufacturing (TVAM), a new light-based 3D printing technology that can print objects in less than a minute.
November 27, 2024 at 2:12 PM
Super exciting to see these new versions finally being released. It's amazing how far Mitsuba & Dr.Jit have come!
Following over 1.5 years of hard work (w/ @njroussel.bsky.social & @rtabbara.bsky.social), we just released a brand-new version of Dr.Jit (v1.0), my lab's differentiable rendering compiler, along with an updated Mitsuba (v3.6). The list of changes is insanely long; here is what we're most excited about 🧵
November 26, 2024 at 4:05 PM