Aditya Chetan
@justachetan.bsky.social
PhD Student at Cornell. Working in Vision and Graphics. justachetan.github.io
Are fictional maps okay? If yes, The Inheritance Cycle by Christopher Paolini, and also the Throne of Glass series by Sarah J. Maas
August 29, 2025 at 4:58 AM
Happy to get feedback + questions! For more experiments and technical details, check out our paper! 😄
June 10, 2025 at 2:11 PM
We also show improved performance in downstream applications like rendering, collision simulation, and PDE solving.
(n/n)
June 10, 2025 at 2:11 PM
We demonstrate the effectiveness of our method in computing accurate normals and curvatures on a variety of challenging neural SDFs learned on the FamousShape dataset. Our approach achieves a 4x improvement in gradient and mean-curvature accuracy over the baselines.
(6/n)
June 10, 2025 at 2:11 PM
Second, to get smooth gradients directly from autodiff on the network, we propose a fine-tuning approach that can use any smooth gradient operator as supervision to remove the artifacts from the autodiff gradients.
(5/n)
June 10, 2025 at 2:11 PM
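A minimal sketch of the fine-tuning idea from the post above, in PyTorch. The function name, the weight lam, and the exact loss form are assumptions for illustration, not the paper's precise recipe: a frozen copy of the pre-trained field supervises the values, and a smooth gradient operator supervises the autodiff gradients.

```python
import torch

def finetune_step(model, teacher, smooth_grad, opt, x, lam=0.1):
    # Hypothetical step (names and `lam` are guesses, not the paper's
    # exact recipe): distill field values from a frozen copy of the
    # pre-trained network while pushing autodiff gradients toward a
    # smooth gradient target (e.g. the polynomial-fit operator).
    x = x.clone().requires_grad_(True)
    with torch.no_grad():
        y_target = teacher(x)        # frozen pre-fine-tuning field values
        g_target = smooth_grad(x)    # smoothed gradient supervision
    y = model(x)
    (g,) = torch.autograd.grad(y.sum(), x, create_graph=True)
    loss = ((y - y_target) ** 2).mean() + lam * ((g - g_target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```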
To mitigate this noise, we propose a two-pronged solution. First, we leverage the classical technique of polynomial fitting: we fit low-order polynomials to the learned signal locally and take derivatives of the fitted polynomial instead of the raw network.
(4/n)
June 10, 2025 at 2:11 PM
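Here is a minimal first-order version of the polynomial-fitting idea above, in PyTorch. The helper name, sampling scheme, and fixed degree are illustrative assumptions; the paper's operator may sample and fit differently and supports higher orders.

```python
import torch

def poly_gradient(f, x0, sigma=1e-2, n=64):
    # Hypothetical helper: fit f(x) ~ a . (x - x0) + b by least squares
    # on random samples near x0; the coefficient `a` is a denoised
    # estimate of grad f(x0), since the fit averages out the
    # high-frequency noise in the learned signal.
    pts = x0 + sigma * torch.randn(n, x0.shape[-1])      # local samples
    with torch.no_grad():
        vals = f(pts).reshape(n, 1)                      # network values
    A = torch.cat([pts - x0, torch.ones(n, 1)], dim=-1)  # design matrix
    coeffs = torch.linalg.lstsq(A, vals).solution        # shape (d+1, 1)
    return coeffs[:-1, 0]                                # gradient estimate
```

For example, poly_gradient(sdf, torch.zeros(3)) would estimate the SDF gradient at the origin without ever differentiating the noisy network itself.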
What causes these artifacts? We note that signals learned by hybrid neural fields exhibit high-frequency noise (see the FFT of a 1D slice of a 2D SDF), which gets amplified when we take derivatives using standard tools like autodiff.
(3/n)
June 10, 2025 at 2:11 PM
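The amplification is the basic Fourier fact that d/dx sin(wx) = w cos(wx): differentiation scales each frequency component by its frequency, so even tiny high-frequency noise can dominate the derivative. A toy numpy illustration (not from the paper):

```python
import numpy as np

x = np.linspace(0, 1, 2048, endpoint=False)
true = x - 0.5                              # toy 1D "SDF" slice, slope 1
noise = 1e-3 * np.sin(2 * np.pi * 200 * x)  # tiny high-frequency error

df = np.gradient(true + noise, x)           # derivative of noisy signal

# d/dx of the noise has amplitude 1e-3 * 2*pi*200 ~ 1.26, larger than
# the true derivative (1.0): each Fourier mode gets scaled by its
# frequency, so the high-frequency error dominates the derivative.
print("true slope:", np.gradient(true, x).mean())  # ~1.0
print("noisy derivative std:", df.std())           # ~0.9 of extra wobble
```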
Hybrid neural fields like Instant NGP have made training neural fields extremely efficient. However, we find that they fall short of being "faithful" representations, exhibiting noisy artifacts when we compute their spatial derivatives with autodiff.
(2/n)
June 10, 2025 at 2:11 PM
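For context, "spatial derivatives with autodiff" in the post above means differentiating the field with respect to its input coordinates. A minimal PyTorch sketch (the tiny MLP is a stand-in, not the paper's model; a hybrid field like Instant NGP is queried and differentiated the same way):

```python
import torch

# Stand-in MLP for a neural SDF f: R^3 -> R.
sdf = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(), torch.nn.Linear(64, 1)
)

x = torch.randn(1024, 3, requires_grad=True)            # query points
(grad,) = torch.autograd.grad(sdf(x).sum(), x)          # spatial gradient
normals = torch.nn.functional.normalize(grad, dim=-1)   # unit normals
```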