Nicholas Sharp
@nmwsharp.bsky.social
3D geometry researcher: graphics, vision, 3D ML, etc | Senior Research Scientist @NVIDIA | polyscope.run and geometry-central.net | running, hockey, baking, & cheesy sci fi | opinions my own | he/him

personal website: nmwsharp.com
Actually, Yousuf did a quick related experiment (though with a different formulation), using @markgillespie64.bsky.social et al.'s Discrete Torsion Connection markjgillespie.com/Research/Dis.... You get fun spiraling log maps! (image attached)
July 2, 2025 at 6:54 PM
Yeah! That diffused frame is "the most regular frame field in the sense of transport along geodesics from the source", so you get out a log map that is as-regular-as-possible, in the same sense.

You could definitely use another frame field, and you'd get "log maps" warped along that field.
July 2, 2025 at 6:54 PM
💻 Website: www.yousufsoliman.com/projects/the...
📗 Paper: www.yousufsoliman.com/projects/dow...
🔬 Code (C++ library): geometry-central.net/surface/algo...
🐍 Code (python bindings): github.com/nmwsharp/pot...

(point cloud code not available yet, let us know if you're interested!)
July 2, 2025 at 6:23 AM
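For orientation, a minimal usage sketch with the existing potpourri3d Vector Heat bindings, which already expose a log-map solver for the earlier Vector Heat Method; the entry point for the new affine variant isn't visible from the truncated links above, so treat the class and method names below as assumptions to verify against the repo.

```python
# Minimal sketch using the documented potpourri3d Vector Heat bindings;
# the affine-variant entry point from this paper may differ -- check the repo.
import potpourri3d as pp3d

V, F = pp3d.read_mesh("mesh.obj")              # vertex positions (V x 3), faces (F x 3)
solver = pp3d.MeshVectorHeatSolver(V, F)       # builds and factors the needed Laplacians
source_vertex = 0                              # hypothetical source index
logmap = solver.compute_log_map(source_vertex) # (V x 2) log-map coordinates about the source
```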
We give two variants of the algorithm, and show use cases for many problems like averaging values on surfaces, decaling, and stroke-aligned parameterization. It even works on point clouds!
July 2, 2025 at 6:23 AM
Instead of the usual VxV scalar Laplacian, or a 2Vx2V vector Laplacian, we build a 3Vx3V homogeneous "affine" Laplacian! This Laplacian enables new algorithms for simpler and more accurate computation of the logarithmic map, since it captures rotation and translation at once.
July 2, 2025 at 6:23 AM
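To make the "3Vx3V homogeneous Laplacian" concrete, here is a schematic numpy/scipy sketch of assembling such a matrix from per-edge 3x3 blocks. It is not the paper's discretization: the weights, connection angles, and tangent-space edge vectors are placeholder inputs, and the sign/orientation conventions are illustrative only.

```python
# Schematic sketch (not the paper's exact construction): each edge (i, j) gets a
# 3x3 homogeneous block that rotates between the two vertex tangent planes AND
# translates by the edge vector, assembled into a 3V x 3V "affine" Laplacian.
import numpy as np
import scipy.sparse as sp

def homogeneous_block(rho_ij, e_ij):
    """Rotation by connection angle rho_ij plus translation by e_ij, as one 3x3 matrix."""
    c, s = np.cos(rho_ij), np.sin(rho_ij)
    return np.array([[c, -s, e_ij[0]],
                     [s,  c, e_ij[1]],
                     [0., 0., 1.]])

def affine_laplacian(n_verts, edges, weights, angles, edge_vecs):
    """Assemble the 3V x 3V matrix from per-edge homogeneous blocks (illustrative conventions)."""
    L = sp.lil_matrix((3 * n_verts, 3 * n_verts))
    I3 = np.eye(3)
    for (i, j), w, rho, e in zip(edges, weights, angles, edge_vecs):
        B_ij = homogeneous_block(rho, e)       # transport from j's frame into i's
        B_ji = np.linalg.inv(B_ij)             # inverse transport, i's frame into j's
        L[3*i:3*i+3, 3*j:3*j+3] -= w * B_ij
        L[3*j:3*j+3, 3*i:3*i+3] -= w * B_ji
        L[3*i:3*i+3, 3*i:3*i+3] += w * I3
        L[3*j:3*j+3, 3*j:3*j+3] += w * I3
    return L.tocsr()
```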
Previously in "The Vector Heat Method", we computed log maps with short-time heat flow, via a vector-valued Laplace matrix rotating between adjacent vertex tangent spaces.

The big new idea is to rotate **and translate** vectors, by working in homogeneous coordinates.
July 2, 2025 at 6:23 AM
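The homogeneous-coordinates trick in one small example (angle and offset below are arbitrary placeholders): a single 3x3 matrix applies the rotation and the translation in one multiply.

```python
import numpy as np

theta = 0.3                           # rotation between two tangent planes (placeholder)
t = np.array([0.5, -0.2])             # translation, e.g. an edge vector in tangent coords

T = np.array([[np.cos(theta), -np.sin(theta), t[0]],
              [np.sin(theta),  np.cos(theta), t[1]],
              [0.,             0.,            1. ]])

p = np.array([1.0, 0.0, 1.0])         # a tangent-plane point in homogeneous coords (x, y, 1)
print(T @ p)                          # rotated AND translated by one linear map
```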
Thank you! There's definitely a low-frequency bias when stochastic preconditioning is enabled, but we only use it for the first ~half of training, then train as usual. The hypothesis is that the bias in the 1st half helps escape bad minima, then we fit high-freqs in the 2nd half. Coarse to fine!
June 8, 2025 at 5:57 AM
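A minimal sketch of the schedule described in this reply, assuming a toy MLP field, placeholder data, and an assumed noise scale (the released code and paper settings may differ): jitter the query points during the first half of training, then train as usual.

```python
import torch

# Toy field and data; the noise scale and its decay within the first half are assumptions.
field = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

def sample_batch(n=1024):
    x = torch.rand(n, 3)                              # placeholder query points in [0,1]^3
    target = x.norm(dim=-1, keepdim=True) - 0.5       # placeholder supervision (a sphere SDF)
    return x, target

num_steps, sigma_max = 10_000, 0.02
for step in range(num_steps):
    x, target = sample_batch()
    frac = step / num_steps
    if frac < 0.5:                                    # stochastic preconditioning, first ~half only
        sigma = sigma_max * (1.0 - 2.0 * frac)        # anneal the jitter toward zero
        x = x + sigma * torch.randn_like(x)
    loss = ((field(x) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```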
Ah yes absolutely. That's a great example, we totally should have cited it!

When we looked around we found mannnnnny "coarse-to-fine"-like schemes appearing in the context of particular problems or architectures. As you say, what most excited us here is having a simple + general option.
June 4, 2025 at 10:14 PM
Thank you for the kind words :) The technique is very much in-the-vein of lots of related ideas in ML, graphics, and elsewhere, but hopefully directly studying it & sharing is useful to the community!
June 4, 2025 at 10:10 PM
We did not try it w/ the Gaussians in this project (we really focused on the "query an Eulerian field" setting, which is not quite how Gaussian rendering works).

There are some very cool projects doing related things in that setting:
- ubc-vision.github.io/3dgs-mcmc/
- diglib.eg.org/items/b8ace7...
June 4, 2025 at 10:06 PM
Tagging @selenaling.bsky.social and @merlin.ninja, who are both on here it turns out! 😁
June 3, 2025 at 1:19 AM
website: research.nvidia.com/labs/toronto...
arxiv: arxiv.org/abs/2505.20473
code: github.com/iszihan/stoc...

Kudos go to Selena Ling who is the lead author of this work, during her internship with us at NVIDIA. Reach out to Selena or myself if you have any questions!
Stochastic Preconditioning for Neural Field Optimization
June 3, 2025 at 12:43 AM
Closing thought: In geometry, half our algorithms are "just" Laplacians/smoothness/heat flow under the hood. In ML, half our techniques are "just" adding noise in the right place. Unsurprisingly, these two tools work great together in this project. I think there's a lot more to do in this vein!
June 3, 2025 at 12:43 AM
Geometric initialization is a commonly-used technique to accelerate SDF field fitting, yet it often results in disastrous artifacts for non-object-centric scenes. Stochastic preconditioning also helps to avoid floaters, both with and without geometric initialization.
June 3, 2025 at 12:43 AM
Neural field training can be sensitive to changes in hyperparameters. Stochastic preconditioning makes training more robust to hyperparameter choices, shown here in a histogram of PSNRs from fitting preconditioned and non-preconditioned fields across a range of hyperparameters.
June 3, 2025 at 12:43 AM
We argue that this is a quick and easy form of coarse-to-fine optimization, applicable to nearly any objective or field representation. It matches or outperforms custom-designed policies and staged coarse-to-fine schemes.
June 3, 2025 at 12:43 AM