Nicholas Sharp
@nmwsharp.bsky.social
3D geometry researcher: graphics, vision, 3D ML, etc | Senior Research Scientist @NVIDIA | polyscope.run and geometry-central.net | running, hockey, baking, & cheesy sci fi | opinions my own | he/him

personal website: nmwsharp.com
Actually, Yousuf did a quick related experiment (though with a different formulation), using @markgillespie64.bsky.social et al.'s Discrete Torsion Connection markjgillespie.com/Research/Dis.... You get fun spiraling log maps! (image attached)
July 2, 2025 at 6:54 PM
We give two variants of the algorithm, and show use cases for many problems like averaging values on surfaces, decaling, and stroke-aligned parameterization. It even works on point clouds!
July 2, 2025 at 6:23 AM
Instead of the usual VxV scalar Laplacian, or a 2Vx2V vector Laplacian, we build a 3Vx3V homogeneous "affine" Laplacian! This Laplacian enables new algorithms for simpler and more accurate computation of the logarithmic map, since it captures rotation and translation at once.
July 2, 2025 at 6:23 AM
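Not the paper's construction, just a rough Python sketch of the kind of object being described: a sparse 3Vx3V matrix assembled from per-edge 3x3 homogeneous blocks. The edge weights w, 2x2 tangent-space rotations R, and tangent-space edge vectors t are assumed given (all hypothetical inputs), and the details surely differ from the actual method.

```python
import numpy as np
import scipy.sparse as sp

def affine_laplacian_sketch(V, edges, w, R, t):
    """Sketch: assemble a 3V x 3V 'affine' Laplacian from homogeneous per-edge blocks.

    V     : number of vertices
    edges : directed vertex pairs (i, j); each undirected edge appears in both directions
    w     : dict (i, j) -> edge weight (e.g. cotangent weight)
    R     : dict (i, j) -> 2x2 rotation taking tangent vectors at j into the frame at i
    t     : dict (i, j) -> 2-vector, the edge from i to j in the tangent plane of i
    """
    rows, cols, vals = [], [], []

    def add_block(r, c, B):
        for a in range(3):
            for b in range(3):
                rows.append(3 * r + a)
                cols.append(3 * c + b)
                vals.append(B[a, b])

    I3 = np.eye(3)
    for (i, j) in edges:
        # Homogeneous transport from j's tangent space to i's:
        # a single 3x3 block that rotates by R[i,j] AND translates by t[i,j].
        P = np.eye(3)
        P[:2, :2] = R[(i, j)]
        P[:2, 2] = t[(i, j)]

        wij = w[(i, j)]
        add_block(i, i, wij * I3)   # diagonal block
        add_block(i, j, -wij * P)   # off-diagonal block transports neighboring data

    return sp.coo_matrix((vals, (rows, cols)), shape=(3 * V, 3 * V)).tocsr()
```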
Previously in "The Vector Heat Method", we computed log maps with short-time heat flow, via a vector-valued Laplace matrix rotating between adjacent vertex tangent spaces.

The big new idea is to rotate **and translate** vectors, by working in homogeneous coordinates.
July 2, 2025 at 6:23 AM
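For concreteness (this is just the standard homogeneous-coordinate trick, not code from the paper): a single 3x3 matrix acting on points written as (x, y, 1) applies a rotation and a translation at once, which a plain 2x2 rotation cannot.

```python
import numpy as np

# A 2D quantity in a vertex tangent plane, written homogeneously as (x, y, 1).
p = np.array([0.3, -0.1, 1.0])

theta = 0.7                    # rotation between adjacent tangent spaces
trans = np.array([1.2, 0.4])   # translation, e.g. an edge vector (made-up numbers)

# One 3x3 homogeneous matrix rotates AND translates in a single linear map.
T = np.array([
    [np.cos(theta), -np.sin(theta), trans[0]],
    [np.sin(theta),  np.cos(theta), trans[1]],
    [0.0,            0.0,           1.0     ],
])

p_transported = T @ p   # still of the form (x', y', 1)
```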
Logarithmic maps are incredibly useful for algorithms on surfaces--they're local 2D coordinates centered at a given source.

Yousuf Soliman and I found a better way to compute log maps w/ fast short-time heat flow in "The Affine Heat Method" presented @ SGP2025 today! 🧵
July 2, 2025 at 6:23 AM
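If you want to try log maps right away, the earlier Vector Heat Method version ships in the potpourri3d Python bindings; a rough usage sketch (API names from memory, so double-check against the current docs):

```python
import potpourri3d as pp3d

# Load any manifold triangle mesh.
V, F = pp3d.read_mesh("bunny.obj")

# Build a Vector Heat Method solver and compute the log map about a source vertex.
solver = pp3d.MeshVectorHeatSolver(V, F)
source_vertex = 0
logmap = solver.compute_log_map(source_vertex)  # |V| x 2 array of local coordinates

# logmap[i] is the 2D position of vertex i in the tangent plane of the source.
```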
Geometric initialization is a commonly used technique to accelerate SDF field fitting, yet it often results in disastrous artifacts for non-object-centric scenes. Stochastic preconditioning also helps to avoid floaters both with and without geometric initialization.
June 3, 2025 at 12:43 AM
Neural field training can be sensitive to changes to hyperparameters. Stochastic preconditioning makes training more robust to hyperparameter choices, shown here in a histogram of PSNRs from fitting preconditioned and non-preconditioned fields across a range of hyperparameters.
June 3, 2025 at 12:43 AM
We argue that this is a quick and easy form of coarse-to-fine optimization, applicable to nearly any objective or field representation. It matches or outperforms custom-designed policies and staged coarse-to-fine schemes.
June 3, 2025 at 12:43 AM
Surprisingly, optimizing this blurred field to fit the objective greatly improves convergence, and in the end we anneal 𝛼 to 0 and are left with an ordinary un-blurred field.
June 3, 2025 at 12:43 AM
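The annealing itself can be as simple as a schedule that decays 𝛼 to zero over training. A hypothetical linear schedule, purely for illustration (the actual schedule may differ):

```python
def alpha_schedule(step, total_steps, alpha_max=0.05):
    """Hypothetical linear decay: start at alpha_max, reach 0 (no blur) at the end."""
    return alpha_max * max(0.0, 1.0 - step / total_steps)
```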
And implementing our method requires changing just a few lines of code!
June 3, 2025 at 12:43 AM
It’s as simple as perturbing query locations according to a normal distribution. This produces a stochastic estimate of the blurred neural field, with the level of blur proportional to a scale parameter 𝛼.
June 3, 2025 at 12:43 AM
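Not the authors' code, just a minimal PyTorch sketch of the idea as stated: offset each query location by Gaussian noise of scale 𝛼 before evaluating the field, which in expectation evaluates a Gaussian-blurred version of the field.

```python
import torch

def stochastically_preconditioned_query(field, x, alpha):
    """Evaluate `field` at Gaussian-perturbed query points.

    field : callable mapping an (N, d) tensor of points to field values
    x     : (N, d) tensor of query locations
    alpha : blur scale; each query is offset by noise ~ N(0, alpha^2 I)

    Averaged over the noise, this estimates the field blurred by a Gaussian
    of standard deviation alpha.
    """
    noise = alpha * torch.randn_like(x)
    return field(x + noise)

# Use these perturbed queries in the training loss, then anneal alpha toward 0
# to recover the ordinary un-blurred field.
```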
Selena's #Siggraph25 work found a simple, nearly one-line change that greatly eases neural field optimization for a wide variety of existing representations.

“Stochastic Preconditioning for Neural Field Optimization” by Selena Ling, Merlin Nimier-David, Alec Jacobson, & me.
June 3, 2025 at 12:43 AM