SniperJake945
@tearsofjake.bsky.social
Computer Graphics Investigator (CGI)
Pinned
New at #SIGGRAPH2025:

Can we make Perlin Noise stretch along some underlying vector field? Well it turns out it's possible with two simple additions to the original method! No need for advection or convolutions.

Find the paper and implementations here:
github.com/jakericedesi...
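The two additions aren't spelled out in the post, so the sketch below is only an illustration of the general idea, not the paper's method: plain single-octave 2D Perlin noise where each lattice gradient is blended toward a user-supplied steering field before the corner dot products (`steer` and `alpha` are made-up names for this sketch).

```python
import numpy as np

# Illustration only -- NOT the paper's two additions, just vanilla 2D Perlin
# noise with each lattice gradient nudged toward a steering field.

def fade(t):
    # Perlin's quintic interpolant
    return t * t * t * (t * (t * 6 - 15) + 10)

def hashed_gradient(ix, iy, seed=0):
    # cheap integer hash -> pseudo-random unit gradient per lattice corner
    h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
    angle = ((h * 2654435761) % (2**32)) / 2**32 * 2.0 * np.pi
    return np.array([np.cos(angle), np.sin(angle)])

def steered_perlin(x, y, steer, alpha=0.7, seed=0):
    """One octave of 2D Perlin noise at (x, y).
    steer(ix, iy) -> unit 2-vector field; alpha blends gradients toward it."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    u, v = fade(x - x0), fade(y - y0)

    def corner(ix, iy):
        g = hashed_gradient(ix, iy, seed)
        s = np.asarray(steer(ix, iy), dtype=float)
        g = (1.0 - alpha) * g + alpha * s          # steer the gradient
        g /= np.linalg.norm(g) + 1e-9
        return g[0] * (x - ix) + g[1] * (y - iy)   # dot with corner offset

    nx0 = (1 - u) * corner(x0, y0)     + u * corner(x0 + 1, y0)
    nx1 = (1 - u) * corner(x0, y0 + 1) + u * corner(x0 + 1, y0 + 1)
    return (1 - v) * nx0 + v * nx1
```

For example, `steered_perlin(1.3, 2.7, steer=lambda ix, iy: (1.0, 0.0))` pulls the noise along +X; for the real formulation, see the paper and the repo implementations.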
Reposted by SniperJake945
My MOPs course on Houdini.School is now free! If you want a start-to-finish course on how (almost) everything works, plus the math background, you can watch it all here: www.houdini.school/courses/mops...
MOPs: Motion Operators for Houdini
This course provides an overview of the MOPs workflow and how it integrates with the rest of Houdini, shows concrete examples of how MOPs can be used in both motion graphics and visual effects workflo...
www.houdini.school
January 29, 2026 at 8:41 PM
Voronoi-based NeRF training isn't going incredibly well... 😂 20k iterations with 80k Voronoi sites take about 10 hours to train. And it still looks dog water.

Definitely need more sites and a better approach to accelerating sampling...
January 25, 2026 at 9:08 PM
Voronoi implicit, but this time in 3D. It's not fully NeRF mode yet, as I'm not doing any kind of sampling along rays; this is just trying to learn volumetric data.

30,000 Voronoi sites. Probably not enough for the pighead, but it's just a fun first test.
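The architecture isn't described in the post; as a hedged guess at what a learnable Voronoi implicit can look like, here's a minimal PyTorch sketch: a softmax over negative distances to learnable sites blends per-site values into a differentiable volumetric field. It's brute force over all sites, which is part of why training at this scale gets slow.

```python
import torch

# Minimal sketch (an assumption, not the exact model from the post): a "soft"
# Voronoi implicit where each query point takes a softmax-weighted blend of
# per-site values, so everything is differentiable w.r.t. sites and values.

class SoftVoronoiImplicit(torch.nn.Module):
    def __init__(self, n_sites=30_000, dim=3, temperature=50.0):
        super().__init__()
        self.sites = torch.nn.Parameter(torch.rand(n_sites, dim))  # site positions
        self.values = torch.nn.Parameter(torch.zeros(n_sites))     # per-site density
        self.temperature = temperature

    def forward(self, x):                      # x: (B, 3) query points
        d = torch.cdist(x, self.sites)         # (B, n_sites), brute force
        w = torch.softmax(-self.temperature * d, dim=-1)
        return (w * self.values).sum(dim=-1)   # blended density per query
```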
January 17, 2026 at 1:34 AM
Lebronsketball 2
January 10, 2026 at 1:48 AM
Lebronsketball 1
January 10, 2026 at 1:43 AM
*morphs your cat*
January 10, 2026 at 12:34 AM
Gaboronoi (top) vs anisotropic Voronoi (bottom), with 5000 sites. I feel like the anisotropic Voronoi's artifacts are more aesthetically pleasing. Gaboronoi is faster to train by a lot!

I've compiled all of my recent Voronoi experiments into a Colab:
colab.research.google.com/github/jaker...
January 4, 2026 at 6:47 AM
I'm once again making neural implicits of my cats. This time we're back to the Voronoi, Gabor noise style.

We weight the result of the softmax at any site i by sin(F_i*(x_i-x_j)•u_i), where F_i is a learned frequency and u_i is a learned anisotropy direction. I call it Gaboroni.
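In PyTorch that weighting might look like the sketch below (my reading: the offset term is query point minus site position; the temperature and names are mine):

```python
import torch

# Sketch of the described Gabor-style weighting. Each site i has a learned
# frequency F_i and anisotropy direction u_i; its softmax weight is modulated
# by sin(F_i * (x - c_i) . u_i) before blending per-site values.

def gaboroni(x, sites, values, freqs, dirs, temperature=50.0):
    # x: (B, 3), sites: (N, 3), values/freqs: (N,), dirs: (N, 3)
    offset = x[:, None, :] - sites[None, :, :]           # (B, N, 3)
    d = offset.norm(dim=-1)                              # (B, N)
    u = torch.nn.functional.normalize(dirs, dim=-1)      # keep directions unit length
    phase = freqs[None, :] * (offset * u[None, :, :]).sum(dim=-1)
    w = torch.softmax(-temperature * d, dim=-1) * torch.sin(phase)
    return (w * values[None, :]).sum(dim=-1)
```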
January 4, 2026 at 1:31 AM
Wanted to make a neural implicit that's just layers of anisotropic simplex noise. Turns out it works pretty well. With 24 layers of simplex noise, each with a 96x96 texture of anisotropy data, we can get this kind of result!

It's not at all a good compression method but it's fun and cool :)
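Loose sketch of that setup, with two assumptions that aren't stated in the post: the 96x96 anisotropy data is a grid of 2x2 matrices that warp coordinates per layer, and `toy_noise` stands in for a real differentiable simplex-noise primitive.

```python
import torch

def toy_noise(p):
    # placeholder periodic pattern, NOT real simplex noise
    return torch.sin(p[:, 0] * 6.2832) * torch.cos(p[:, 1] * 6.2832)

class AnisoNoiseStack(torch.nn.Module):
    def __init__(self, layers=24, res=96):
        super().__init__()
        # per-layer 96x96 grids of 2x2 matrices, stored as 4 channels
        self.aniso = torch.nn.Parameter(torch.randn(layers, 4, res, res) * 0.1)
        self.gain = torch.nn.Parameter(torch.ones(layers))

    def forward(self, x):                          # x: (B, 2) in [-1, 1]^2
        grid = x.view(1, -1, 1, 2)                 # sample every grid at the queries
        out = torch.zeros(x.shape[0], device=x.device)
        for l in range(self.aniso.shape[0]):
            m = torch.nn.functional.grid_sample(
                self.aniso[l:l + 1], grid, align_corners=True)  # (1, 4, B, 1)
            m = m[0, :, :, 0].T.reshape(-1, 2, 2)               # (B, 2, 2) per query
            warped = torch.einsum('bij,bj->bi', m, x)           # anisotropic warp
            out = out + self.gain[l] * toy_noise(warped)        # sum the layers
        return out
```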
December 31, 2025 at 6:57 AM
I had the idea to incorporate anisotropy into the recent Spherical Voronoi paper. And when we apply their ideas to Euclidean problems (not spherical), the results are pretty great when anisotropy is used. This is 3000 anisotropic sites vs 3000 isotropic sites, same learning rates for both.
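One guess at what "anisotropic sites" can mean in the Euclidean setting (not necessarily the paper's formulation): give each site a learned 2x2 matrix and measure offsets through it, so cells can stretch and shear.

```python
import torch

# Hedged sketch: per-site anisotropic distances. mats holds a learned 2x2
# matrix per site; the cell of a query point is the argmin over sites.

def anisotropic_voronoi_distances(x, sites, mats):
    # x: (B, 2), sites: (N, 2), mats: (N, 2, 2)
    offset = x[:, None, :] - sites[None, :, :]             # (B, N, 2)
    warped = torch.einsum('nij,bnj->bni', mats, offset)    # per-site metric warp
    return warped.norm(dim=-1)                             # (B, N); argmin -> cell id
```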
December 26, 2025 at 10:49 PM
Shockingly annoying to find the points of intersection between a plane and an AABB. Even when I know the plane passes through the center of the box.

If anyone has a simple method for finding the exact distance to a plane clipped by a bounding box I'd be eternally grateful.
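One workable (if not elegant) route, sketched below in NumPy: intersect the plane with each of the box's 12 edges; every edge whose endpoints land on opposite sides of the plane contributes one vertex of the clipped polygon, and the distance to the clipped plane then reduces to the distance to that convex polygon.

```python
import numpy as np

def plane_aabb_intersection(n, d, bmin, bmax, eps=1e-9):
    """Points where the plane {x : dot(n, x) + d = 0} crosses the box [bmin, bmax]."""
    n = np.asarray(n, float)
    bmin, bmax = np.asarray(bmin, float), np.asarray(bmax, float)
    # 8 corners: bit i of c picks min or max along axis i
    corners = np.array([[bmax[i] if (c >> i) & 1 else bmin[i] for i in range(3)]
                        for c in range(8)])
    # 12 edges: corner pairs differing in exactly one bit
    edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
             if bin(a ^ b).count("1") == 1]
    points = []
    for a, b in edges:
        pa, pb = corners[a], corners[b]
        da, db = np.dot(n, pa) + d, np.dot(n, pb) + d   # signed plane values
        if da * db < -eps:                              # endpoints straddle the plane
            t = da / (da - db)                          # where the edge hits the plane
            points.append(pa + t * (pb - pa))
    return np.array(points)
```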
December 25, 2025 at 8:15 PM
Reposted by SniperJake945
Computing the exact bijection of the optimal transport (OT) problem between very large point sets is completely intractable…

In our SIGGRAPH Asia 2025 paper, “BSP-OT: Sparse transport plans between discrete measures in log-linear time”, we get one with typically 1% error in a few seconds on CPU!
October 1, 2025 at 1:55 PM
For the snow on the hedge maze in Zootopia 2, we cast a ray upwards from each leaf, and if it didn't collide with anything we instanced a snow chunk onto that leaf. We made a few variants of the snow chunks to add extra variation.
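A generic sketch of that scattering pass (the names and the `ray_hits_anything` callable are mine, standing in for whatever the scene actually provides):

```python
import numpy as np

def scatter_snow(leaf_positions, ray_hits_anything, n_variants=4, seed=0):
    """leaf_positions: (N, 3) array of leaf points.
    ray_hits_anything: callable(origin, direction) -> bool, supplied by the scene."""
    rng = np.random.default_rng(seed)
    up = np.array([0.0, 1.0, 0.0])
    instances = []
    for p in leaf_positions:
        if not ray_hits_anything(p, up):       # open to the sky, so snow lands here
            instances.append((p, int(rng.integers(n_variants))))  # (position, chunk variant)
    return instances
```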
December 12, 2025 at 7:22 PM
my FX work alone required at least 1 quadrillion of the 3.7 quadrillion rays traced on Zootopia 2. This is FACTUAL and REAL. If you are a journalist, please cite this in your article.
Using some very approximate guesstimate math, I think we traced around 3.7 quadrillion individual rays to produce the final movie. This does not include intermediate / WIP renders. I think this is astronomy-accurate, meaning it's in roughly the correct order of magnitude.

(2/2)
November 25, 2025 at 6:47 AM
my GOAT 🐐🐐🐐🐐
November 15, 2025 at 1:27 AM
Reposted by SniperJake945
At 10am this morning at SIGGRAPH, my teammate Nathan Zeichner will be presenting our talk about a texture streaming system for Ptex on the GPU that we built for our in-house interactive pre-viz GPU path tracer; the talk is by Nathan, Mark Lee, and me.

s2025.conference-schedule.org/presentation...
Presentation - SIGGRAPH 2025 Conference Schedule
s2025.conference-schedule.org
August 10, 2025 at 4:16 PM
New at #SIGGRAPH2025:

Can we make Perlin Noise stretch along some underlying vector field? Well it turns out it's possible with two simple additions to the original method! No need for advection or convolutions.

Find the paper and implementations here:
github.com/jakericedesi...
July 29, 2025 at 12:16 AM
Reposted by SniperJake945
One of my coworkers, @tearsofjake.bsky.social, has a talk at SIGGRAPH this year about this really cool steerable Perlin noise technique that was used on Moana 2. He's just posted some handy reference implementations for Houdini, Unity, Godot, and Blender; check it out!

github.com/jakericedesi...
GitHub - jakericedesigns/SteerablePerlinNoise: Implementations of "Steerable Perlin Noise" as presented at Siggraph 2025
Implementations of "Steerable Perlin Noise" as presented at Siggraph 2025 - jakericedesigns/SteerablePerlinNoise
github.com
July 28, 2025 at 7:01 PM
Reposted by SniperJake945
Time for another long blog post... and today it's all about how we handle the temporal aspect of animation data. And more specifically, how to approach things from a signal processing perspective, covering up-sampling, down-sampling, and everything in-between.

theorangeduck.com/page/filteri...
April 4, 2025 at 3:22 PM
Reposted by SniperJake945
A thread on our new, also awesome, Siggraph paper "Linear-Time Transport with Rectified Flows", by K. Do, @dcoeurjo.bsky.social, P. Memari and me.
Paper here: perso.liris.cnrs.fr/nicolas.bonn...
Results in video: youtu.be/EcfKnSe6mhk
Linear-Time Transport with Rectified Flows
YouTube video by David Coeurjolly
youtu.be
May 5, 2025 at 7:52 PM
How do I get the discover page to align more with my interests? I've tried following a bunch of people, but it still kinda just shows stuff based off whatever categories I clicked when I signed up...
March 16, 2025 at 7:06 PM
Reposted by SniperJake945
I know it's a bit uncool to LinkedIn-post on here, but we're hiring at work. I've been looking at quite a lot of showreels, so I wrote up some tips for making a great FX reel #vfx
www.linkedin.com/pulse/creati...
Creating a stand-out Effects reel
As a VFX artist, your showreel is one of the most important tools in the job search process. Whether you’re trying to break into the industry or looking for a new role, having a stand-out showreel can...
www.linkedin.com
February 6, 2025 at 9:12 AM
Distances to Lp Voronoi edges :)

shadertoy.com/view/MXVyz1
January 4, 2025 at 12:23 AM
Reposted by SniperJake945
So what is the tiniest possible fluid simulation?

Can we get anything interesting from a single cell?

For this we'll be using a standard MAC grid, which means we'll represent horizontal velocities (red) on the vertical edges and vertical velocities (green) on the horizontal edges of each cell.
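For a single cell that layout boils down to four face velocities; a small sketch of the bookkeeping (my illustration, not necessarily where the thread goes next):

```python
# Single-cell MAC grid: u lives on the two vertical edges (left/right faces),
# v on the two horizontal edges (bottom/top faces).

def cell_divergence(u_left, u_right, v_bottom, v_top, dx=1.0):
    # net outflow of the cell
    return (u_right - u_left + v_top - v_bottom) / dx

def project_single_cell(u_left, u_right, v_bottom, v_top, dx=1.0):
    # one-cell pressure projection: spread the excess outflow evenly
    # over the four faces so the corrected divergence is exactly zero
    d = cell_divergence(u_left, u_right, v_bottom, v_top, dx) * dx / 4.0
    return u_left + d, u_right - d, v_bottom + d, v_top - d
```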
December 9, 2024 at 12:27 AM
Distances to the boundary of Voronoi in L4.

shadertoy.com/view/lfKfWV

Hopefully it doesn't crash WebGL on your device! :)
December 18, 2024 at 2:08 AM