Thanks to great collaborators: Brian @hrksrkr.bsky.social Kohei @congma.bsky.social Sereno @braphael.bsky.social
Fun back-story: @braphael.bsky.social and I derived most of this model at the bar near an NCI workshop 🥂😅
These locations look like contours of equal height on an elevation map, hence the “topographic map” analogy.
-> Spatial dimensionality reduction! 🚀
We call d(x,y) the "isodepth" - it characterizes spatial gradients ∇f_g
(1) genes have *shared* gradient directions, i.e. each gradient ∇f_g(x,y) is proportional to a shared vector field v(x,y)
(Equivalent to the Jacobian of f being rank-1 everywhere)
(2) vector field v has no "curl", so v=∇d is the gradient of a "spatial potential" d
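To make (1) and (2) concrete: together they say each gene's expression is a 1-D function of a single scalar field, f_g(x,y) = h_g(d(x,y)). A minimal numerical sketch (the specific d and h_g below are toy choices for illustration, not from the paper):

```python
import numpy as np

# Toy isodepth: a scalar "spatial potential" d(x, y) (made-up example)
def d(x, y):
    return x**2 + 0.5 * y

# Each gene g is a 1-D function of isodepth: f_g(x, y) = h_g(d(x, y))
def f(x, y):
    iso = d(x, y)
    return np.array([np.sin(iso), iso**2, np.exp(-iso)])  # 3 toy "genes"

# Numerical Jacobian of f at a point (forward differences)
def jacobian(x, y, eps=1e-6):
    f0 = f(x, y)
    dfdx = (f(x + eps, y) - f0) / eps
    dfdy = (f(x, y + eps) - f0) / eps
    return np.column_stack([dfdx, dfdy])  # shape (n_genes, 2)

J = jacobian(0.3, 0.7)
# Rank-1 check: every row (a gradient ∇f_g) is proportional to v = ∇d,
# so the second singular value is ~0 up to finite-difference error
print(np.linalg.svd(J, compute_uv=False))
```

Here the chain rule gives ∇f_g = h_g'(d) ∇d, so all rows of the Jacobian are scalar multiples of ∇d - exactly the rank-1 condition in (1), with the curl-free field of (2) being v = ∇d by construction.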
Spatial gradients are the gradients ∇f_g of each component (gene)
Unfortunately, high data sparsity means naive estimation of the gradient ∇f_g is very noisy 😱
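A quick simulation of why naive estimation fails: with sparse counts, spot-to-spot differences are dominated by sampling noise, and dividing by a small spacing amplifies it (a toy 1-D example, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# True smooth expression along a 1-D slice (toy choice)
x = np.linspace(0, 1, 200)
true_f = np.exp(0.5 + 1.0 * x)        # mean counts per spot, roughly 1.6-4.5

# Observed counts are sparse Poisson samples of the true mean
counts = rng.poisson(true_f)

# Naive gradient: finite differences of the raw counts
naive_grad = np.gradient(counts.astype(float), x)

true_grad = np.gradient(true_f, x)    # ground-truth gradient
err = np.abs(naive_grad - true_grad).mean()
print(f"mean |error| of naive gradient: {err:.1f} "
      f"(true gradient is only ~{true_grad.mean():.1f})")
```

Differencing Poisson noise of variance ~λ over a spacing of ~0.005 blows the error up by orders of magnitude relative to the true gradient, which is why some form of pooling or smoothing across spots is needed before estimating ∇f_g.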