Teemu Sarapisto
@tsarpf.bsky.social
47 followers 110 following 25 posts
CS/ML PhD research in high-dimensional time-series @helsinki.fi Before: 7y of C++/JS/VR/AR/ML at Varjo, Yle, Reaktor, Automattic ... After dark: synthesizers & 3D gfx
Pinned
tsarpf.bsky.social
Hi, let's try bsky!

New paper: Subsystem Discovery in High-Dimensional Time-Series Using Masked Autoencoders

Code/data/paper: github.com/helsinki-sda...

Presented at the European Conference on Artificial Intelligence 2024

Map graph learned from weather #timeseries, adjacency from 7 engines!
tsarpf.bsky.social
If we train LLMs to use browsers, I doubt it makes them 10x more useful, because they are dumb af 😄

My guess is that most progress in "agents" will be driven more by human-developed, LLM-friendly APIs than by improvements in LLMs' generalization capabilities. No exponential speed-ups there.
tsarpf.bsky.social
Good point, a sloppy reply from me.

I just meant that language is not the end-all tool for everything. Sure, LLMs can be trained to use tools like calculators and browsers, just as we do. But so far we need to develop those tools and train the LLMs to use them.
tsarpf.bsky.social
What I expect is that scenarios that are particularly economically valuable will get neat automated solutions.

Either via 1000 people annotating data for a year, or a bunch of scientists coming up with neat self-supervised losses for it 😆
tsarpf.bsky.social
It assumes algo dev will maintain the exponential progress.

IMO the (multimodal) LLM paradigm of handling everything in a single model will not scale. Language is a bad abstraction for 1) math (LLMs can't multiply) 2) physical things (where is my cleaning robot?)

End of the sigmoid for data/compute.
tsarpf.bsky.social
Nice to find you here then!

That'll be a difficult read given my limited background in dynamics/control/RL, but it's on the TODO list.

Coming from ML, Neural ODEs got me hooked on dynamics and state spaces. Also variational math x optimal control is 🔥

Now learning the basics from the book by Brunton & Kutz.
tsarpf.bsky.social
www.foundationalpapersincomplexityscience.org/tables-of-co...

has a nice overview of the papers.

For example
- The 1943 McCulloch & Pitts paper (neural nets)
- Landauer's principle (reversible computing)
- Info theory (Shannon's og paper)
- State space models (Kalman)
...Turing's AI, Nash equilibrium...
tsarpf.bsky.social
Just received the first volume, and damn, clearly a ton of effort was put into it!

There's a ~90 page intro to "foundations of complexity science" (which is also sold separately).

The (super interesting) papers each have a 5-10 page intro with historical context, and are full of annotations ❤️
tsarpf.bsky.social
I guess one could call this moving the goalpost so far that nothing will ever suffice 😁
tsarpf.bsky.social
"If intelligence lies in the process of acquiring new skills, there is no task X that skill at X demonstrates intelligence"
tsarpf.bsky.social
Ok, heh, well, part of the reason for the ∞ in there is that the simulator got stuck in a non-chaotic loop due to integration errors. Grabbing the samples from before the looping just gives a more uniform distribution.
tsarpf.bsky.social
Out of curiosity, I was scatter-plotting a double pendulum's x1/y1/x2/y2 positions in Euclidean coordinates against each other... Turns out they are sooo aesthetic 😍
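Roughly what I was doing, if anyone wants to reproduce it (a sketch using the standard equal-mass double-pendulum equations; the lengths, masses, initial angles and which pair gets plotted are just illustrative choices, not my exact setup):

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Standard equal-mass double pendulum; angles measured from the vertical.
G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0   # illustrative values

def derivs(t, s):
    th1, w1, th2, w2 = s
    d = th2 - th1
    den1 = (M1 + M2) * L1 - M2 * L1 * np.cos(d) ** 2
    dw1 = (M2 * L1 * w1**2 * np.sin(d) * np.cos(d)
           + M2 * G * np.sin(th2) * np.cos(d)
           + M2 * L2 * w2**2 * np.sin(d)
           - (M1 + M2) * G * np.sin(th1)) / den1
    den2 = (L2 / L1) * den1
    dw2 = (-M2 * L2 * w2**2 * np.sin(d) * np.cos(d)
           + (M1 + M2) * G * np.sin(th1) * np.cos(d)
           - (M1 + M2) * L1 * w1**2 * np.sin(d)
           - (M1 + M2) * G * np.sin(th2)) / den2
    return [w1, dw1, w2, dw2]

t = np.linspace(0, 120, 20_000)
s0 = [np.radians(120), 0.0, np.radians(-10), 0.0]            # made-up initial state
sol = solve_ivp(derivs, (t[0], t[-1]), s0, t_eval=t, rtol=1e-9, atol=1e-9)
th1, th2 = sol.y[0], sol.y[2]

# Angles -> Cartesian positions of the two bobs.
x1, y1 = L1 * np.sin(th1), -L1 * np.cos(th1)
x2, y2 = x1 + L2 * np.sin(th2), y1 - L2 * np.cos(th2)

# Scatter one coordinate against another; try the other pairs too.
plt.scatter(x1, y2, s=0.2, alpha=0.3)
plt.gca().set_aspect("equal")
plt.show()
```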
tsarpf.bsky.social
1) In something like the 37th layer, the model is (weighted-)summing vectors that have already been combined with every other vector in the input sequence 36 times, plus the effect of residual connections and multiple heads.
2) The tokens are (usually) not even full words to begin with. 2/2
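To make point 1) concrete, here's a toy stack of attention layers (single head, random fixed weights, no layer norm or MLP, nothing resembling a trained model): nudge any input position and position 0's final vector moves, because every layer re-mixes vectors that are already mixtures of the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, L = 6, 16, 37               # sequence length, model dim, number of layers

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# One set of (Wq, Wk, Wv) per layer, fixed up front so the model can be rerun.
weights = [tuple(rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
           for _ in range(L)]

def forward(x):
    h = x
    for Wq, Wk, Wv in weights:
        q, k, v = h @ Wq, h @ Wk, h @ Wv
        A = softmax(q @ k.T / np.sqrt(d))   # (T, T) mixing weights for this layer
        h = h + A @ v                        # residual + weighted sum of value vectors
    return h

x = rng.normal(size=(T, d))                  # stand-ins for (sub)word embeddings
base = forward(x)

# How much does position 0's final vector move if we nudge input position t?
for t in range(T):
    x2 = x.copy()
    x2[t] += 1e-3
    print(t, np.linalg.norm(forward(x2)[0] - base[0]))
# Every input position shows up: the final vector is a blend of the whole
# sequence built up layer by layer, not a meaning "added" by one late layer.
```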
tsarpf.bsky.social
Great visualizations, and an excellent explanation of the KV cache. But their intuitive reasoning about attention adding the meaning(s) of words to others is quite misleading. 1/2
tsarpf.bsky.social
Yeah, it's a bit too silent here, and the recommendation algorithm on bsky is not working great. The amount of clicking "show less like this" I'm doing is stupid.

Meanwhile, every time I check X I find a ton of interesting stuff, unfortunately mixed with a lot of toxic bullshit as well.
tsarpf.bsky.social
What are you referring to? I've missed this.
tsarpf.bsky.social
Request and personal opinion: I would prefer if you focused less on the latest hype the AI swindlers are pushing out.

You have had unique angles on the physics stuff. Meanwhile, anyone with a brain can see that even though OpenAI does very cool research, they over-hype every single release.
tsarpf.bsky.social
Basic idea of #DeepSeek R1, the new RL-trained LLM:

Start from a pretrained LLM and sample K responses per task (e.g., LeetCode problems with N tests). Since the true correct code is unknown, update the weights to favor responses that score better than the group average. Also reward correct output format.

#LLM #genAI #RL
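In toy numpy, the group-relative part looks roughly like this (all numbers and the reward weighting are made up for illustration; the real objective also has PPO-style clipping and a KL penalty to a reference model):

```python
import numpy as np

# K responses sampled for one prompt; rewards come from unit tests
# (fraction passed) plus a small bonus for correct output format.
K = 8
tests_passed = np.array([0, 3, 5, 5, 1, 4, 0, 2]) / 5   # hypothetical test results
format_ok    = np.array([1, 1, 1, 0, 1, 1, 0, 1])       # hypothetical format check
rewards = tests_passed + 0.1 * format_ok

# Group-relative advantage: no learned value function and no known "true" answer,
# each sample is just compared against its siblings from the same prompt.
advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# In the real update this weights the policy-gradient term
#   -advantage * log pi(response | prompt)
log_probs = np.random.randn(K)               # stand-in for per-response log-probs
surrogate_loss = -(advantages * log_probs).mean()
print(advantages.round(2), surrogate_loss.round(3))
```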
tsarpf.bsky.social
And oh yeah, nice visualization! I really liked being able to compare the ELBO and log(Z).
tsarpf.bsky.social
I've taken one course in Bayesian ML, so I barely know the basics 😄

But somehow the fact that there are no consistency/identifiability guarantees even with infinite data makes me afraid of VI 😅

2/2
tsarpf.bsky.social
IRL we don't know the shape of the true posterior (or log Z). In practice, when can you believe in the approximation enough to "dare" estimate uncertainty?

Would you, e.g., try adding GMM components to boost the ELBO? You'd need to keep everything else fixed for comparability, right?

1/2
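For context, the identity I'm leaning on is log Z = ELBO(q) + KL(q || p), so with the unnormalized target fixed, a higher ELBO really does mean a tighter fit. A toy check where log Z is known by construction (all distributions and numbers below are made up for illustration):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)
N = 200_000
w = np.array([[0.5], [0.5]])                 # mixture weights, column for broadcasting

# Unnormalized target: 3 * (bimodal Gaussian mixture), so log Z = log 3 by construction.
def log_p_tilde(x):
    comps = np.stack([norm.logpdf(x, -2, 0.5), norm.logpdf(x, 2, 0.5)])
    return np.log(3) + logsumexp(comps, axis=0, b=w)

log_Z = np.log(3)

def elbo(sample, log_q):
    x = sample(N)                            # Monte Carlo estimate of E_q[log p~ - log q]
    return np.mean(log_p_tilde(x) - log_q(x))

# q1: a single broad Gaussian (misses the bimodality).
elbo1 = elbo(lambda n: rng.normal(0, 2, n), lambda x: norm.logpdf(x, 0, 2))

# q2: a two-component mixture (can actually cover both modes).
def sample_q2(n):
    comp = rng.integers(0, 2, n)
    return rng.normal(np.where(comp == 0, -2, 2), 0.6)

def log_q2(x):
    comps = np.stack([norm.logpdf(x, -2, 0.6), norm.logpdf(x, 2, 0.6)])
    return logsumexp(comps, axis=0, b=w)

elbo2 = elbo(sample_q2, log_q2)
print(f"log Z = {log_Z:.3f}, ELBO(single Gaussian) = {elbo1:.3f}, ELBO(2-comp GMM) = {elbo2:.3f}")
# Both ELBOs sit below log Z; the gap is exactly KL(q || p), so with the target fixed,
# adding components and getting a higher ELBO really does mean a tighter approximation.
```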
tsarpf.bsky.social
For the past 2 years, every time I've tried to use jax-metal it has either refused to work at all due to features being unimplemented, or provided wrong results in a very simple test scenario. So I just use the CPU version on my M2...
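For reference, the kind of "very simple test scenario" I mean, plus how I pin JAX to the CPU backend (the shapes here are arbitrary; JAX_PLATFORMS is the env var JAX reads to pick a backend, and it has to be set before JAX initializes):

```python
import os
os.environ.setdefault("JAX_PLATFORMS", "cpu")   # comment out to let jax-metal take over

import jax
import jax.numpy as jnp

# Tiny computation whose result should match closely across backends.
x = jnp.linspace(0.0, 1.0, 1024)
w = jnp.ones((1024, 1024)) * 1e-3
y = jnp.tanh(w @ x).sum()

print(jax.devices(), float(y))
# When the accelerated backend disagrees with this CPU result (or the op is
# simply unimplemented), I fall back to CPU.
```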
tsarpf.bsky.social
Interested to see if a proper academic bsky will happen! The number of great papers and memes I found on Twitter made me stick with it this long, but something new would definitely be nice.