The place I’d hope to get something out of “quantum stuff” is in handling probabilities in a fundamentally different way, and maybe there are some interesting new nonlinearities to try.
Considering a standard neural network as a mix of nonlinearities applied to linear transformations, it’s not obvious that complex numbers actually add much compared to a real space with twice as many dimensions. Do we get anything by calling our linear transformations operators instead of matrices?
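To make the “twice as many dimensions” point concrete, here’s a quick NumPy sanity check (my own sketch, not from any paper): a complex matrix W = A + iB acting on z = x + iy is exactly the real block matrix [[A, -B], [B, A]] acting on the stacked vector [x, y], so a complex layer is just a real layer of twice the size with a weight-sharing constraint.

```python
import numpy as np

# Sketch: a complex linear map W = A + iB on z = x + iy equals the real
# block matrix [[A, -B], [B, A]] on the stacked vector [x, y], since
# (A + iB)(x + iy) = (Ax - By) + i(Bx + Ay).
rng = np.random.default_rng(0)
n = 4
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
x, y = rng.normal(size=n), rng.normal(size=n)

out_complex = (A + 1j * B) @ (x + 1j * y)      # complex matvec

W_real = np.block([[A, -B], [B, A]])           # structured real matrix
out_real = W_real @ np.concatenate([x, y])     # real matvec in 2n dims

assert np.allclose(out_real[:n], out_complex.real)
assert np.allclose(out_real[n:], out_complex.imag)
```

So the question is whether that weight-sharing structure, or genuinely new nonlinearities on complex values, buys anything a generic real layer doesn’t.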
I haven’t given this a thorough enough read, but it’s probably a good starting point for searching the literature on more thoroughly QM-inspired approaches: proceedings.neurips.cc/paper_files/...
I’d be interested to see a deeper dive on this. My impression is that machine learning could be done in a much more QM-analogous way, based on ideas like “QM is statistics in the L2 norm”. Outside of that, self-attention feels a bit like an operator to me, and anything using softmax feels like L1 stats.
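To illustrate what I mean by that L1/L2 split (my framing, not a standard result), a tiny NumPy example:

```python
import numpy as np

# Softmax: logits -> nonnegative vector with unit L1 norm (a point on the
# probability simplex), which is why it reads as "L1 stats" to me.
logits = np.array([1.0, -0.5, 2.0])
p = np.exp(logits) / np.exp(logits).sum()
assert np.isclose(np.abs(p).sum(), 1.0)      # L1 norm = 1

# QM-style: a complex amplitude vector with unit L2 norm, where the
# squared magnitudes (Born rule) give the probabilities.
psi = np.array([1.0 + 1.0j, 0.5j, -0.5])
psi = psi / np.linalg.norm(psi)              # L2 norm = 1
assert np.isclose((np.abs(psi) ** 2).sum(), 1.0)
```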
“Stand up” was originally supposed to be a team ritual where you announce you’ve had enough coffee for your hair to “stand up” straight from your head like a cartoon
I think an LLM feed could have a chance of doing this. The post on its own would be hard to classify as politics. Right now, this seems related to “inflatable costumes at political protests”, and I think there’s enough text/image content you could put into a context window to get an LLM to say so.
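A rough sketch of what I’m picturing (hypothetical prompt and names, not tied to any specific model API): pack the post plus surrounding context into one prompt and ask the model for a label. The prompt builder below is runnable; sending it to a model is left to whatever chat API you use.

```python
def build_classification_prompt(post_text: str, context_snippets: list[str]) -> str:
    """Combine a post with related context so an LLM can classify its topic."""
    context = "\n---\n".join(context_snippets)
    return (
        "Classify the following post as 'politics' or 'not politics'. "
        "The post alone may be ambiguous; use the context to decide.\n\n"
        f"Context:\n{context}\n\nPost:\n{post_text}\n\nLabel:"
    )

# Hypothetical example: the post alone looks apolitical, but the context
# (news about inflatable costumes at protests) should tip the label.
print(build_classification_prompt(
    "Check out this inflatable T-rex!",
    ["News: inflatable costumes have become a fixture at recent political protests"],
))
```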
This is actually part of why I’m really skeptical of the task-specific small-model thing. The progress of machine learning and deep learning over the last decade or two feels like a steady replacement of expertise curated by human experts with data- and compute-driven approaches.
This is a hazy memory more than a recollection of fact, but I think AlphaGo and all that were the biggest compute consumers of their time. I don’t remember the balance between inference compute and training compute. Moving to AlphaZero-style RL was a victory for replacing expert data with massive compute.
Totally agree. It’s fun to take a step back and see some of the challenges that tech has overcome. Another standout for me is Go, which I was told was maybe a 15-year problem for AI when I learned to play somewhere around 2009. AlphaGo solved that 15-year problem just 6 years later!
As a 2014 comic though, it wasn’t that far off! I think the “Not Hotdog” app released in 2017 gives a reasonable date for when that task got easy. 3 years vs 5 is not too shabby in tech prediction!
Trump pops his head into Pete’s dorm room. “Pete, sober up quick Pete. We’ve gotta do something about Al Qaeda Pete” “Are you that guy from Home Alone 2?” “Sure Pete, but this is more important than that Pete. Many are saying it Pete, get sober and fight Al Qaeda” “I’m 19… and wtf is an Al Qaeda?”
Awesome! Strong recommendation for quality then. It introduces a fun mix of new challenges, and epic-quality stuff is a solid upgrade over normal quality.
That’s probably for the best in some ways. I’ve yet to see a case where someone starts quality for the first time and doesn’t spend 20-40 hours bricking random parts of their base because they put quality modules in some miners and now the brick furnaces have 4 different kinds of stone to work with.