inkorak.bsky.social
@inkorak.bsky.social
Emad was an incompetent VC who was very lucky during the DALL-E 2 hype to find the guys who wrote a paper on latent diffusion and give them money.
December 9, 2025 at 10:21 AM
Well, about the future...
bsky.app/profile/inko...
December 8, 2025 at 7:27 AM
Welcome to the new quiz show "Is It AI or a Compression Artifact?"
December 8, 2025 at 6:45 AM
Ripoff of House of Leaves.
December 2, 2025 at 4:40 PM
Yes, if we set aside purely tabular data, then all other useful ML is simply optimization using backpropagation. Optimization using backpropagation has proven to be very versatile and powerful at scale. And you can solve any problem, to a degree, if you can construct a loss function for it.
December 2, 2025 at 9:04 AM
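A minimal sketch of what "construct a loss function for it" cashes out to in practice, on a made-up toy problem (fitting sin(x)); the architecture and numbers are arbitrary placeholders, not anything specific from the posts above.

```python
import torch

# Hypothetical toy problem: learn to map x -> sin(x) from samples.
# Any problem framed as "numbers in, numbers out" follows the same recipe:
# a model with free parameters, a loss that scores its outputs, backprop to minimize it.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
loss_fn = torch.nn.MSELoss()            # the "is this output good or bad" function
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)         # score the current outputs
    loss.backward()                     # backpropagation computes the gradients
    opt.step()                          # nudge the free parameters downhill
```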
And now look at this. This is Z-Image Turbo. It can run on a consumer GPU.
November 27, 2025 at 9:04 AM
Yeah, a top-end model with six billion parameters and a regular gaming video card is a match made in slop heaven.
November 27, 2025 at 2:51 AM
Only video and image models are progressing, thanks to China. Here's a new image model, only 6 billion parameters, but the quality is on par with the 20-billion Qwen-Image.
huggingface.co/Tongyi-MAI/Z...
Tongyi-MAI/Z-Image-Turbo · Hugging Face
November 27, 2025 at 1:47 AM
Yes, LLMs are currently the most problematic monster, so it's hard to imagine them running locally with quality close to the state of the art, even in the long term.
November 27, 2025 at 1:45 AM
Well, as far as I remember, there were some restrictions, like you could only choose 18 characters out of over a hundred. But yeah, with the current AI labs' capabilities, it's overkill.
November 26, 2025 at 10:50 AM
Nah, it's Vince McMahon.
November 13, 2025 at 3:49 PM
Interesting day. Also tpot is leading a crusade to defend the Pope against Marc Andreessen.
November 9, 2025 at 10:02 AM
Even if the bubble bursts, models will still improve. I wouldn't be surprised if the bursting actually helps, directing resources toward efficiency and low-hanging fruit rather than scale. Plus, in the longer term, hardware advances will make model inference increasingly accessible and affordable.
November 9, 2025 at 8:10 AM
How well is another question, of course, but it can be done. That's why the scope of neural networks' applications is incredibly vast.
October 24, 2025 at 4:47 PM
Deep learning is incredibly versatile. If a problem can be represented as numbers in and numbers out, we can construct an error function for it, and there's a ton of data, then it can be solved.
October 24, 2025 at 4:47 PM
An audio spectrogram of a voice at the input and a probability distribution over tokens at the output? Speech to text. Text token indices at the input and image pixels at the output? Image generation. I'm greatly simplifying, of course, but the essence remains the same.
October 24, 2025 at 4:46 PM
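A shape-only sketch of those two mappings, with every dimension and vocabulary size made up, and single layers standing in for real speech or image models:

```python
import torch

# Speech to text: a (time, mel_bins) spectrogram in, per-frame token probabilities out.
spectrogram = torch.randn(400, 80)                  # 400 frames, 80 mel bins (assumed)
stt_head = torch.nn.Linear(80, 512)                 # 512 = assumed token vocabulary
frame_token_probs = stt_head(spectrogram).softmax(dim=-1)      # shape (400, 512)

# Image generation: token indices in, pixels out (wildly simplified, no diffusion here).
prompt_tokens = torch.randint(0, 32000, (16,))      # 16 token indices from an assumed vocab
embed = torch.nn.Embedding(32000, 256)
to_pixels = torch.nn.Linear(256, 64 * 64 * 3)
pixels = to_pixels(embed(prompt_tokens).mean(dim=0)).view(64, 64, 3)   # a 64x64 RGB "image"
```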
An array of numbers at the input, the pixels of a scanned image with text, and a probability distribution over the alphabet at the output? That's OCR. Token indices at the input and the probability distribution of the next token at the output? That's an LLM.
October 24, 2025 at 4:45 PM
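Same idea for these two, as an assumed-shapes sketch; the flatten, embedding, and linear layers are placeholders for whatever network actually does the work:

```python
import torch

# OCR: pixels of a scanned text line in, a distribution over characters out.
line_image = torch.rand(1, 32, 128)                          # 1 channel, 32x128 pixels (assumed)
ocr_features = torch.nn.Flatten()(line_image.unsqueeze(0))   # (1, 4096)
char_logits = torch.nn.Linear(32 * 128, 26 + 10)(ocr_features)
char_probs = char_logits.softmax(dim=-1)                     # distribution over an assumed alphabet

# LLM: token indices in, a distribution over the next token out.
vocab_size = 32000                                           # assumed vocabulary size
tokens = torch.randint(0, vocab_size, (1, 12))               # a 12-token context
hidden = torch.nn.Embedding(vocab_size, 256)(tokens).mean(dim=1)   # stand-in for the transformer
next_token_probs = torch.nn.Linear(256, vocab_size)(hidden).softmax(dim=-1)   # (1, 32000)
```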
We have a bunch of math with free parameters; that's the model. It takes arrays of numbers as input and outputs arrays of numbers. We have an error function that can determine whether the output numbers are good or bad, and we have an algorithm that selects the free parameters to minimize the error function.
October 24, 2025 at 4:45 PM
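That whole recipe in one toy sketch with made-up data: a line with two free parameters as the "bunch of math", mean squared error as the error function, and plain gradient descent as the algorithm that tunes the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)   # hidden "true" relationship

w, b = 0.0, 0.0                                    # the free parameters
lr = 0.1
for _ in range(200):
    pred = w * x + b                               # numbers in -> numbers out
    err = pred - y
    loss = np.mean(err ** 2)                       # "are these outputs good or bad?"
    grad_w = 2 * np.mean(err * x)                  # hand-derived gradients of the loss
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w                               # move the parameters downhill
    b -= lr * grad_b
```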
It's a problem of definitions. When it comes to deep learning, generative AI isn't fundamentally different from other deep learning models.
October 24, 2025 at 4:43 PM
Ahem, tagging on anime art booru sites.
October 22, 2025 at 4:23 AM
I think I understand the form as a relationship between weight and height. An LLM that maps arbitrary-length combinations from a discrete vocabulary of vectors to probability distributions over that vocabulary is probably similar, but my intuition says fuck you in this case.
October 14, 2025 at 6:31 AM
This hardly makes sense to the layperson. I'm a dumb ML engineer and can write a simple transformer from memory, but I still have a hard time grasping what a variable composed of other variables can represent in weights or activations, especially in terms of the intuition of shape as a continuum.
October 14, 2025 at 5:42 AM
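For reference, the core of that "simple transformer" boils down to something like a single unmasked self-attention head; the dimensions here are arbitrary, and this skips multi-head attention, masking, the MLP block, and normalization.

```python
import torch

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv                 # project tokens to queries/keys/values
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v                # weighted mix of the values

d = 64
x = torch.randn(12, d)                               # 12 token embeddings (made up)
wq, wk, wv = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
out = self_attention(x, wq, wk, wv)                  # shape (12, 64)
```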
Gemini also gives its thought process.
October 14, 2025 at 5:33 AM
Yud is certainly not the best Harry Potter fanfiction author, but the Antichrist is a bit much.
October 10, 2025 at 3:16 PM