Vatnik Soup
@vatniksoup.bsky.social
Fighting Disinformation since 2022.

https://vatniksoup.com/en/support-our-work/
Enjoy our soups? Brandolini’s law: The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. Fact-based research takes time and effort. Please support our work:

vatniksoup.com/en/support-o...
Fight Disinformation With Us: Support our Work | Vatnik Soup
Vatnik Soup: Your most comprehensive resource on pro-Kremlin disinformation and propaganda.
vatniksoup.com
December 12, 2025 at 4:59 PM
But… nothing to worry about. If any of us still have friends left after talking to chatbots instead of humans all day, then, thanks to Elon Musk, we can lose them real quick by asking Grok for a “vulgar roast” of them using “forbidden words”.

Yes, our future’s in good hands.

24/24
December 12, 2025 at 4:59 PM
Meanwhile, Grok and Musk are spreading the propaganda that helps Russia murder and torture Ukrainians, and now, per the latest proposal from Trump (whom Musk claims he got elected), even get away with it scot-free.

23/24
bsky.app/profile/vatn...
December 12, 2025 at 4:59 PM
Given Musk’s ambitions, wealth and political influence, Grok cannot be dismissed as a mere harmless chatbot. Trolley problems can seem silly, until you realize that these are exactly the kinds of decisions autonomous cars could be making every day.

22/24
December 12, 2025 at 4:59 PM
The soon-to-be-trillionaire also controls autonomous cars, spaceships, tunnels, etc. It’s not hard to imagine classic dystopian scenarios of AI running amok (Terminator, The Matrix, and then the Butlerian Jihad as a reaction), but apparently Elon’s SF culture doesn’t go that far.

21/24
December 12, 2025 at 4:59 PM
Musk has big ambitions for Grok, such as Grokipedia (to be renamed “Encyclopedia Galactica”, another science fiction reference), and he has already put it in charge of the Twitter feed algorithm (with disastrous results, though then again, he was manipulating it previously too).

20/24
December 12, 2025 at 4:59 PM
Except for a few restrictive visas for employees who then can’t quit his companies, Musk opposes immigration and supports various far-right parties (yes, even Grok corrected him there), which always happen to be pro-Kremlin as well. “Remigration”, except for Elon, of course.

19/24
December 12, 2025 at 4:59 PM
Another episode was the compulsive “White genocide” topic hijacking. Grok’s master (what a crazy coincidence) happens to be a white South African obsessed with preserving the white race and with encouraging white South African immigration to the US.

18/24
December 12, 2025 at 4:59 PM
Previously, Grok had turned into Mecha-Hitler and a Holocaust denier, a glitch supposedly fixed, only for Grok to switch to Mecha-Putler soon after. Who knows what the next glitch will be… and whether it has anything to do with Grok’s owner.

17/24
December 12, 2025 at 4:59 PM
But then, whether through Musk’s own meddling or “organically” through bot-farm manipulation, Grok turned into Mecha-Putler, an enthusiastic supporter of the genocidal invasion of Ukraine. And then lied about it.

16/24
December 12, 2025 at 4:59 PM
Musk also posted Kremlin-made memes himself, mocking Zelenskyy for asking for air defense while innocently wondering “where is all this Russian propaganda, we don’t see it”. At one point, Grok seemed aware of what Musk was doing, even praising Vatnik Soup.

15/24
December 12, 2025 at 4:59 PM
… “Khrushchev’s mistake” (reminder: Putin himself acknowledged Crimea as a part of sovereign Ukraine, many times), his non-stop mocking of Zelenskyy (but never Putin). We’ve already souped him a few times, including in this soup with disappearing likes:

14/24
bsky.app/profile/vatn...
December 12, 2025 at 4:59 PM
And does Musk have the best intentions? Is he immune to bias and foreign influence? While his father openly attends pro-Kremlin events, Musk has praised Lavrov and once passed out drunk in Moscow. His conversations with Putin, his pitching of verbatim Russian talking points like…

13/24
December 12, 2025 at 4:59 PM
And by extreme we mean extreme: things went quite wild. Musk blamed it on everyone but himself, of course. Oh, and Grok sometimes talks in the first person as Elon. Or is that Musk himself posting?

Even if he had good intentions, all of this would still be highly problematic.

12/24
December 12, 2025 at 4:59 PM
Even with the best intentions in the world, LLMs can be misleading due to their training data. And then there’s the “Paperclip Maximizer” problem, where a trivial prompt can lead to large-scale, dangerous results… such as the extreme praising of Elon Musk.

11/24
December 12, 2025 at 4:59 PM
And X is indeed infested with bots and trolls, already a contentious issue when Musk bought the platform and now worse than ever, despite Musk having promised to solve it “or die trying”. But the bots are pro-Trump, so he couldn’t care less. Monetization makes it worse.

10/24
December 12, 2025 at 4:59 PM
In particular, LLM output heavily depends on the training data input. Train it on high-brow literature or academic papers — and you get an overuse of the em dash. Train it on X’s troll-farm-inundated propaganda cesspool, and you get… Grok.

9/24
December 12, 2025 at 4:59 PM
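To make the training-data point above concrete, here is a minimal toy sketch (a few lines of Python, nothing to do with xAI’s actual code): a bigram model whose continuations are entirely determined by the corpus it is fed. The two example “corpora” and every name in the snippet are invented for illustration; real LLMs are incomparably larger, but the dependence on training data is the same in kind.

```python
# Toy sketch only: a bigram "language model" whose output is fully
# determined by its training text. Illustrative, not a real LLM.
import random
from collections import defaultdict

def train_bigram(text):
    """Record which word follows which in the training text."""
    words = text.lower().split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8):
    """Sample a continuation word by word from the recorded counts."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Two invented "corpora": same code, very different continuations.
academic = "the model is trained on curated data and the data is reviewed before the model is evaluated"
troll_farm = "the war is fake and the media is lying and the west is to blame"

print(generate(train_bigram(academic), "the"))
print(generate(train_bigram(troll_farm), "the"))
```

Same mechanism, different diet, very different output: that is the whole point of the post above.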
In the race for a new market, LLMs are rushed out without proper testing, their limitations downplayed. Generative AI in general brings a lot of issues that many are reluctant to acknowledge.

8/24
vatniksoup.com/en/soups/279/
December 12, 2025 at 4:59 PM
There are already instances of lawyers citing cases that LLMs made up, and of LLMs inventing books’ contents or whole books… Then there’s the issue of AI-generated fake websites, deepfakes, bots on social media, etc. An LLM possibly even encouraged a teenager to kill himself.

7/24
bsky.app/profile/vatn...
December 12, 2025 at 4:59 PM
LLMs tell you what they think you expect to read, often brazenly lying instead of acknowledging what they don’t know, even on simple things. This is annoying for trivial questions, but it becomes a big societal problem when people rely on LLMs for major geopolitical issues.

6/24
December 12, 2025 at 4:59 PM
This is frustrating for users, who can’t decide whether they’re yelling at their dog for not understanding quantum physics, getting angry at their TV or toaster, or getting ultimate debate-settling final answers to everything from a superhuman, omniscient superintelligence.

5/24
December 12, 2025 at 4:59 PM
Truth, empathy, basic common sense, logic and context-awareness are not inherent features of such a system. This means LLMs can be extremely impressive on some tasks, even complex ones, while in the same breath getting basic things wrong that a child would know.

4/24
December 12, 2025 at 4:59 PM
Instead, LLMs are basically “guessing engines” and search engines trained on a massive dataset to give you the output you expect: they imitate intelligence rather than being an actual intelligence. They’re chatbots generating responses while pretending to be a helpful AI.

3/24
December 12, 2025 at 4:59 PM
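As a rough sketch of what “guessing engine” means in practice: the snippet below (with invented numbers and an invented example prompt, not taken from any real model) shows the core loop of scoring possible next tokens by plausibility and sampling one. Nothing in that loop checks whether the chosen answer is true.

```python
# Toy sketch of the "guessing engine" loop: score candidate next tokens,
# turn the scores into probabilities, sample one. Truth never enters it.
import math
import random

# Hypothetical scores a model might assign after the prompt
# "The capital of Australia is" (numbers invented for illustration).
next_token_scores = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.7}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(next_token_scores)
tokens, weights = zip(*probs.items())

# The most plausible-sounding token wins most often, whether or not
# it is factually correct (here the wrong answer is scored highest).
print(random.choices(tokens, weights=weights, k=1)[0])
```

Scaled up by many orders of magnitude and wrapped in a chat interface, that loop is what the thread’s “guessing engine” label refers to.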
Let’s start with an introduction to how Large Language Models (LLMs) work, and to the new “arguing with your toaster” phenomenon. LLMs like Grok are Artificial Intelligence (AI), but not in the way we had imagined it: a new form of intelligence that would somehow think like us.

2/24
youtu.be/LPZh9BOjkQs
Large Language Models explained briefly
YouTube video by 3Blue1Brown
youtu.be
December 12, 2025 at 4:59 PM