Phillip de Wet
@phillipdewet.bsky.social
Editor, columnist, ex News24, ex Business Insider SA, ex Mail & Guardian, ex Daily Maverick. Living in Canterbury, writing about geopolitics, thinking about AI, advocating risk-adjusted writing.
The UK's energy minister this week talked about the national grid running entirely off clean* energy for one hour recently, albeit at 3:30AM.

Haven't confirmed, but I suspect that is a G7 milestone.

* Nuclear and biomass alongside renewables.
Google and OpenAI effectively have infinite money. Apple and Anthropic and Meta aren't far behind. I'm deep into the Google ecosystem, with a bit of Apple. Yet, wildly promiscuous LLM user that I am, Perplexity is still my go-to answer engine.

Starting to think we don't need to worry about monoculture.
Maybe I spent too much time as a kid with the Civilisation games' technology trees. But it really does feel like ceremonial burial can lead to satellites. And that climate change will be fought with technologies created adjacent to industries that helped cause it.
Mobile phones begat battery development that massively accelerated renewable energy.

There seems a decent chance AI will do the same for carbon capture, because data centres that can't wait turn to LNG – but can't defend doing so. And have, effectively, infinite money to address that tension.
Quantum is so weird. The encryption side is scrambling to stay ahead of it, the startups are wondering when the avalanche of money will hit – yet we don't have a single proven ROI-capable use case.

Unprecedented potential, to be sure, potentially an utter transformation, but it's all theoretical.
These are not people who come from a place of belief in Trump, but diplomats and academics desperately trying to make sense of what is, in a functional sense, divine self-contradiction.
Good thing we still keep some theologians around for just this kind of eventuality.
*drafts feature pitch*
Remarkable to listen to analysts talk about Donald Trump's statements (in this case on Nigeria) with the same language Christians use about the Old Testament: it's true but don't take it literally, it meets the morality of the audience, influences through description though it sounds like command.
Cracking TLS certificates could basically break the entire internet as we know it, doing unimaginable harm. But the crypto good guys are on it.

blog.cloudflare.com/bootstrap-mtc/

A while ago it was Signal. Likewise, post-quantum could be utter disaster for messaging. And likewise, they're on it.
Keeping the Internet fast and secure – introducing Merkle Tree Certificates
Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships.
All the noise is about AI. But it is striking how 2025 is becoming the year in which the IT sector comes to grips with the encryption disaster that is quantum computing.

This week: Cloudflare says it is about to start experimenting with a neat way to prevent server impersonation.
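For the curious: this is not Cloudflare's actual construction (the linked post has those details), just a minimal sketch of the Merkle-tree primitive the scheme builds on. A whole batch of certificates is committed to by a single root hash, and any one certificate can be proven to be in the batch with a short path of sibling hashes. All names below are my own illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of leaf payloads with one 32-byte root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes linking one leaf to the root: (hash, sibling_is_left)."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1          # the neighbour in this pair
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from leaf to root; True iff it matches."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The point of the structure is why the proofs stay small: for n certificates in a batch, an inclusion proof is only about log2(n) hashes, which is what makes the scheme cheap enough to ship alongside a TLS handshake.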
This begs for some FOIs and data work, on the reliance of gov dot uk on what-was-Twitter, and whether there is even discussion on spinning up some Mastodon servers or something.
(Which you can totally classify as extended defence spending under Nato rules...)
One way to think about it: like any good employee, LLMs work towards their metrics – and if you have bad metrics, you get bad output from, eventually, bad employees.

Some good stuff happening in benchmarking, though, which could help significantly.

www.science.org/content/arti...
AI hallucinates because it’s trained to fake answers it doesn’t know
Teaching chatbots to say “I don’t know” could curb hallucinations. It could also break AI’s business model
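The core of the argument is a scoring-rule problem: if a benchmark gives zero for "I don't know" and zero for a wrong answer, then guessing at any confidence above zero beats abstaining, so training optimises for confident fabrication. A toy expected-score calculation (my own illustration, not taken from the article) makes the incentive visible:

```python
def expected_score(p_correct, abstain=False, wrong_penalty=0.0):
    """Expected benchmark score for one question.

    p_correct    -- model's chance of guessing right
    abstain      -- answer "I don't know" instead (always scores 0)
    wrong_penalty -- points deducted for a wrong answer
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

# Standard 1/0 grading: a 10%-confidence guess beats abstaining.
guess = expected_score(0.1)                       # 0.1 > 0.0
# Penalise wrong answers and the incentive flips for low confidence.
penalised = expected_score(0.1, wrong_penalty=0.5)  # 0.1 - 0.45 = -0.35
```

With a wrong-answer penalty of 0.5, the break-even point sits at one-third confidence – below that, "I don't know" is the metric-maximising answer, which is exactly the behaviour the benchmarking work wants to reward.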
There is no better pro-parent propaganda than Sunday morning. And that hits at a more visceral level than tax breaks and parental leave ever will.
Tactical proposal for countries with low birth rates: get younger people into family-friendly social spaces before 9AM Sundays.

Instead of sleeping off Saturday night, get the late-20-somethings where the toddlers – fresh from a good sleep, managed by parents not trying to juggle work – are being cute.
This has always been a fundamental problem across journalism and data analysis both: you need to ask the right questions. A companion system that acts as a sense-check on what I've missed is terribly exciting, now that I see it in action.
Gemini is the one top-end system I effectively get for free, and until this week it ranked way, waaaaayyyyyy below Perplexity and ChatGPT in use.
Then integration with Google Docs got good, and suddenly I'm using it to answer a lot of questions about my work and financials I never thought to ask.
But hey, the boats come with helicopters, so I guess people find ways to get around.
First time back in Manhattan for a couple of years after hanging around Europe, and I've figured out what feels wrong: too few bicycles.

Elsewhere they have grown into predatory flocks you had better keep an eye on if you like your limbs unbroken. Here they are few, and furtive.
"I lost propulsion while going at low speed" is the good news from the Jeep software update – over the air, just needed drivers to click "yes" – that bricked vehicles mid-drive.

www.thestack.technology/jeep-softwar...
Jeep software update bricks vehicles
“Please exercise extreme caution”
The circular deals the likes of NVIDIA and OpenAI are cooking up? A potential disaster. The idea that AI is pure hot air? Not buying it.
The BoE and IMF warnings about the consequences of an AI bubble popping are necessary and important. The danger is real.

But, as someone who lived through dotcom, there is a fundamental difference: AI has huge revenue.

Customer value is in doubt. Cashflow is not.
I was a second-generation kid in a school system that actively indoctrinated for Apartheid, alongside robust religious institutions and the like.

And then, one day, the majority of people who went through that system went "yeah, let's not".

Conversion and radicalisation aren't the norm.
It doesn't help that an ex Deloitte partner heads the government department that has happily accepted all of this.

I sometimes think consultants are given too hard a time for no good reason – and then they do stuff like this.
Honestly can't remember the last time I saw such an outstanding effort to keep failing at every turn – and to synergise the individual failings.

Deloitte in Australia
* didn't disclose the use of an LLM in an important report
* didn't use its own LLM
* didn't check the LLM output
* issued a faceless "we stand by our work" when busted
* took more than a month to correct errors
* underplayed those errors
* didn't do any kind of erratum
* offered, or accepted a demand for, only a partial refund
* still won't talk about it

www.thestack.technology/deloitte-use...
Deloitte used govt client's own GPT to invent references
Then failed to disclose it, took a month to fix errors publicly pointed out, and is apparently on the hook for a partial refund
There should be a specific prize for that kind of high-context, high-foresight, high-skill shot.
Maybe named the This Is Why You Still Need Specialist Photographers Yes Even With GenAI You Bloody Beancounters prize.