sromary
@sromary.bsky.social
writer, photographer, motorcycle adventures, football, and travel
And she ends chapter one by showing how the quest to win AI dominance is being led by "...the polarized values, clashing egos, and messy humanity of a small handful of fallible people."
November 22, 2025 at 3:18 AM
Ah here we go... "Over the next four years, OpenAI became everything that it said it would not be." (p.14). With characters such as Thiel and Musk as backers, I can see a pattern here.
October 22, 2025 at 5:15 PM
Ten pages in and it's reading like a soap opera...who did what to whom, as OpenAI fired and rehired Altman. I need to get past this part and onto how it's an Empire - this is why I bought the book.
October 22, 2025 at 5:06 PM
Bought your book and I'm on page one. Both encouraged and terrified.
October 22, 2025 at 2:05 AM
Her book is based on over 300 interviews with 260 people, with details corroborated by at least two sources.
October 22, 2025 at 2:03 AM
KH begins with a quote from Sam Altman from 2013, talking about how "The most successful people create religions."
October 22, 2025 at 1:59 AM
He ends the book with the Luddites, arguing that the change happening now is different: more adapted to human needs and wants than focused on industrial output. But to get there, intense and unprecedented containment is a prerequisite.
October 14, 2025 at 6:26 PM
NGOs, media, trade unions, grassroots campaigns, philanthropic organisations
October 14, 2025 at 6:12 PM
Despite Suleyman's efforts, DeepMind and Google were not able to establish ethical practices and oversight - AI was supposed to be open source, with funding set aside for social good, but in the end shareholder profit prevailed. Pushing from within for such mechanisms is another of the 10 tools.
October 14, 2025 at 6:04 PM
Choke points in AI tech can be used to slow development and buy time for measured implementation - e.g. just 3 companies collaborate to produce the chips needed: NVIDIA, TSMC, and ASML.
October 14, 2025 at 5:52 PM
Independent audits of AI tools to ensure transparency and trust - e.g. SecureDNA, an oversight program that screens for potentially dangerous elements.
October 13, 2025 at 5:47 PM
AI systems can be "boxed" - designed with "air gaps" and "sandboxes" to thoroughly test AIs before release into the wild.
October 13, 2025 at 5:47 PM
Lays out 10 steps to AI containment - beginning with direct funding of safety research on how best to teach AIs to be ethical and morally responsible, the same way we teach children right from wrong.
October 13, 2025 at 5:47 PM
Making the case that regulation alone is never enough (e.g. we still have road accidents despite traffic laws). And there's also "pessimism aversion" - a reluctance to take grim scenarios seriously.
October 13, 2025 at 1:33 PM
These examples and more are why Suleyman says containment must be possible - he refers to the EU's AI Act, still being developed at the time (actually established after the book's publication) www.europarl.europa.eu/topics/en/ar...
October 13, 2025 at 1:33 PM
But there's also unintended catastrophe, such as an AI engine misunderstanding its purpose and running amok - turning the world into paper clips, caught in an infinite loop heading in the wrong direction.
October 13, 2025 at 1:33 PM
Some of the "bad actor" AI-assisted apocalyptic potential Suleyman outlines:
terrorist drone swarms, engineered pandemics, political assassinations and attacks, disinformation and fake news
October 13, 2025 at 1:33 PM
Suleyman also sees a potential fracturing of power: access to AI tech might hand small groups considerable power that once belonged to the state.
October 13, 2025 at 12:18 PM