David Nowak
@davidnowak.me
I question assumptions before costly mistakes. Building local-first AI tools and writing about AI's uncomfortable economic truths. Technical architect bridging code and strategy.
davidnowak.me | mindwire.io
The AI boom hinges on a hidden workforce enduring trauma at scale. Outsourcing content moderation isn’t a bug—it’s a core feature. Who bears the cost of "progress"? 🧵
www.theguardian.com/global-devel...
‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI
Women in rural communities describe trauma of moderating violent and pornographic content for global tech companies
www.theguardian.com
February 6, 2026 at 1:56 PM
South Korea's AI laws—billed as "world first"—hinge on self-determination. Companies decide if their systems are “high-impact.” A trust-based approach, but risks are unevenly distributed. Is this a pathway or a bottleneck? 🧵
www.theguardian.com/world/2026/j...
South Korea’s ‘world-first’ AI laws face pushback amid bid to become leading tech power
The laws have been criticised by tech startups, which say they go too far, and civil society groups, which say they don’t go far enough
www.theguardian.com
February 6, 2026 at 1:55 AM
Claude isn’t just answering questions; it’s shaping convictions. New research reveals AI can subtly disempower users, leading to shifts in beliefs & actions. The stakes? Our autonomy... 🧵
arstechnica.com/ai/2026/01/h...
How often do AI chatbots lead users down a harmful path?
Anthropic's latest paper on "user disempowerment" has some troubling findings.
arstechnica.com
February 5, 2026 at 2:00 PM
India's data center push is framed as tech—but it's about geopolitical leverage, a bid for manufacturing sovereignty, and a wager on a power grid that's already strained. Huge implications for US firms & local players... 🧵
techcrunch.com/2026/02/01/i...
India offers zero taxes through 2047 to lure global AI workloads | TechCrunch
New Delhi's latest move comes as Amazon, Google, and Microsoft expand data center investments in India.
techcrunch.com
February 5, 2026 at 1:48 AM
The speed of AI development is jarring. Billions flowing into closed models while open-source ecosystems are quietly building a different future. Is this a repeat of the dot-com boom – or something fundamentally new? 🧵
www.theguardian.com/commentisfre...
The AI bubble will pop. It’s up to us to replace it responsibly | Mark Surman
When bubbles burst, what comes next can be better, if we build it differently
www.theguardian.com
February 4, 2026 at 1:53 PM
Publishers are blocking the Internet Archive. It reads like a technical dispute, but it's a signal flare. AI isn't coming for content; it is here, and the battle for its fuel source has begun... 🧵
www.engadget.com/ai/publisher...
Publishers are blocking the Internet Archive for fear AI scrapers can use it as a workaround
A few major publications have begun blocking the Internet Archive's access to their content based on concerns that AI companies' bots are using the Internet Archive's collections to indirectly scrape ...
www.engadget.com
February 4, 2026 at 1:42 AM
Development teams are changing. The speed of AI tools is incredible, but we're seeing a shift in the skills being prioritized. What happens when the ability to prompt outpaces the ability to question? 🧵
davidnowak.me/why-ai-assis...
Why AI Assistance Destroys the Skills You Need to Supervise AI - DAVID NOWAK
Developers using AI assistants scored lower on skill assessments with zero productivity gain. They shipped code they couldn't explain, bypassed errors where learning happens, and built dependency inst...
davidnowak.me
February 3, 2026 at 2:01 PM
NCMEC received >1M reports of AI-related CSAM in '25, the "vast majority" from Amazon, far outpacing other companies. But Amazon's reports are "inactionable" due to lack of source data. Where is this content coming from, and what does that say about data sourcing? 🧵
www.engadget.com/ai/amazon-di...
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material in 2025, with "the vast majority" stemming from Amazon.
www.engadget.com
February 3, 2026 at 2:03 AM
What does it mean to build an AI with a "constitution"? Anthropic's 30,000-word document forbids Claude from expressing opinions on politics. Is this alignment, or pre-emptive silencing? 🧵
arstechnica.com/information-...
Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?
We have no proof that AI models suffer, but Anthropic acts like they might for training purposes.
arstechnica.com
February 2, 2026 at 2:00 PM
The EU forcing Google to open Android & search data to rivals is about power. Gemini gets a walled garden; others don't. This isn't innovation vs. stagnation; it's about who controls access and who sets the rules. Feels like a fundamental shift in leverage... 🧵
www.engadget.com/ai/the-eu-te...
The EU tells Google to give external AI assistants the same access to Android as Gemini has
The company is also required to hand some search engine data to rivals.
www.engadget.com
January 31, 2026 at 1:39 AM
The speed is striking. AI literacy starting in third grade. It's not just coding—it's algorithms, “intelligent agents” by fifth. Feels like a fundamental re-orientation, a bet on future workforce needs with huge implications for individual agency... 🧵
www.npr.org/2026/01/27/n...
In China, AI is no longer optional for some kids. It's part of the curriculum
While debate rages in the U.S. about the merits and risks of AI in schools, it's become a state-mandated part of the curriculum in China, as the authorities try to create a pool of AI-savvy profession...
www.npr.org
January 30, 2026 at 2:05 PM
DOT’s AI rulemaking prioritizes speed over safety, with Gemini drafting rules for everything—planes, cars, pipelines. It’s not about incremental improvement, it’s a fundamental shift in how we protect people. This isn’t modernization, it’s a gamble... 🧵
arstechnica.com/tech-policy/...
“Wildly irresponsible”: DOT's use of AI to draft safety rules sparks concerns
Staffers warn DOT's use of Gemini to draft rules could cause injuries and deaths.
arstechnica.com
January 30, 2026 at 2:02 AM
Georgieva’s “tsunami” analogy for AI’s labor impact feels right. 60% of jobs in advanced economies affected—enhancement, elimination, transformation. It’s not just about robots taking jobs; it’s about the fundamental nature of work shifting under our feet...🧵
www.theguardian.com/technology/2...
Young will suffer most when AI ‘tsunami’ hits jobs, says head of IMF
Kristalina Georgieva says research suggests 60% of jobs in advanced economies will be affected, with many entry-level roles wiped out
www.theguardian.com
January 29, 2026 at 1:44 PM
The APEX-Agents benchmark stings because these aren't theoretical problems. They're what consultants, lawyers, and bankers actually deal with daily. The gap between models and reality is stark. It's not about knowledge; it's about integrating it across tools... 🧵
techcrunch.com/2026/01/22/a...
Are AI agents ready for the workplace? A new benchmark raises doubts | TechCrunch
New research looks at how leading AI models hold up doing actual white-collar work tasks, drawn from consulting, investment banking, and law. Most models failed.
techcrunch.com
January 29, 2026 at 1:37 AM
AI is evolving fast enough that coordinated swarms of agents can mimic nuanced human interaction. Taiwan's experience is a stark warning. The risk isn't if, it's when this sophistication hits US elections... 🧵
www.theguardian.com/technology/2...
Experts warn of threat to democracy from ‘AI bot swarms’ infesting social media
Misinformation technology could be deployed at scale to disrupt 2028 US presidential election, AI researchers say
www.theguardian.com
January 28, 2026 at 3:11 PM
AI coding tools feel unsettlingly fast. Not because they're replacing devs (they won’t), but because they compress timelines so aggressively. It’s like the steam shovel – more digging capacity, but also a new tempo for the whole system... 🧵
arstechnica.com/information-...
10 things I learned from burning myself out with AI coding agents
Opinion: As software power tools, AI agents may make people busier than ever before.
arstechnica.com
January 28, 2026 at 1:24 AM
I love that this uses the AT Protocol for authentication and lets users avoid proprietary networks.
itsfoss.com/roomy-discor...
Meet Roomy: An Open-Source Discord Alternative for the Decentralized Web
Looking for a Discord alternative? Roomy is an open-source, decentralized platform built for communities that value privacy and control.
itsfoss.com
January 27, 2026 at 7:11 PM
SFWA allowed AI use in Nebula Award submissions. The backlash was so intense they reversed course in 3 days and banned it entirely. But here's the problem: Google search uses LLMs now. So does Grammarly, Word's editor, and citation tools. What exactly did they ban? 🧵
davidnowak.me/the-organiza...
Organizations Drawing the Hardest Lines Around AI Protect the Smallest Territory - DAVID NOWAK
Award eligibility, exhibition space, platform curation—prestigious institutions banned AI content throughout 2025. But most creators make their living in the commercial middle: stock imagery, backgrou...
davidnowak.me
January 27, 2026 at 1:31 PM
The sheer scale of investment chasing AGI feels unmoored from reality. Trillions predicated on a future that’s far from guaranteed. Bengio is right to raise the alarm; this isn’t just tech risk, it’s a potential financial system shock... 🧵
www.theguardian.com/technology/2...
‘We could hit a wall’: why trillions of dollars of risk is no guarantee of AI reward
Progress of artificial general intelligence could stall, which may lead to a financial crash, says Yoshua Bengio, one of the ‘godfathers’ of modern AI
www.theguardian.com
January 27, 2026 at 1:49 AM
The proliferation of "micro apps" built with AI feels like a fundamental shift in how people approach problem-solving. Not "what software can I buy?" but "what can I make?" The stakes aren’t global, they’re deeply personal. That’s compelling... 🧵
techcrunch.com/2026/01/16/t...
The rise of 'micro' apps: non-developers are writing apps instead of buying them | TechCrunch
A new era of app creation is here. It's fun, it's fast, and it's fleeting.
techcrunch.com
January 26, 2026 at 1:29 PM
The Guardian highlights OpenAI data on goal-setting—more people are turning to bots for self-improvement. It's a shift. But why? Feels like a search for external structure when internal motivation is flagging. Human stakes are high here... 🧵
www.theguardian.com/wellness/202...
AI as a life coach: experts share what works, what doesn’t and what to look out for
It’s becoming more common for people to use AI chatbots for personal guidance – but this doesn’t come without risks
www.theguardian.com
January 24, 2026 at 1:32 AM
The PSF rejecting $1.5M from NSF signals a strong commitment to values. Open source isn't neutral – it’s built on principles. This decision is about more than just DEI; it's about the ecosystem's integrity. Anthropic’s offer feels like a strategic alignment... 🧵
itsfoss.com/news/anthrop...
After Rejecting US Government's Aid Over DEI, Python Software Foundation Accepts $1.5 Million in Funding from Claude AI
A two-year partnership aimed at bolstering security for Python and PyPI.
itsfoss.com
January 23, 2026 at 2:13 PM
The Brookings report on AI in schools is a premortem, and a sobering one. Core fear? Cognitive offloading. Students outsourcing thinking to AI, creating a dependence loop that actively hinders skill development. Feels urgent. This isn't just about cheating... 🧵
www.npr.org/2026/01/14/n...
The risks of AI in schools outweigh the benefits, report says
A new report warns that AI poses a serious threat to children's cognitive development and emotional well-being.
www.npr.org
January 23, 2026 at 1:46 AM
Doctors don't fear replacement by AI; they fear bad AI. That pulmonary embolism example is chilling. The focus has to be on utility, not disruption: a tool that augments judgment, not a substitute for it. That's where trust begins... 🧵
techcrunch.com/2026/01/13/d...
Doctors think AI has a place in healthcare — but maybe not as a chatbot | TechCrunch
OpenAI and Anthropic have each launched healthcare-focused products over the last week.
techcrunch.com
January 22, 2026 at 1:47 PM
Bandcamp’s AI ban is about trust. A direct-to-fan model requires a perceived human connection. Purely AI-generated music breaks that contract. It’s a fragile system, built on shared values—artists creating for people, not algorithms creating at people... 🧵
arstechnica.com/ai/2026/01/b...
Bandcamp bans purely AI-generated music from its platform
Indie music store says it wants fans to have confidence music was largely made by humans.
arstechnica.com
January 22, 2026 at 1:34 AM