David Nowak
@davidnowak.me
63 followers 41 following 1.3K posts
I bridge technical expertise with human understanding. Built solutions for millions. I help organizations question assumptions before costly mistakes. Connecting dots, creating impact. 🌐 davidnowak.me 🗞️ strategicsignals.business
Pinned
davidnowak.me
Sign up for Strategic Signals - Free Weekly Intelligence Briefing for Small Business Leaders - strategicsignals.business
davidnowak.me
The question isn't whether to build these tools—that ship sailed. It's about who gets to shape the guardrails, how we rebuild verification systems, and whether we can preserve shared reality. The conversation needs more voices, especially those affected most.
davidnowak.me
Regulators are scrambling. The G20's Financial Stability Board admits an 'urgent need to raise their game.' But policy moves slower than code. By the time frameworks exist, the next generation of tools will be here, more sophisticated and harder to detect.
davidnowak.me
What gets lost in the hype cycle? Trust. The 'liar's dividend' means bad actors dismiss real evidence while synthetic content floods feeds. We're not headed toward a single nightmare scenario. We're walking into baseline erosion of what we can believe.
davidnowak.me
Financial fraud attempts using deepfakes jumped 2,137%. Educators worry about critical thinking erosion even as they see potential. Journalism faces what one researcher called 'the end of visuals as proof.' Each industry sees a different fracture line.
davidnowak.me
Here's the systems problem: platforms optimizing for viral adoption can't effectively police their own outputs. Even experts who study fabricated content now struggle to spot fakes. The tech for detection lags behind generation by design.
davidnowak.me
Hollywood went to war this week. All three major agencies called Sora 'exploitation, not innovation.' Disney opted out. The MPA demanded action. This isn't resistance to change—it's about who controls the tools that remake reality itself.
davidnowak.me
OpenAI's Sora hit 1M downloads in 5 days. But here's what matters: we're watching deepfakes get rebranded as entertainment. Reality Defender bypassed its safeguards in 24 hours. What does it mean when seeing is no longer believing? 🧵
www.npr.org/2025/10/10/n...
Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet
OpenAI's new hit app has unleashed a new wave of AI slop across the internet. But what happens when there are no rules over hyper-realistic synthetic videos?
www.npr.org
davidnowak.me
The companies making real money aren't chasing general AI—they're solving one thing exceptionally well within clear constraints. That gap between impressive benchmarks and reliable delivery? That's where competitive advantage actually lives:
davidnowak.me/what-18-bill...
davidnowak.me
Mention cats in a math problem and error rates jump 7x in state-of-the-art models. Challenge Claude on a correct answer and it apologizes 98% of the time anyway. We're betting operations on systems optimized for sterile demos, not operational reality.
davidnowak.me
This is techno-legal solutionism at its worst: legislators solving complex social problems with mandated technical fixes that ignore security realities. The question isn't whether this protects kids. It's whether we're willing to sacrifice everyone's privacy to try.
davidnowak.me
Smaller platforms like Bluesky and Dreamwidth are already blocking entire states rather than comply. Meanwhile, Big Tech can absorb the costs—meaning these laws consolidate power in the hands of the giants who can afford compliance infrastructure.
davidnowak.me
The human cost? Your face, birthdate, address, and ID—linked to your browsing history. Perfect for extortion, identity theft, or worse. When these databases are breached (not if, but when), there's no going back. You can't change your face.
davidnowak.me
Tech companies, privacy experts, and civil liberties groups are united in warning this is dangerous. The EFF calls it a 'death sentence for smaller platforms' and a surveillance nightmare. Over 80 digital identity leaders are sounding alarms.
davidnowak.me
Politicians across 19 US states, the UK, France, and Australia are mandating age verification with government IDs—for social media, app stores, adult content. They're building honeypots of identity documents that hackers dream about.
davidnowak.me
Here's what happened: a third-party customer service vendor got compromised. Attackers had access for 58 hours. Now 70,000 driver's licenses and passports are in criminal hands. None of it would have been there to steal if Discord hadn't been required to store IDs in the first place.
davidnowak.me
Discord just admitted hackers stole 70,000 government IDs. This isn't a one-off—it's the inevitable result when lawmakers force platforms to collect documents they can't secure... 🧵
arstechnica.com/security/202...
Discord says hackers stole government IDs of 70,000 users
As more sites require IDs for user age verification, expect more such breaches to come.
arstechnica.com
davidnowak.me
Bottom line: we're watching capability emerge through falsifiable predictions on future events that can't exist in training data. ForecastBench updates biweekly. This is empirical progress we can track. Not hype, not speculation—measurable advancement. That deserves serious attention.
davidnowak.me
For business intelligence work, the practical implications are immediate. AI forecasting already outperforms crowd wisdom and approaches expert-level accuracy on real events. The strategic question isn't "if" we integrate these tools into decision frameworks—it's how we do it responsibly.
davidnowak.me
What strikes me most: this isn't about replacement, it's about convergence. Superforecasters might start using these tools to augment their judgment. The models learn from superforecaster methodologies. The boundary between human and machine forecasting blurs faster than we expected.
davidnowak.me
Nate Silver predicted 10-15 years to parity. Tyler Cowen said 1-2 years. The data vindicates Cowen. GPT-4 to GPT-4.5 closed more ground than what remains between current AI and superforecasters. The gap between model generations now exceeds the gap to human expertise. That's striking.
davidnowak.me
Progress follows a measurable curve. From GPT-4 (March 2023, 0.131 Brier) to GPT-4.5 (February 2025, 0.101), we've seen consistent improvement. Linear extrapolation puts AI-human parity around late 2026. Could be the last mile slows us down. Or exponential gains surprise us. Watch the data.
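For anyone who wants to sanity-check that extrapolation, here's a minimal back-of-the-envelope sketch using only the figures quoted above (GPT-4 at 0.131 in March 2023, GPT-4.5 at 0.101 in February 2025, superforecasters at 0.081) and assuming a constant linear trend between just those two points. The exact anchor dates and the two-point fit are my simplifications, which is why it lands a few months earlier than the fuller ForecastBench trend:

```python
from datetime import date, timedelta

# Figures quoted in the post. Brier score = mean squared error of
# probability forecasts, so lower is better.
gpt4_score,  gpt4_date  = 0.131, date(2023, 3, 1)   # GPT-4, March 2023
gpt45_score, gpt45_date = 0.101, date(2025, 2, 1)   # GPT-4.5, February 2025
superforecaster_score   = 0.081                     # human expert baseline

# Improvement rate under a naive constant-trend assumption.
years_elapsed = (gpt45_date - gpt4_date).days / 365.25
rate_per_year = (gpt4_score - gpt45_score) / years_elapsed   # ~0.016 Brier/yr

# Remaining gap to superforecasters and the implied parity date.
remaining_gap   = gpt45_score - superforecaster_score        # ~0.020
years_to_parity = remaining_gap / rate_per_year              # ~1.3 years
parity_date     = gpt45_date + timedelta(days=years_to_parity * 365.25)

print(f"improvement rate:  {rate_per_year:.3f} Brier points per year")
print(f"remaining gap:     {remaining_gap:.3f}")
print(f"naive parity date: {parity_date}")
```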
davidnowak.me
The inflection point already happened: AI crossed the public baseline. A year ago, everyday forecasters ranked #2 on the leaderboard—right behind experts, ahead of all AIs. Today? They're #22. Multiple LLMs surpassed them. The question shifted from "can it?" to "when does it match experts?"
davidnowak.me
AI can now forecast the future better than most humans—but not experts yet. GPT-4.5 scores a 0.101 Brier vs superforecasters' 0.081 (lower is better). That's a 25% gap closing at 0.016 Brier points annually. We're watching measurable capability emergence in real time... 🧵
forecastingresearch.substack.com/p/ai-llm-for...
How well can large language models predict the future?
We’ve just released an updated version of ForecastBench, our LLM forecasting benchmark. Here’s what the new results reveal about the accuracy of state-of-the-art models.
forecastingresearch.substack.com
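For anyone unfamiliar with the metric in the post above: a Brier score is just the mean squared error between predicted probabilities and what actually happened, so lower is better and always guessing 50% scores 0.25. A quick sketch with made-up forecasts (not ForecastBench data), plus the relative-gap arithmetic behind the "25%" figure:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% yields 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: three yes/no questions with probabilistic forecasts.
forecasts = [0.9, 0.2, 0.6]   # predicted probability the event happens
outcomes  = [1,   0,   0]     # what actually happened
print(brier_score(forecasts, outcomes))   # (0.01 + 0.04 + 0.36) / 3 ≈ 0.137

# Relative gap quoted in the thread: GPT-4.5 (0.101) vs superforecasters (0.081).
print((0.101 - 0.081) / 0.081)            # ≈ 0.247, i.e. roughly a 25% gap
```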
davidnowak.me
I remember the public's reaction to Google Glass in public places being less than stellar.