Daniel Arteaga
@dnlrtg.bsky.social
Physicist. Audio and deep learning research at Dolby Labs. Physics, audio, AI, science, technology and society.

Personal account @contraidees.bsky.social
The question isn't whether AI doom is likely.

It's whether the expected harm is significant enough to act on.

Given the math? The answer is clearly yes.

We don't need certainty to justify precaution. We need responsible risk assessment.

(7/7)
November 26, 2025 at 6:05 PM
This same risk framework (expected value of harm) applies to non-catastrophic AI risks too:

- Algorithmic bias
- Economic displacement
- Privacy violations
- Misinformation at scale

It's the product of probability and harm that matters.

(6/7)

www.ibm.com/think/insigh...
10 AI dangers and risks and how to manage them | IBM
A closer look at 10 dangers of artificial intelligence and actionable risk management strategies to consider today.
www.ibm.com
November 26, 2025 at 6:05 PM
Most AI researchers agree existential risk from AI has LOW probability.

But low ≠ zero.

And when we're talking about existential outcomes, non-zero probabilities require:

- Serious research
- Robust safety measures
- Contingency planning

(5/7)

en.wikipedia.org/wiki/Safety_...
Safety engineering - Wikipedia
en.wikipedia.org
November 26, 2025 at 6:05 PM
We have precedent for this thinking.

Before launching the Large Hadron Collider, CERN seriously studied scenarios like micro black holes destroying Earth.

Did physicists think it would happen? No. But the potential harm was so large, they HAD to investigate. (4/7)

en.wikipedia.org/wiki/Strange...
Strangelet - Wikipedia
en.wikipedia.org
November 26, 2025 at 6:05 PM
If potential harm approaches infinity, only a truly negligible probability makes the overall risk acceptable.

Low probability ≠ no risk when the stakes are existential.

(3/7)

en.wikipedia.org/wiki/Risk_as...
Risk assessment - Wikipedia
en.wikipedia.org
November 26, 2025 at 6:05 PM
What matters is: Probability × Magnitude of Harm = Risk

When harm could be civilization-ending, even tiny probabilities demand serious attention.
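
To make that concrete, here's a minimal sketch of the expected-harm arithmetic (all numbers invented for illustration, not estimates):

```python
# Risk = probability x magnitude of harm. All numbers are hypothetical.
risks = {
    # name: (probability, harm in lives lost) -- invented for illustration
    "frequent moderate accident": (1e-1, 1e3),
    "rare industrial disaster":   (1e-4, 1e6),
    "existential AI catastrophe": (1e-6, 8e9),  # roughly everyone alive
}

for name, (p, harm) in risks.items():
    expected = p * harm  # expected lives lost
    print(f"{name:28s} p={p:.0e}  harm={harm:.0e}  expected={expected:12,.0f}")
```

Even with a probability four orders of magnitude below the mundane risks, the existential scenario dominates the expected harm.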

(2/7)

en.wikipedia.org/wiki/Risk_ma...
Risk matrix - Wikipedia
en.wikipedia.org
November 26, 2025 at 6:05 PM
Am I missing something?
October 27, 2025 at 5:19 PM
Yet in AI research (which is essentially statistical modeling) we routinely abandon these basic practices. The irony is striking.
October 27, 2025 at 5:19 PM
In other scientific fields (natural and social sciences), proper statistical analysis is fundamental. You simply cannot publish without it.
October 27, 2025 at 5:19 PM
There's also the problem that metrics often don't correlate with perception. A 0.1 dB SDR improvement might be perceptually meaningless. But that issue has been discussed more often than the lack of statistical rigor.
October 27, 2025 at 5:19 PM
❌ Claims of "superior performance" based on point estimates alone

Example: Paper A reports 15.21 dB, Paper B reports 15.01 dB. Is this difference meaningful or just noise? Do those decimal places have any meaning? Usually impossible to tell from the paper.
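
What a check could look like: with per-item scores (which papers rarely release), even a simple paired bootstrap settles the question. A minimal sketch, with simulated data standing in for the real per-track SDRs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for per-track SDR scores (dB) of two systems on the
# same 50-track test set; a real analysis would use the actual evaluations.
sdr_a = rng.normal(15.21, 1.5, size=50)
sdr_b = rng.normal(15.01, 1.5, size=50)

diff = sdr_a - sdr_b  # paired, track-by-track differences
boot = rng.choice(diff, size=(10_000, diff.size), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% bootstrap CI of the mean diff

print(f"mean difference: {diff.mean():+.2f} dB, 95% CI [{lo:+.2f}, {hi:+.2f}]")
# If the interval straddles 0, the 0.2 dB headline gap is noise.
```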
October 27, 2025 at 5:19 PM
❌ Values without error bars/confidence intervals
❌ Standard deviations sometimes quoted, but no uncertainty estimates of the means (standard errors)
❌ No significance testing whatsoever
❌ No effect size analysis
❌ No exploratory analysis beyond the mean
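
For contrast, here's what the minimum could look like: a mean with a proper uncertainty estimate, distinguishing the spread of individual results (standard deviation) from the uncertainty of the mean (standard error). Hypothetical scores for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-file metric values (e.g., SDR in dB) for one system.
scores = np.array([14.8, 15.6, 15.1, 14.9, 15.4, 15.3, 14.7, 15.5])

n = scores.size
mean = scores.mean()
sd = scores.std(ddof=1)        # spread across individual files
sem = sd / np.sqrt(n)          # uncertainty of the mean -- the error bar
t_crit = stats.t.ppf(0.975, df=n - 1)

print(f"mean = {mean:.2f} +/- {t_crit * sem:.2f} dB (95% CI); sd = {sd:.2f} dB")
```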
October 27, 2025 at 5:19 PM
We're not even applying methods from first-year undergraduate physics—like reporting results with error bars. The problems I regularly see would make any physics professor cringe.
October 27, 2025 at 5:19 PM
This work was the result of Silvia Arellano's internship with us at Dolby Barcelona.

Come explore the demo here:
🔗 silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136

Feedback & questions welcome!
July 18, 2025 at 8:13 AM
We explore 4 DAC-based models:
1️⃣ AR w/ cross-attention
2️⃣ AR w/ classifier guidance
3️⃣ MaskGIT w/ adaptive layer norm
4️⃣ Flow matching

The MaskGIT model achieves the best subjective quality (avg. MUSHRA score of 70), beating state-of-the-art baselines.
July 18, 2025 at 8:13 AM
Instead of simulating room geometry, we train four different generative models to produce RIRs conditioned on acoustic attributes (T30, T15, EDT, D50, C80, source-receiver distance).
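
For readers unfamiliar with these attributes: they are standard room-acoustic descriptors computable directly from an RIR. As a hedged illustration (the standard textbook method, not necessarily the exact definitions used in the paper), here is how T30 can be estimated via Schroeder backward integration:

```python
import numpy as np

def t30_from_rir(h, fs):
    """Estimate T30 (s) from a room impulse response via Schroeder integration."""
    # Energy decay curve: backward-integrated squared impulse response, in dB.
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10 * np.log10(edc / edc[0])

    # Fit a line to the decay between -5 dB and -35 dB ...
    t = np.arange(len(h)) / fs
    mask = (edc_db <= -5) & (edc_db >= -35)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)

    # ... and extrapolate to a full 60 dB decay.
    return -60.0 / slope

# Hypothetical synthetic RIR: exponentially decaying noise with T60 ~ 0.5 s.
fs = 16_000
t = np.arange(fs) / fs
h = np.random.default_rng(0).normal(size=fs) * np.exp(-3 * np.log(10) * t / 0.5)
print(f"estimated T30: {t30_from_rir(h, fs):.2f} s")  # ~0.5
```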
July 18, 2025 at 8:09 AM