Adrià Moret
adriamoret.bsky.social
Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on Animal Ethics, Well-being, Consciousness, AI welfare and AI safety
See publications at: https://sites.google.com/view/adriamoret
2/ We show that non-human animals, despite making up 99.9% of sentient beings, are almost entirely excluded from AI alignment efforts and frameworks.

3/ Specifically, current alignment techniques (RLHF, Constitutional AI, deliberative alignment) explicitly focus on preventing harm to humans, and even to property and the environment, but extend no concern to animal welfare in their normative instructions (ModelSpec, the Constitution).

4/ This omission creates significant near-term risks: LLMs might entrench speciesist biases, AI-controlled vehicles might increase animal deaths, and AI used to manage animals in factory farms could optimize for efficiency, increasing and prolonging the harms those animals suffer.

5/ Long-term risks are even more concerning. If advanced AI systems lack basic consideration for animal welfare, they could lock in speciesist values for centuries, increasing the likelihood that animal suffering scales by orders of magnitude.

6/ Our solution: "Alignment with a Basic Level of Animal Welfare". AI systems should at least minimize harm to animals when achievable at low cost, without being required to prioritize animals over humans or to continuously engage in moralizing, preachy messaging about animal welfare.

7/ We propose practical implementation through both direct methods (using animal welfare science, bioacoustics, neurotechnology) and indirect methods (adding basic animal welfare principles to existing alignment documents, e.g. ModelSpec, the Constitution).

8/ The indirect approach is immediately implementable. AI companies could add simple principles like the following to the normative principles present in their alignment documents (e.g. ModelSpec, the Constitution):

9/ In the conclusion, we provide low-cost, realistic policy recommendations for AI companies and governments to ensure frontier AIs have some basic concern for the welfare of the vast majority of moral patients.

September 11, 2025 at 4:38 PM