Travis LaCroix
@travislacroix.bsky.social
Dr // Asst. Prof // Philosopher @ Durham University (UK)
(I am also a human being)

Language Origins // AI Ethics // Autism
I will post more summaries of the results of our search later, but the full article / data summary / analysis can be found (open access!) here:

doi.org/10.1007/s112...
July 22, 2025 at 10:42 AM
Examining philosophical works mentioning autism across time, we found: (1) the number of articles has increased significantly in the last decade or two. (2) The rate of change is also trending upward in the last two decades. (3) The majority (> 50%) of the corpus was published in the last decade.
July 22, 2025 at 10:42 AM
Normalising by articles per issue, we found that the leading publishers are fairly specialist: Neuroethics (0.8958 articles/issue); Rev. Phil. Psyc. (0.8400); Phenomenology and Cog. Sci. (0.7937); PPP (0.6610); Mind & Lang. (0.5114); American Journal of Bioethics (0.4087); and Phil. Psych. (0.4051).
July 22, 2025 at 10:42 AM
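The normalisation above is just raw article counts divided by the number of issues searched per journal. A minimal sketch of that computation — note the counts below are hypothetical placeholders chosen to reproduce two of the reported ratios, not the paper's actual data:

```python
# Hypothetical raw counts (NOT the study's real data), used only to
# illustrate the articles-per-issue normalisation described above.
article_counts = {"Neuroethics": 43, "Rev. Phil. Psyc.": 42}
issue_counts = {"Neuroethics": 48, "Rev. Phil. Psyc.": 50}

# Normalise: articles mentioning autism per journal issue.
normalised = {
    journal: round(article_counts[journal] / issue_counts[journal], 4)
    for journal in article_counts
}
print(normalised)
```

With these placeholder counts, the sketch yields 0.8958 for Neuroethics and 0.84 for Rev. Phil. Psyc., matching the rounding used in the post.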
We searched 67 "leading" philosophy journals for several relevant search terms and created a corpus using specific inclusion/exclusion criteria. The total corpus comprised 1,112 articles mentioning autism, published between 1911 and the end of 2023.
July 22, 2025 at 10:42 AM
Basically, the term "ethics" is laden with philosophical baggage: what counts as "ethical" is often subjective and context-dependent, varying across cultures, individuals, and situations. So there is no universally accepted set of moral principles that can be used to evaluate AI systems.
June 15, 2025 at 4:47 PM
To try to fix the hot mess that is the field of value alignment, I give a new description of the problem based on the principal-agent problem from economics.

The value alignment problem is a class of problems, which is instantiated when we delegate tasks to AI systems.
May 5, 2025 at 5:07 PM
I agree with @abeba.bsky.social here; but, for better or worse, I am trying to fix the hot mess that is the field of value alignment! 😬
May 5, 2025 at 4:51 PM
Classic from the archives.
March 8, 2025 at 5:36 PM