andymckilliam.bsky.social
@andymckilliam.bsky.social
Postdoctoral researcher at National Taiwan University (previously at Monash's M3CS). Interested in philosophy of science, consciousness, measurement, misinformation, the psychology of decision-making, and metacognitive training for clearer thinking.
Cool!
September 5, 2025 at 10:42 AM
Yeah, cool! What I’m still not sure about is why AI would be globally confident. If they’re relying on unconscious imagery, why should the task seem easy?
September 5, 2025 at 3:18 AM
Awesome! Looking forward to reading about it!
May 16, 2025 at 5:44 AM
Cool. But I share @annakeesey.bsky.social's worry. Folks don't trust the AI because it's AI. They trust it (in part) because it's not a judgmental jerk.
It would be interesting to manipulate the 'judginess' of the LLM and see whether/how much that impacts the effect.
May 15, 2025 at 9:00 PM
The first (the hard problem) is well-known. This paper draws more attention to the second. In consciousness science, the two main methods for progress—theory testing and epistemic iteration—face serious obstacles.

Read more here:
📖 link.springer.com/article/10.1...
April 1, 2025 at 9:01 PM
These differences reveal two reasons why explaining consciousness may be 'hard'. First, a theory that enables prediction and control may not yield understanding—consciousness may still seem mysterious. Second, limited epistemic access may restrict our ability to predict and control consciousness.
April 1, 2025 at 9:01 PM
Philosophers of mind and philosophers of science have quite different views on the relationship between explanation and understanding. While philosophers of mind prioritise logical entailment and the sense of understanding, philosophers of science focus on prediction and control.
April 1, 2025 at 9:01 PM
Absolutely well-deserved
November 20, 2024 at 10:52 PM