William Ngiam | 严祥全
@williamngiam.github.io
Cognitive Neuroscientist at Adelaide University | Perception, Attention, Learning and Memory Lab (https://palm-lab.github.io) | Open Practices Editor at Attention, Perception, & Psychophysics | ReproducibiliTea | http://williamngiam.github.io
The data from the task is compelling because there is no "correct" answer; it's a free choice! Super-recognisers are grouping faces based on identity rather than valence or gaze angle. This ends up interfering with their expression-matching in an independent face task (they are worse when faces mismatch).
November 24, 2025 at 4:37 AM
Tim Cottier @tvcottier.bsky.social introduces a novel face triad task to explore whether super-recognisers decipher the identity, valence or gaze of faces. When asked which face is distinct out of the three, super-recognisers favour identity information more than controls do! #ASPP2025
November 24, 2025 at 4:37 AM
Wawu means spirit – the life force connecting people, Country and story; it conveys that knowledge and care are intertwined, and that knowledge is relationship and responsibility.
November 24, 2025 at 3:21 AM
The second keynote for the conference is Cammi Murrup-Stewart on "Connection as Knowledge: Decolonising Mind through Wawu". A good reminder of our responsibility as knowledge leaders is sure to come.
November 24, 2025 at 3:21 AM
After seeing @micahgoldwater.bsky.social speak on memes as forms of argument-making, I've decided to livetweet what I can of the Australasian Society of Philosophy and Psychology conference #ASPP2025. One finding from the talk: conservatives can see the effectiveness of liberal memes, whereas the reverse is not observed.
November 24, 2025 at 1:08 AM
For the Feyerabend anarchists:
November 9, 2025 at 4:39 AM
me: finding a recent and relevant Chaz phil vis paper that I had missed through zohran-posting
November 9, 2025 at 3:37 AM
As per Schurgin et al. (2020), we find that the cognitive representation underlying all tasks does not match the physical stimulus space. But we also find that the representation differs between similarity comparison and both reproduction tasks; similarity is not the basis for working memory. /7
November 3, 2025 at 7:34 AM
We conducted a follow-up experiment comparing static discs to moving discs; we wanted to make sure the canonical CDA set-size effect replicated in our paradigm, where we cued targets first. It does when the discs are static, and we also replicated the lack of a color load effect for moving discs!
September 18, 2025 at 2:56 PM
A surprisingly similar result! While the CDA amplitude was slightly elevated overall, there was no clear effect of working memory load. That is, the CDA appeared to be largely driven by the attentional tracking load – one or two discs.
September 18, 2025 at 2:56 PM
Our result? When subjects were only required to complete the MOT task (and could ignore the colors), the CDA was as expected. We observed a sustained difference in the CDA amplitude when tracking one disc compared to tracking two discs. But what about tracking and remembering?
September 18, 2025 at 2:56 PM
We combined MOT with the whole-report WM task – subjects had to track one or two moving discs, while also remembering the colors of those discs. There were either one or two colors per target disc, so either two or four colors to remember in total. We had subjects do MOT-only as well to compare.
September 18, 2025 at 2:56 PM
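(For readers outside EEG work: the CDA in this thread is the contralateral-minus-ipsilateral difference wave at posterior electrodes, measured over a sustained delay or tracking window. A minimal sketch of that computation in Python – the electrode pair, window, and simulated data are my assumptions for illustration, not the exact pipeline behind these posts.)

```python
import numpy as np

# Illustrative CDA computation: the electrode pair, epoch window, and
# simulated data below are assumptions, not the study's actual pipeline.
rng = np.random.default_rng(0)
n_trials, n_times = 100, 500
# Baseline-corrected voltages (µV) at one posterior electrode per hemisphere.
po7 = rng.normal(size=(n_trials, n_times))  # left hemisphere
po8 = rng.normal(size=(n_trials, n_times))  # right hemisphere
cued_left = rng.random(n_trials) < 0.5      # cued hemifield per trial

# Contralateral = the electrode opposite the cued hemifield;
# ipsilateral = the electrode on the same side as the cue.
contra = np.where(cued_left[:, None], po8, po7)
ipsi = np.where(cued_left[:, None], po7, po8)

# The CDA is the trial-averaged contralateral-minus-ipsilateral
# difference wave; its amplitude is the mean over a sustained window
# (here, a hypothetical tracking/retention interval).
cda_wave = (contra - ipsi).mean(axis=0)
retention = slice(150, 450)  # assumed window, in samples
print(f"Mean CDA amplitude: {cda_wave[retention].mean():.3f} µV")
```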
I had a fantastic experience presenting at Raising the Bar last night! It was nerve-wracking to speak about research with no slides, but it meant I really connected with the audience on the current discourse around the perceived impact of digital technology on attention spans and brain function.
August 6, 2025 at 11:46 AM
Do tracking and remembering the colours of multiple moving objects share a common mechanism? How might the encoded information be represented in the mind and brain? See my talk tomorrow morning to hear about a couple of EEG studies looking at this (see task below)!
@expsyanz.bsky.social #EPC2025
June 18, 2025 at 1:04 PM
In the first iteration of the session, @styrkowiec.bsky.social and I shared the very beginnings of an experiment idea testing how information would be organised in working memory when stimuli were moving and changing at the same time. We got a useful signal – that others were interested in the idea!
May 1, 2025 at 12:11 AM
I created this reading list on theory in psychology a while back, so it probably needs an update! Would love any recommendations for papers to include – maybe I can turn this into a syllabus of sorts.

PDF of this reading list here: williamngiam.github.io/reading_list...
April 24, 2025 at 1:23 PM
Embarrassing as a vision scientist to upload an SVG of the logo, only for it to come out blurry. Hopefully this one is a bit nicer to look at.
March 14, 2025 at 12:21 AM
For the #visionscience folks, the pre-data poster session is back at VSS for its third year. It only makes sense to get feedback at the conference when you can actually action it! I think it is a great opportunity for ECRs to be involved in VSS as well!
March 14, 2025 at 12:15 AM
I feel like this will give ECRs (both postdoctoral researchers and early-career faculty) a great deal of insecurity, bouncing around from academic job to research job and back again, from project to project. Would like to see more explanation of how the following is achieved:
February 27, 2025 at 12:42 PM
I don't think this is "radical" or in opposition to institutional structures. Reform won't come in one fell swoop – I hope that with enough individual and local community action, there will be a strong resonance that topples existing structures or forces them to evolve to remain standing.
January 23, 2025 at 11:24 PM
A new update to quokka – my open-source, in-browser, free-to-use qualitative coding ShinyApp! In the new 'sorting' tab, users who have finished analysing can organise their codes into themes (or subthemes or categories, depending on the approach) with a simple drag-and-drop interface. #CAQDAS
January 18, 2025 at 5:04 AM
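(quokka itself is built as an R ShinyApp, so this is not its actual code – but the sorting tab's core operation reduces to moving a code out of the unsorted pool and into a theme. A hypothetical Python sketch of that data model, with names of my own invention:)

```python
# Hypothetical sketch of a sorting-tab data model (quokka is an R
# ShinyApp; these names and structures are mine, not quokka's).
from dataclasses import dataclass, field

@dataclass
class Theme:
    name: str
    codes: list[str] = field(default_factory=list)

unsorted = ["barriers to access", "sense of community", "peer support"]
themes = {"Belonging": Theme("Belonging")}

def drop_code(code: str, theme_name: str) -> None:
    """What a drag-and-drop event boils down to: remove the code from
    the unsorted pool and append it to the target theme."""
    if code in unsorted and theme_name in themes:
        unsorted.remove(code)
        themes[theme_name].codes.append(code)

drop_code("sense of community", "Belonging")
print(themes["Belonging"].codes)  # ['sense of community']
```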
Actually Google, I did not mean that.
October 7, 2024 at 4:17 AM
Everything is spot on, even down to the quotes. And if anyone wants to know how I think (some of the) Open Science principles can transform psychological science for the better, I will happily talk for hours on end.

Check out my Veggie ID! You can create yours at sophie006liu.github.io/vegetal/
October 7, 2024 at 3:43 AM
I thoroughly enjoyed "Why We Remember" by @charan-neuro.bsky.social – it was fun to follow the weaving of personal anecdotes with our current understanding of (long-term) memory. (The Australian book cover changed the byline and lacks the distinctive brain-shaped cloud though...)
September 28, 2024 at 8:25 AM