tal boger
@talboger.bsky.social
third-year phd student at jhu psych | perception + cognition

https://talboger.github.io/
can't believe the IRB approved this part — hope the children are ok!
October 14, 2025 at 9:44 PM
Finally, visual search. Previous work shows targets are easier to find when they differ from distractors in their real-world size. However, in our experiments with anagrams, this was not the case (even though we easily replicated this effect with ordinary, non-anagram images).
August 19, 2025 at 4:38 PM
Next, aesthetic preferences. People think objects that are large in the real world look better when displayed large on the screen, and vice versa for small objects. Our experiments show that this is true with anagrams too!
August 19, 2025 at 4:37 PM
First, the “real-world size Stroop effect”. If you have to say which of two images is larger (on the screen, not in real life), it’s easier if displayed size is congruent with real-world size. We found this to be true even when the images were perfect anagrams of one another!
August 19, 2025 at 4:36 PM
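A minimal sketch of how a congruency effect like this is typically quantified: compare mean response times on congruent vs. incongruent trials. The file name and columns below are hypothetical placeholders, not the study's actual data.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with a boolean
# "congruent" flag (displayed size matches real-world size), response
# time in ms, and a correctness code. File and column names are placeholders.
trials = pd.read_csv("stroop_trials.csv")

mean_rt = (
    trials[trials["correct"] == 1]     # analyze correct trials only
    .groupby("congruent")["rt_ms"]
    .mean()
)

# Congruency effect: incongruent minus congruent mean RT.
# A positive value means congruent displays were judged faster.
print(mean_rt.loc[False] - mean_rt.loc[True])
```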
We generated images using this technique (see examples). The two images in each pair differ in real-world size but are otherwise identical* in lower-level features, because they’re the same image down to the last pixel.

(*avg orientation, aspect ratio, etc., may still vary. ask me about this!)
August 19, 2025 at 4:35 PM
This challenge may seem insurmountable. But maybe it isn’t! To overcome it, we used a new technique from Geng et al. called “visual anagrams”, which allows you to generate images whose interpretations vary as a function of orientation.
August 19, 2025 at 4:34 PM
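A minimal sketch of the core idea behind the Geng et al. technique as described in their paper: at each denoising step, get the diffusion model's noise estimate for each view (here, identity and a 90° rotation) under that view's own prompt, map the estimates back to a common orientation, and average them before stepping. `predict_noise` and `scheduler_step` are placeholders for a pretrained diffusion model and its sampler, not the actual visual-anagrams code.

```python
import torch

def anagram_denoising_step(x_t, t, prompts, predict_noise, scheduler_step):
    """One reverse-diffusion step in the spirit of 'visual anagrams':
    denoise the same noisy image under two views, each conditioned on its
    own prompt, and average the resulting noise estimates."""
    views = [
        lambda img: img,                                 # e.g., "a rabbit"
        lambda img: torch.rot90(img, 1, dims=(-2, -1)),  # e.g., "an elephant"
    ]
    inverse_views = [
        lambda img: img,
        lambda img: torch.rot90(img, -1, dims=(-2, -1)),
    ]

    noise_estimates = []
    for view, inverse, prompt in zip(views, inverse_views, prompts):
        eps = predict_noise(view(x_t), t, prompt)  # noise prediction for this view
        noise_estimates.append(inverse(eps))       # map back to canonical orientation

    eps_combined = torch.stack(noise_estimates).mean(dim=0)
    return scheduler_step(x_t, t, eps_combined)    # ordinary reverse-diffusion update
```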
Take real-world size. Tons of cool work shows that it’s encoded automatically, drives aesthetic judgments, and organizes neural responses. But there’s an interpretive challenge: Real-world size covaries with other features that may cause these effects independently.
August 19, 2025 at 4:33 PM
On the left is a rabbit. On the right is an elephant. But guess what: They’re the *same image*, rotated 90°!

In @currentbiology.bsky.social, @chazfirestone.bsky.social & I show how these images—known as “visual anagrams”—can help solve a longstanding problem in cognitive science. bit.ly/45BVnCZ
August 19, 2025 at 4:32 PM
Finally, we sought a computational account of stylistic similarity. We found that an object recognition model (with no explicit knowledge of style) successfully predicts human judgments of similarity across styles.
May 14, 2025 at 4:45 PM
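A minimal sketch of how such a computational account can be tested: embed each image with a pretrained object-recognition network and compare the embeddings. ResNet-50 and cosine similarity are illustrative choices here, not necessarily the model or metric used in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained object-recognition network; drop the classification head so the
# model outputs penultimate-layer features.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

def model_similarity(path_a, path_b):
    a, b = embed(path_a), embed(path_b)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Correlating model_similarity over many image pairs with mean human ratings
# asks how well the network predicts perceived similarity across styles.
```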
But these cases involve extracting style only to discard it, in a sense. Might we also use style to generate new representations? Suppose you’re shown the fork and spoon from a styled cutlery set; can you imagine the knife? We used a priming task to find exactly these representations.
May 14, 2025 at 4:44 PM
Next, we considered cases of ‘discounting’ in vision, like when we discount lighting conditions to discern an object’s color, or when we discount a cloth to discern the object beneath it. We found similar effects for style: Vision discounts style to discriminate images.
May 14, 2025 at 4:44 PM
First, we were inspired by ‘font tuning’, wherein the mind adapts to typefaces in ways that aid text comprehension. Might similar effects arise for style? In other words, might perception tune to the style of images in ways that aid scene comprehension? We show: Yes!
May 14, 2025 at 4:44 PM
So, we thought, let’s study style perception like we study those processes! We adapted a number of paradigms used in those literatures to study how the mind represents style.
May 14, 2025 at 4:43 PM
Style is the subject of considerable humanistic study, from art history to sociology to political theory. But a scientific account of style perception has remained elusive.

Using style transfer algorithms, we generated stimuli in various styles to use in psychophysics studies.
May 14, 2025 at 4:43 PM
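A minimal sketch of one widely used recipe for this, Gatys et al.'s optimization-based style transfer; the thread doesn't say which algorithm produced the stimuli, and the layer indices and weights below are illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Feature extractor: a pretrained VGG-19, frozen.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1 (a common choice)
CONTENT_LAYER = 21                  # conv4_2

def features(x):
    """Run x through VGG, collecting style and content activations."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(feat):
    """Gram matrix of a (1, C, H, W) feature map: the style statistics."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content_img, style_img, steps=300, style_weight=1e6):
    """content_img / style_img: preprocessed (1, 3, H, W) tensors."""
    style_grams = [gram(f) for f in features(style_img)[0]]
    content_feat = features(content_img)[1]
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s_feats, c_feat = features(target)
        loss = F.mse_loss(c_feat, content_feat)
        loss = loss + style_weight * sum(
            F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams)
        )
        loss.backward()
        opt.step()
    return target.detach()
```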
Looking at Van Gogh’s Starry Night, we see not only its content (a French village beneath a night sky) but also its *style*. How does that work? How do we see style?

In @nathumbehav.nature.com, @chazfirestone.bsky.social & I take an experimental approach to style perception! osf.io/preprints/ps...
May 14, 2025 at 4:42 PM
We found large filling-in effects almost everywhere (not just due to inattention or an object-presence bias) — including when we disrupted cues previously proposed to create event completion. But abolishing object persistence made event completion effects disappear entirely.
March 4, 2025 at 6:15 PM
This allowed us to systematically disrupt various cues that have been proposed to create event completion effects. These included causality, continuity, familiarity, physical coherence, event coherence, and object persistence.
March 4, 2025 at 6:14 PM
We rendered animations in Blender (like the one you just saw) with an object either present or absent in each half. Participants watched these animations and simply made a forced-choice judgment about whether the ball was present or absent in a given half.
March 4, 2025 at 6:14 PM
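A minimal sketch of how present/absent versions of an animation can be set up with Blender's Python API by keyframing an object's render visibility; this is not the actual stimulus code, and the frame counts and output path are made up.

```python
import bpy

# Add a ball and grab a reference to it.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(0, 0, 1))
ball = bpy.context.active_object

scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 120    # e.g., two 60-frame halves

# Ball visible in the first half, hidden from renders in the second half.
ball.hide_render = False
ball.keyframe_insert(data_path="hide_render", frame=1)
ball.hide_render = True
ball.keyframe_insert(data_path="hide_render", frame=61)

# Render the full animation (hypothetical output path).
scene.render.filepath = "//ball_first_half_only_"
bpy.ops.render.render(animation=True)
```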
Watch this video.

Do you remember seeing a ball in the second half of the video? Up to 37% of our participants reported seeing a ball, even though it wasn’t there. Why?

In a new paper in press @ Cognition, Brent Strickland and I ask what causes event completion. osf.io/preprints/ps...
March 4, 2025 at 6:13 PM
But for random behavior to be truly trait-like, it should also be stable over *time*. So, in Experiment 3, we tested the same participants from Experiment 2 one full year later (!). We found remarkably stable behavior across these two timepoints.
February 5, 2025 at 5:23 PM
This provided initial evidence for stable random behavior across tasks. However, numbers and one-dimensional locations share a representational format (i.e., a mental number line). In Experiment 2, we extended this to two-dimensional random locations and found the same pattern of results.
February 5, 2025 at 5:22 PM
In Experiment 1, we gave participants a number-generation and a one-dimensional location-generation task. Subjects’ sequences shared behavioral signatures across the two tasks; the model parameters were correlated across tasks; and our model accurately predicted choice-level behavior.
February 5, 2025 at 5:21 PM
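A toy illustration of the cross-task logic, not the paper's actual signatures or model: compute one simple signature of random generation (the rate of immediate repetitions, which assumes responses come from a discrete set of options) for each subject in each task, then correlate it across tasks.

```python
import numpy as np
from scipy.stats import pearsonr

def repeat_rate(seq):
    """Fraction of responses that immediately repeat the previous one."""
    seq = np.asarray(seq)
    return float(np.mean(seq[1:] == seq[:-1]))

def cross_task_correlation(number_seqs, location_seqs):
    """number_seqs / location_seqs: hypothetical dicts mapping subject IDs
    to that subject's generated sequence in each task."""
    subjects = sorted(set(number_seqs) & set(location_seqs))
    numbers = [repeat_rate(number_seqs[s]) for s in subjects]
    locations = [repeat_rate(location_seqs[s]) for s in subjects]
    return pearsonr(numbers, locations)   # correlation (r, p) across subjects
```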
To approach this, we gave participants random generation tasks of different kinds (e.g., a number-generation and a two-dimensional location-generation task).
February 5, 2025 at 5:20 PM
In Experiments 4 and 5, we say: yes! We adapt two tasks used to study implicit effects of visual complexity: a visual search task and a visual working memory task. In both, we find that mechanistic complexity drives performance more than visual complexity.
February 4, 2025 at 4:20 PM
First, we show that mechanistic complexity is more predictive of general intuitions of object complexity than visual complexity. This holds true both for numerical ratings and for forced-choice judgments.
February 4, 2025 at 4:19 PM