Brent W. Roberts
@bwroberts.bsky.social
2.1K followers 1.5K following 490 posts
Respirating carbon-based life form. Pit of despair dweller. Bread maker. Sometimes personality psychologist at the University of Illinois at Urbana-Champaign
Reposted by Brent W. Roberts
ophastings.bsky.social
The GSS asked the same people about their childhood income rank three different times. 56% changed their answer, even though the thing being measured couldn't have changed! We dig into this in a new article at @socialindicators.bsky.social.



doi.org/10.1007/s112...

🧵👇 (1/5)
Growing up Different(ly than Last Time We Asked): Social Status and Changing Reports of Childhood Income Rank - Social Indicators Research
How we remember our past can be shaped by the realities of our present. This study examines how changes to present circumstances influence retrospective reports of family income rank at age 16. While retrospective survey data can be used to assess the long-term effects of childhood conditions, present-day circumstances may “anchor” memories, causing shifts in how individuals recall and report past experiences. Using panel data from the 2006–2014 General Social Surveys (8,602 observations from 2,883 individuals in the United States), we analyze how changes in objective and subjective indicators of current social status—income, financial satisfaction, and perceived income relative to others—are associated with changes in reports of childhood income rank, and how this varies by sex and race/ethnicity. Fixed-effects models reveal no significant association between changes in income and in childhood income rank. However, changes in subjective measures of social status show contrasting effects, as increases in current financial satisfaction are associated with decreases in childhood income rank, but increases in current perceived relative income are associated with increases in childhood income rank. We argue these opposing effects follow from theories of anchoring in recall bias. We further find these effects are stronger among males but are consistent across racial/ethnic groups. This demographic heterogeneity suggests that recall bias is not evenly distributed across the population and has important implications for how different groups perceive their own pasts. Our findings further highlight the malleability of retrospective perceptions and their sensitivity to current social conditions, offering methodological insights into survey reliability and recall bias.
doi.org
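A minimal sketch of the person fixed-effects setup the abstract describes, under assumed names: the file gss_panel.csv, the person_id column, and the variable names are placeholders I made up, not the authors' data or code. The idea is to demean within respondent so that only within-person change in the status measures predicts within-person change in reported childhood rank.

# Hedged sketch of a person fixed-effects model, in the spirit of the abstract above.
# All file, column, and variable names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("gss_panel.csv")  # hypothetical long format: one row per person-wave

# Fixed effects via entity demeaning: subtract each respondent's own mean,
# so estimates come only from within-person change across waves.
demeaned = panel.copy()
for col in ["childhood_rank", "income", "fin_satisfaction", "rel_income"]:
    demeaned[col] = panel[col] - panel.groupby("person_id")[col].transform("mean")

fe = smf.ols(
    "childhood_rank ~ income + fin_satisfaction + rel_income - 1",  # no intercept after demeaning
    data=demeaned,
).fit(cov_type="cluster", cov_kwds={"groups": panel["person_id"]})  # cluster SEs by respondent
print(fe.summary())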
Reposted by Brent W. Roberts
resprofnews.bsky.social
Research funders urged to drive culture shift on negative results.

Reform needed to improve trust in science, patient care and training of AI, advocates say.

www.researchprofessionalnews.com/rr-news-worl...
Reposted by Brent W. Roberts
elisekalo.bsky.social
In this behemoth effort led by @anhhtran.bsky.social, we reanalysed 11 experience-sampling datasets, and found limited evidence that context (intensity, controllability, and social features) meaningfully shaped everyday emotion regulation strategy use.
psyarxivbot.bsky.social
Context Matters, Doesn't It? The Role of Context in Everyday Emotion Regulation Strategy Use: https://osf.io/axzk6
bwroberts.bsky.social
The problem is that testing whether effects exist, and the theories that predict them, is boring and runs counter to the dopamine treadmill of new discoveries that drives our status system. Come to think of it, we were doing click-bait before TikTok even existed....
bwroberts.bsky.social
I mean, the fact that we have "theory" journals that require every new paper to burp up a new theoretical contribution reflects a perverse self-loathing of our own efforts. Really? Year in, year out, with every issue we've got something new? 3/
bwroberts.bsky.social
Having said that, I totally agree with the sentiment that our grubby obsession with "finding an effect" or cooking up a new theory is our Achilles heel. By definition, we can't be a cumulative science as our goal is to cast off whatever came before us. 2/
bwroberts.bsky.social
Having spent more and more time with other science guilds, I'd say there is no reason to single out psychology. Medicine, Kinesiology, Political Science, Anthropology, Sociology, the B-school types...oh, the B-school types--they all practice the p-value dark arts of discovery. 1/
bwroberts.bsky.social
And the "niceness" norms to not share bad news about someone's idea (and thus that person), and you will get the impulse to start up replication journals that will then not get much traffic. I hope I'm wrong. It is a lot of work to do this type of thing. 3/
bwroberts.bsky.social
And, prophetically, that journal folded due to lack of interest. I have a sinking feeling this effort will suffer the same fate. We still value the publication over the ideas contained therein. Combine that with the continued overvaluing of "finding something" (i.e., p<.05) 2/
bwroberts.bsky.social
It is sad and nice, isn't it? It is not the first. When we failed to replicate the Macbeth effect, we ended up publishing it in the "Journal of Articles in Support of the Null Hypothesis," the go-to 4th-tier journal at that time for failures to replicate: www.dropbox.com/scl/fi/rpgtj... 1/
www.dropbox.com
Reposted by Brent W. Roberts
aufdroeseler.bsky.social
ReplicationResearch.org is now open for submissions!

Submit replications and reproductions from many different fields, as well as conceptual contributions. With diamond OA, open and citable peer review reports, and reproducibility checks, we push the boundaries of open and fair publishing.
Reposted by Brent W. Roberts
impartialspectator.bsky.social
I'd be lying if I said I was surprised.
jrgptrs.bsky.social
New DP @i4replication.bsky.social: Meta-analysis on green nudges correcting for publication bias. "Behavioral interventions on households and individuals are unlikely to deliver material climate benefits." www.econstor.eu/bitstream/10...
Reposted by Brent W. Roberts
vferraretto.bsky.social
Entry into adulthood is being postponed, but when do young adults intend, and deem it ideal, to experience these events? 💭 A joint work w/ K. Schwanitz @utu.fi, @francescorampazzo.com & @agnese-vitali.bsky.social reveals mismatches btw ideals and reality ⬇️:

comparativepopulationstudies.de/index.php/CP...
bwroberts.bsky.social
Kudos for pushing this through to publication. Null findings are always interesting if the questions you ask are interesting, like this one. And, in terms of a null being constructive, think of all the future researchers who won't go down this path and will instead test a different idea.
Reposted by Brent W. Roberts
mattsouthward.bsky.social
When we measure personality multiple times in a study, does it matter if we ask people about their personality *in general* or *since the last time point*?

Turns out: yes!

We found differences in internal consistency, Ms, & SDs but not in the underlying constructs 🧵

https://doi.org/10.31234/osf.io/tb94v_v1
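A rough illustration of the internal-consistency comparison the thread mentions, using Cronbach's alpha; the file name, column names, and condition labels are placeholders I made up, not the study's actual materials or code.

# Hedged sketch: comparing Cronbach's alpha across the two instruction conditions
# ("in general" vs. "since the last time point"). All names below are assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

data = pd.read_csv("personality_waves.csv")                # hypothetical file
item_cols = [c for c in data.columns if c.startswith("extraversion_item")]
for condition in ["in_general", "since_last"]:             # hypothetical condition codes
    subset = data[data["instruction"] == condition]
    print(condition, round(cronbach_alpha(subset[item_cols]), 3))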
bwroberts.bsky.social
Don't tell me. I want to believe...
bwroberts.bsky.social
The tour was great. The company and conversation better.
bwroberts.bsky.social
I went to Rotterdam for a reproducibility therapy session with @lakens.bsky.social. I am happy to report that the patient is doing much better—maybe due to a placebo effect. Who knows.
Daniel and Brent in Rotterdam
bwroberts.bsky.social
No irony in the fact that the best way to convince people to use OS is to have 1) a compelling spokesperson, 2) a stimulating anecdote, 3) a convincing data point, and 4) an overgeneralization. We are, after all, human. (That, btw, is the Gladwell formula).
bwroberts.bsky.social
Once the story is published, god forbid you would tell them the method was flawed since it already worked for them once. Then you find the PI arguing for the method regardless of what is right. 3/
bwroberts.bsky.social
Smart people can get all or most of the fancy models to work on the data whether it makes sense to or not. Once you've done that and little asterisks float to the surface, the PI takes a microsecond to glom onto them and write a story. 2/
bwroberts.bsky.social
And, having been in your position before, I know that if you come back to them with the fact that their inspiration to use CLPM or some other model is unfounded, you soon find yourself not included on the next grant/paper. 1/
bwroberts.bsky.social
Don't make me blog again... it is not doomerism in this case; it is simply that we typically employ stats in a thoughtless fashion. There are some places where the CLPM is entirely appropriate, as is the RI-CLPM. It's just that we almost never do studies that fit those models.
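For anyone skimming who hasn't met the acronyms, a bare-bones sketch of the two models being contrasted (my notation, not from any particular paper): the CLPM regresses each variable on the lagged values of both, while the RI-CLPM first splits each score into a stable person-level intercept and a wave-specific deviation and fits the lagged paths to the deviations.

CLPM: x_{i,t} = \alpha_t + \beta_x x_{i,t-1} + \gamma_x y_{i,t-1} + \varepsilon^{x}_{i,t}, \quad y_{i,t} = \tau_t + \beta_y y_{i,t-1} + \gamma_y x_{i,t-1} + \varepsilon^{y}_{i,t}

RI-CLPM: x_{i,t} = \mu^{x}_t + \eta^{x}_i + x^{\ast}_{i,t}, \quad y_{i,t} = \mu^{y}_t + \eta^{y}_i + y^{\ast}_{i,t}, with the lagged paths estimated on the within-person deviations x^{\ast}, y^{\ast} rather than the raw scores.

The complaint in these posts is that neither decomposition helps if the design, measures, and spacing of waves were never chosen with such models in mind.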
bwroberts.bsky.social
Sean, you took the bait! For the most part, no model is viable, largely because the problem is not with the statistical models; it is with the data, our methods, and our research questions, and how they don't align with any of the statistical models meant to address the questions we really want to answer.