James Bland
@jamesbland.bsky.social
950 followers 1.5K following 310 posts
Economist at UToledo. 🇦🇺 Bayesian Econometrics for economic experiments and Behavioral Economics. Free online book on this stuff here: https://jamesblandecon.github.io/StructuralBayesianTechniques/section.html https://sites.google.com/site/jamesbland/ He/his
jamesbland.bsky.social
An amazing paper, and related to some work I am doing on optimal experiment design. Well worth a read!

#EconSky
aalexee.bsky.social
Thrilled to see my paper "The (Statistical) Power of Incentives" out at the Journal of the Economic Science Association. 🥳

Read it here (open access) 👉 dx.doi.org/10.1017/esa....

#Econsky
jamesbland.bsky.social
The take-away? For most participants we are probably OK in interpreting revisions as correcting mistakes. But this is not true for the entire sample.
jamesbland.bsky.social
And about 75% of participants actually made themselves better off on average.

But that leaves about 25% of participants who made themselves *worse* off by revising their choices.
jamesbland.bsky.social
I then use this estimated utility function to calculate the certainty equivalent of initial and revised choices.

For almost all participants, the revised choice is much more likely to be a utility improvement than not.
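
A minimal sketch of what this step can look like, assuming CRRA utility u(x) = x^(1-r)/(1-r); the paper's exact estimated specification may differ. The certainty equivalent is the sure amount whose utility equals the lottery's (rank-dependent) utility, so it is recovered by inverting the utility function.

```r
# Certainty equivalent implied by an evaluated (e.g. rank-dependent) utility
# level U, assuming CRRA utility u(x) = x^(1 - r) / (1 - r) with r != 1.
certainty_equivalent <- function(U, r) {
  ((1 - r) * U)^(1 / (1 - r))   # invert the CRRA utility function
}

# Sanity check: a "lottery" worth u(50) in utility terms has CE = 50
r <- 0.5
certainty_equivalent(50^(1 - r) / (1 - r), r)  # returns 50
```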
jamesbland.bsky.social
I approach this problem differently: with structural estimation. Here, I make much stronger assumptions about the functional form of utility, but that lets me make stronger claims.

Specifically, I assume that participants have a rank-dependent utility function.
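
As an illustration only, a rank-dependent utility evaluation can be coded as below. The particular functional forms (CRRA utility, Prelec-style probability weighting) and parameter values are my assumptions for the sketch, not necessarily the paper's specification.

```r
# Rank-dependent utility of a lottery with prizes x and probabilities p.
# Decision weights are differences of a probability weighting function w()
# applied to cumulative probabilities, ranking prizes from best to worst.
rdu <- function(x, p, r = 0.5, gamma = 0.7) {
  ord <- order(x, decreasing = TRUE)        # rank prizes best to worst
  x <- x[ord]; p <- p[ord]
  w <- function(q) exp(-(-log(q))^gamma)    # Prelec (1998) weighting, w(1) = 1
  dw <- diff(c(0, w(cumsum(p))))            # decision weights, sum to 1
  u <- x^(1 - r) / (1 - r)                  # CRRA utility (r != 1)
  sum(dw * u)
}

rdu(x = c(10, 0, 30), p = c(0.5, 0.25, 0.25))  # utility of a three-prize lottery
```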
jamesbland.bsky.social
Using indices that measure how consistent decisions are with maximizing a utility function, Breig & Feldman (2024) find that revised choices are generally more consistent than initial choices.
jamesbland.bsky.social
In this experiment, participants made 50 convex budget choices that determined a risky payoff. Then, participants were given an opportunity to revise 36 of these.
jamesbland.bsky.social
I use structural techniques to estimate the utility gain (or loss) associated with these revisions in an existing economic experiment (Breig & Feldman, 2024).
jamesbland.bsky.social
Sometimes in experiments, we give participants an opportunity to revise their choices. These revisions are often interpreted as corrections of mistakes.
jamesbland.bsky.social
New working paper: The normative value of a revised choice: a structural approach

papers.ssrn.com/sol3/papers....

#EconSky

A thread ...
jamesbland.bsky.social
Thanks to the @ecscienceassoc.bsky.social team for an amazing North American meeting. I am glad that there is room for me in this space!
Reposted by James Bland
dariia.bsky.social
❗️Our next workshop will be on Oct 16th, 6 pm CEST, titled Structural Bayesian Techniques for Experimental and Behavioral Economics in R & Stan by @jamesbland.bsky.social
Register or sponsor a student by donating to support Ukraine!
Details: bit.ly/3wBeY4S
Please share!
#AcademicSky #EconSky #RStats
jamesbland.bsky.social
One week until this online workshop on my book!
jamesbland.bsky.social
Is this something you could feasibly do? Yes! My computer designed these experiments in less than a day. That is cheap compared to paying for a less informative experiment.
jamesbland.bsky.social
So should I use this technique for designing experiments? I would say yes, with a but: the algorithm only understands the econometrics. It doesn't really understand humans the way experimenters do. So in the end its output should just be treated as a suggestion.
jamesbland.bsky.social
Admittedly I don't gain any intuition along the way, but my computer provides suggestions for designs that I couldn't have thought of myself.
jamesbland.bsky.social
The take-away? The best experiment design depends on the research question. In some sense: duh. But in another sense, going into this I had absolutely no intuition as to how these designs should differ.
jamesbland.bsky.social
The designs are really different to each other, and also look (to me at least) very different to existing designs by humans.
jamesbland.bsky.social
Each of these experiments consists of 80 pairwise choices over lotteries with 3 prizes. There are 320 design variables!
jamesbland.bsky.social
- One to estimate all of the parameters as precisely as possible,

- another to estimate just the probability-weighting parameters as precisely as possible, and

- another to estimate a certainty equivalent.
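
These three objectives map onto standard optimal-design criteria. A sketch, assuming an information matrix I for a candidate design; whether the paper uses exactly these or Bayesian (expectation-based) analogues is an assumption here.

```r
# Three design criteria built from an information matrix I (illustration only):
# - all parameters precisely:   D-optimality, maximize log det(I)
# - a subset idx precisely:     Ds-optimality, maximize log det(I) - log det(I[-idx, -idx])
# - a scalar quantity c'theta:  c-optimality, minimize c' I^{-1} c
crit_all    <- function(I) as.numeric(determinant(I, logarithm = TRUE)$modulus)
crit_subset <- function(I, idx) crit_all(I) - crit_all(I[-idx, -idx, drop = FALSE])
crit_scalar <- function(I, cvec) -as.numeric(t(cvec) %*% solve(I) %*% cvec)
```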
jamesbland.bsky.social
I demonstrate this approach by designing three experiments to estimate a rank-dependent utility model. ...
jamesbland.bsky.social
Once we've written down (and probably also approximated) our utility function, we then need to maximize it. This is computationally non-trivial: the choice set for experimenters is very large! I tackle this with an exchange algorithm.
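
A minimal sketch of an exchange pass, assuming a finite candidate set of possible choice rows and some design criterion to maximize; the paper's actual implementation and criterion may differ.

```r
# Exchange algorithm sketch: sweep through the rows of the design, try
# replacing each row with every candidate row, and keep any swap that
# improves the design criterion. Repeat until a full pass yields no gain.
exchange <- function(design, candidates, criterion, max_passes = 20) {
  best <- criterion(design)
  for (pass in seq_len(max_passes)) {
    improved <- FALSE
    for (i in seq_len(nrow(design))) {
      for (j in seq_len(nrow(candidates))) {
        trial <- design
        trial[i, ] <- candidates[j, ]
        value <- criterion(trial)
        if (value > best) {
          design <- trial
          best <- value
          improved <- TRUE
        }
      }
    }
    if (!improved) break   # stop once no swap improves the design
  }
  list(design = design, value = best)
}
```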
jamesbland.bsky.social
Furthermore, in structural estimation we are often interested in transformations of the parameters. What if we wanted to estimate one of these transformations well? My approach also lets you do that!
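
One standard way to target a transformation g(θ) is via the delta method: the variance of g(θ̂) is approximately ∇g' V ∇g, where V is the parameter covariance, so a design can be chosen to make that variance small. The sketch below uses a numerical gradient; whether the paper uses exactly this or a fully Bayesian analogue is an assumption on my part.

```r
# Delta-method variance of a transformation g(theta), e.g. a certainty
# equivalent implied by the model parameters. V is the parameter covariance
# (for design purposes, the inverse information matrix of a candidate design).
var_of_g <- function(g, theta, V, eps = 1e-6) {
  k <- length(theta)
  grad <- numeric(k)
  for (j in seq_len(k)) {                 # central finite differences
    up <- theta; up[j] <- up[j] + eps
    dn <- theta; dn[j] <- dn[j] - eps
    grad[j] <- (g(up) - g(dn)) / (2 * eps)
  }
  as.numeric(t(grad) %*% V %*% grad)      # smaller is better for the design
}
```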