How can we know whether the NYPD kept its promise?
Now imagine if forgetting meant ending up in jail.
Two years ago, we ran a randomized experiment that found that text message reminders reduce jail stays for missed court dates by over 20%.
We're hiring a clinical (teaching-based) Assistant Professor of Applied Statistics for Social Science Research at NYU!
Application review begins on February 10, and the position would start on September 1.
Apply here: apply.interfolio.com/162021
- During the learning phase AND
- After we stop learning!
Of course, this approach applies in any resource-constrained setting, not just for rides to court!
(14/)
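A loose sketch of that idea (my own toy example, not the paper's algorithm): during the learning phase, the constrained resource is handed out at random within the budget so effects can be estimated; afterwards, the same budget is targeted using the estimates. All names and numbers below are assumptions for illustration.

```python
# Toy illustration: one budgeted allocation rule used both while learning and after.
import numpy as np

rng = np.random.default_rng(2)

def allocate(scores, budget, learning):
    """Return indices of people who receive the resource this round."""
    if learning:                          # randomize for unbiased effect estimates
        return rng.choice(len(scores), size=budget, replace=False)
    return np.argsort(-scores)[:budget]   # afterwards, target by estimated benefit

est_benefit = rng.uniform(0, 0.3, size=100)   # hypothetical estimated lifts
print(sorted(allocate(est_benefit, budget=10, learning=True))[:5])
print(sorted(allocate(est_benefit, budget=10, learning=False))[:5])
```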
One approach would be to run a randomized controlled trial to learn how people respond to rides.
We could then estimate the tradeoffs at hand and choose the one that best reflects our preferences.
(11/)
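For intuition, here is a rough sketch of that RCT step on simulated data (all numbers are assumed, not from our experiment): estimate how much a ride raises the probability of appearing in court, then see what a given budget would buy.

```python
# Difference-in-means effect estimate from a simulated two-arm trial,
# then the budget tradeoff that estimate implies.
import numpy as np

rng = np.random.default_rng(1)
treated = rng.binomial(1, 0.85, size=500)   # appearance outcomes with a ride (assumed)
control = rng.binomial(1, 0.70, size=500)   # appearance outcomes without (assumed)

effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 500 + control.var(ddof=1) / 500)
print(f"estimated lift in appearance rate: {effect:.2f} ± {1.96 * se:.2f}")

cost_per_ride = 40.0                        # assumed average cost per ride
for budget in (2_000, 5_000, 10_000):
    rides = budget / cost_per_ride
    print(f"${budget:>6,}: ~{rides:.0f} rides, ~{rides * effect:.0f} extra appearances")
```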
Instead, we should make decisions in a way that reflects our preference for how to make difficult tradeoffs.
(In practice, one could run a survey like the above to elicit preferences from people.)
(10/)
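One hedged reading of how elicited preferences could steer the decision (details assumed, not from the paper): respondents state how many extra missed court dates they would accept to close a gap in ride access, and the median answer becomes the weight used to score candidate allocations.

```python
# Hypothetical preference elicitation feeding a simple scoring rule.
import statistics

survey_answers = [0, 2, 5, 5, 10, 3, 4]        # hypothetical elicited tradeoffs
weight = statistics.median(survey_answers)      # appearances traded per unit of gap closed

def score(extra_appearances, access_gap):
    """Higher is better: value outcomes, penalize unequal access per the survey weight."""
    return extra_appearances - weight * access_gap

print(score(extra_appearances=120, access_gap=0.30))   # efficient but uneven
print(score(extra_appearances=105, access_gap=0.05))   # slightly fewer appearances, far more even
```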
Most people preferred an outcome other than demographic parity—even people in the same political party!
(9/)
But “what feels fair” is ultimately a matter of personal preference, and depends on the exact tradeoff in question.
(8/)
But there would be real drawbacks!
- We’d pay for longer rides to court, so
- We’d provide fewer rides overall, so
- More people would go to jail for missing court.
In other words, there’s an inherent tradeoff at play.
(7/)
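Toy numbers (all assumed) to make that chain concrete: a fixed budget buys fewer rides when it also covers longer, pricier trips, so fewer missed court dates are prevented overall.

```python
# A fixed budget split two ways, with assumed per-ride costs and an assumed ride effect.
budget = 10_000
lift = 0.15                                   # assumed effect of a ride on appearing in court
options = [("short rides only", 25.0), ("include longer rides", 40.0)]

for label, cost_per_ride in options:
    rides = budget / cost_per_ride
    print(f"{label:>22}: {rides:.0f} rides, ~{rides * lift:.0f} missed court dates prevented")
```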
In prioritizing cheap + short rides, imagine that we drew from people who lived close to the courthouse (like in the map).
By trying to be efficient with our budget, we’d exclude many Black + Hispanic residents of Boston.
(5/)
One possible initiative would be providing people with free rides to court. But with a limited budget, we wouldn’t have enough funding to give everyone a ride.
(3/)
My coauthors and I came up with a new consequentialist approach to designing equitable algorithms.
Instead of imposing fairness criteria on an algorithm (like equal false negative rates), we aim for good outcomes.
More in the 🧵 below! (1/)
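To give a flavor of the contrast (a minimal sketch, not the paper's implementation, with made-up appearance probabilities, group labels, and ride effects): compare a parity-style allocation that equalizes selection rates across groups with one that simply gives rides where they prevent the most missed court dates.

```python
# Fairness-criterion allocation vs. outcome-focused allocation under one budget.
import numpy as np

rng = np.random.default_rng(0)
n, budget = 1_000, 200
group = rng.integers(0, 2, size=n)               # two hypothetical groups
base = np.where(group == 0, 0.75, 0.60)          # assumed P(appear) without a ride
lift = rng.uniform(0.05, 0.30, size=n)           # assumed benefit of a ride

# (a) Equalize selection rates across groups.
parity_idx = np.concatenate([
    rng.choice(np.where(group == g)[0], budget // 2, replace=False) for g in (0, 1)
])

# (b) Give rides where they prevent the most missed court dates.
outcome_idx = np.argsort(-lift)[:budget]

def expected_appearances(chosen):
    p = base.copy()
    p[chosen] = np.minimum(1.0, p[chosen] + lift[chosen])
    return p.sum()

print("parity-constrained :", round(expected_appearances(parity_idx), 1))
print("outcome-focused    :", round(expected_appearances(outcome_idx), 1))
```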