(Head of T&S at Musbi; writes T&S Insider; host of Trust in Tech podcast)
More here: https://alicelinks.com/about-alice
/🧵
Recognize that some open ways of signaling support (e.g. putting pronouns in your signature) will also open people up to harassment.
Allow folks to make decisions based on what's right for them.
Let them know that you have their back.
Listen to them.
Check in on them.
Make sure they have benefits that cover mental health support.
We're all going to need it.
@anikacolliernavaroli.com calls this "compelled identity labor": hire people whose explicit job is to be an expert, instead of voluntelling your employees who have other jobs to do.
Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.
"We will ban you if you disrespect or threaten our staff", for example.
Or "We will ban you if you report trans people simply for being trans."
Create tricky hypothetical scenarios (e.g. your biggest client sends a racist email; someone threatens to sue you for having a DEI program) and get answers BEFORE you need them.
If you know of other conversations/resources in this area, or are an expert and want to be on the podcast or chat TrustCon proposals, let me know!
www.cjr.org/tow_center/b...
integrityinstitute.org/podcast/its-...
integrityinstitute.org/podcast/work...
Looking forward to more people here on Bluesky :)
This allows Meta to dodge responsibility. "The users don't like it. They reported it. It's not us."
It won’t make moderation more fair or better. It’ll be less consistent.
But it gives Meta an excuse that is more politically acceptable right now.
/🧵
Automated detection isn’t perfect by any means, but it’s a heck of a lot better than user reports alone.
Many users will get away with rules-violating behavior because it is never reported.
I learned this the hard way, when I was head of T&S at a platform with little to no automated detection.
… or they will want to leave.