skewer0846.bsky.social
@skewer0846.bsky.social
I think you might have it backwards regarding copyright:
www.law.cornell.edu/uscode/text/...

(e) Effect on other laws
(2) No effect on intellectual property law
Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.
47 U.S. Code § 230 - Protection for private blocking and screening of offensive material
Yeah, it's a problem with facets:
bsky.app/profile/imlu...

I'm not sure if there's a labeler yet for posts with that kind of facet abuse, but I think Skywatch is working on it:
bsky.app/profile/offl...
anyone start seeing accounts in their notifications that are somehow mentioning you even though they don't have your handle in the post?
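(For anyone wondering how that works mechanically: in the AT Protocol, a mention is a rich-text "facet" attached to a byte range of the post, and nothing forces the bytes in that range to actually contain the handle. A rough sketch of such a record, with the text and DID made up for illustration:)

    # Hypothetical post record abusing a mention facet: the facet's byte
    # range covers the word "this", but its feature points at a real DID,
    # so that account gets a mention notification even though its handle
    # never appears in the post text.
    post = {
        "$type": "app.bsky.feed.post",
        "text": "check this out",
        "createdAt": "2025-01-01T00:00:00Z",
        "facets": [
            {
                "index": {"byteStart": 6, "byteEnd": 10},  # the bytes of "this"
                "features": [
                    {
                        "$type": "app.bsky.richtext.facet#mention",
                        "did": "did:plc:target123",  # made-up DID of the mentioned account
                    }
                ],
            }
        ],
    }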
Reposted
If you're the "junior staffer" being blamed for this, my Signal is drewharwell.01.
Reposted
This would strangle the hopes of any emerging site ever again challenging the market dominance of the sites you object to, *and* impose potentially ruinous liability on average people who repost a post or forward an email.
Hmm, maybe some kind of system that uses cheaper text filters to flag potentially (but not necessarily) problematic content, tags those users as higher risk, and then runs the perceptual hashing on just that subset of users?
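(Something like this, say; every name, pattern, and threshold below is invented for illustration:)

    import re

    # Sketch of the two-tier idea: a cheap text heuristic builds up a
    # per-account risk score, and only images from high-scoring accounts
    # go through the expensive perceptual-hash check.
    RISKY_PATTERNS = [re.compile(p, re.I) for p in (r"free crypto", r"dm for")]
    RISK_THRESHOLD = 3

    risk_score: dict[str, int] = {}  # author DID -> running score

    def handle_post(author: str, text: str, image: bytes | None) -> None:
        # Tier 1: cheap text filter on every post.
        if any(p.search(text) for p in RISKY_PATTERNS):
            risk_score[author] = risk_score.get(author, 0) + 1
        # Tier 2: expensive image check, only for flagged authors.
        if image and risk_score.get(author, 0) >= RISK_THRESHOLD:
            perceptual_hash_check(image)  # hypothetical expensive step

    def perceptual_hash_check(image: bytes) -> None:
        ...  # e.g. match against a set of known-bad hashes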
Found the post from a few weeks ago.

The stats were quite staggering:
bsky.app/profile/hail...
an interesting note re: scaling image moderation. in the past 24 hours:

at least 500 thousand posts with images (counting whether a post had any image, not the number of images)
86 thousand avatars
500 thousand url thumbnails
58 thousand videos.

that's a lot to moderate!
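(Back-of-the-envelope on those quoted 24-hour numbers:)

    items = 500_000 + 86_000 + 500_000 + 58_000  # image posts, avatars, thumbnails, videos
    print(f"{items:,} items/day")       # 1,144,000
    print(f"{items / 86_400:.1f}/sec")  # ~13.2, sustained around the clock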
It might be a nice addition to the low-quality replies filter as well (or maybe a separate one) if you can somewhat reliably detect meme reactions.

I'd be a bit worried about the bandwidth/processing requirements of consuming that volume of images though.
The perceptual hashing being used to catch slightly modified/lossy/cropped/etc. versions of the same meme, to avoid the extra burden of dealing with almost-identical images multiple times on the moderation side?
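(For illustration, one common way to do that kind of near-duplicate matching is comparing perceptual hashes by Hamming distance; the imagehash library is one off-the-shelf option. No idea what's actually used internally, and the threshold here is made up:)

    from PIL import Image
    import imagehash  # pip install imagehash pillow

    MAX_DISTANCE = 8  # invented threshold; would need tuning on real data

    # (perceptual hash, earlier moderation decision) for reviewed images
    reviewed: list[tuple[imagehash.ImageHash, str]] = []

    def lookup(path: str) -> str | None:
        # Perceptual hashes change little under re-encoding, resizing, or
        # mild cropping, so subtracting two hashes (= Hamming distance)
        # catches near-duplicates of an already-reviewed meme.
        h = imagehash.phash(Image.open(path))
        for seen, decision in reviewed:
            if h - seen <= MAX_DISTANCE:
                return decision  # reuse the earlier call, skip re-review
        return None  # genuinely new image, needs a fresh look

(A linear scan like this obviously wouldn't survive the volumes quoted above; a real system would index the hashes in something like a BK-tree.)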
Maybe Phalanx for a system that lets different moderation services cooperate, collaborate, or share data?

Or maybe just for something that handles a large number of events per second?
Reposted
He gets the purpose of Section 230 wrong. He gets the history of 230 wrong. He misrepresents how defamation law works. He misunderstands the incentives Section 230 creates (and the incentives repealing it would create) & he wraps it all in a single manipulative anecdote, leaving out crucial details.
Reposted
And Wyden is still alive! Still in the Senate! Eager to talk about the intent of the law!
How the heck do people still fuck it up this badly UNLESS they have already decided on the outcome, as Mike says.
The alternative for most other platforms is a one-size-fits-all system with a single set of rules made by a company that may not have everyone's best interests at heart (or those interests may later get bought out). That to me seems even less democratic.
The way I see it: the people choosing to use a moderation list to mute or block are the ones who elected the people making the list, but they make that choice mostly just for themselves. They can provide input by questioning list additions/removals, or ultimately by no longer using the list.
I'm not saying there's a perfect causation or even correlation, but a feed filtering measure like blocking those over a certain following count doesn't have to be perfect to still be helpful for those wanting to clean up their feeds.
(So many problems in the world would be less bad if we all had infinite time. But we don't, so that's why we need automation or some kind of trust in the systems of others to offload the work to or share the burden.)
In my opinion, that can then lead to just amplifying topics and stories without due diligence as to the sources or their veracity. It doesn't even have to be due to any maliciousness, just due to the lack of time to evaluate everything.
The way I see it, the following feed for those over those thresholds just becomes a feed of whatever is popular/trending at that moment, shaped semi-randomly by when they load the feed and what the thousands of accounts they follow happen to post at that exact moment.
It's not about an imbalance at all in this case. It's strictly about following more than a certain threshold.

The most common reason I've seen people give when mentioning these lists is that if someone is following that many people then they are not actually able to keep up with that many individuals.
The list maintenance is the part that's automated/a bot, with accounts being added/removed automatically when they cross the relevant thresholds for the lists.

"There is no implication that these accounts themselves are not run by humans, simply that they follow a large number of accounts."
Reposted
@annabower.bsky.social can you text Lindsey and ask if she thinks her use of disappearing texts (and her use of Signal, generally) complies with data preservation rules under the Federal Records Act?
Oh nooooo, Lindsey Halligan, this is not how any of this works

(15 screens into a Signal exchange) www.lawfaremedia.org/article/anna...
Reposted
*imagines an editor at the NYT saying that since it is AI they can't actually say that it's shit and screaming until my voice gives out*
The New York Times twists itself in knots pretending it doesn’t know things that it absolutely knows. Such as referring to the shit in Trump’s AI video as “brown liquid.” 9/
Reposted
As a reminder to everyone, a key part of the Bari Weiss origin story is her livetweeting editorial meetings.
In the institutions she critiques, she treats leakers (including herself) as principled dissenters. In the institutions she controls, she expects loyal silence.