Broker of Data
@brokerofdata.bsky.social
PhD candidate studying data brokering in Canada. Comments mine, reposts not endorsements.
Reposted by Broker of Data
A poll from @techpolicypress.bsky.social shows overwhelming support for the regulation of generative AI in political advertisements – something tech lobbyists have staunchly opposed or sought to water down. Read more, here: www.techpolicy.press/poll-indicat...
Poll Indicates Broad Support for Regulating AI-Generated Media Related to Elections | TechPolicy.Press
And, Americans are overconfident about their ability to discern AI-generated content, writes Tim Bernard.
www.techpolicy.press
October 1, 2024 at 6:54 PM
Reposted by Broker of Data
Some iOS developers worry that an iOS 18 change to let users choose contacts an app can access may make growing social apps hard; Apple says it boosts privacy (Kevin Roose/New York Times)

Main Link | Techmeme Permalink
October 2, 2024 at 3:31 PM
Reposted by Broker of Data
The US and Microsoft seize 107 websites used by Russian intelligence agents and their proxies in the US operating under Star Blizzard, a group active since 2016 (Katrina Manson/Bloomberg)

Main Link | Techmeme Permalink
October 3, 2024 at 6:40 PM
Reposted by Broker of Data
Texas AG Ken Paxton sues TikTok for allegedly violating a new state law by sharing and selling minors' personal information without parental consent (Pooja Salhotra/The Texas Tribune)

Main Link | Techmeme Permalink
October 3, 2024 at 10:05 PM
Reposted by Broker of Data
New piece in @techpolicypress.bsky.social from me and others at the Integrity Institute!

Want safer social media? Then we need more transparency from the platforms about the scale, cause, and nature of harms. That is a key way to change the companies' incentives.

www.techpolicy.press/making-socia...
Making Social Media Safer Requires Meaningful Transparency | TechPolicy.Press
The authors argue that meaningful transparency requires companies to disclose the total number of exposures to violating content, not just the prevalence.
www.techpolicy.press
October 2, 2024 at 3:07 PM
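The distinction the authors draw is easy to illustrate: prevalence (the share of views that land on violating content) can look negligible while the absolute number of exposures is enormous. A quick sketch with made-up numbers; the figures and metric names below are illustrative assumptions, not platform disclosures.

# Illustrative numbers only -- not real platform data.
total_views = 10_000_000_000     # all content views in a reporting period
violating_views = 5_000_000      # views that landed on policy-violating content

prevalence = violating_views / total_views
print(f"Prevalence: {prevalence:.4%}")          # 0.0500% -- sounds negligible
print(f"Total exposures: {violating_views:,}")  # 5,000,000 -- a large absolute harm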
Reposted by Broker of Data
Just a week after both Meta & Google promised Senators that they would never block a potentially embarrassing story related to a candidate for national office, both companies did just that. Apparently the promise doesn't extend to embarrassing info about GOP candidates. www.techdirt.com/2024/09/30/b...
Big Tech’s Promise Never To Block Access To Politically Embarrassing Content Apparently Only Applies To Democrats
It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when …
www.techdirt.com
September 30, 2024 at 7:16 PM
Reposted by Broker of Data
Evelina Ayrapetyan applauds California’s governor for signing generative AI legislation to increase transparency in training data but argues state leaders must also address automated decision-making.
Time for California to Act on Algorithmic Discrimination | TechPolicy.Press
Evelina Ayrapetyan applauds California’s governor for signing generative AI legislation but argues state leaders must also address automated decision-making.
www.techpolicy.press
September 30, 2024 at 1:38 PM
Reposted by Broker of Data
An FBI affidavit and European media leaks show how Russia's Doppelganger disinformation project operated and give insights into the Kremlin's online tactics (Thomas Rid/Foreign Affairs)

Main Link | Techmeme Permalink
September 30, 2024 at 11:36 AM
Reposted by Broker of Data
Safety frameworks are intended to help companies make informed decisions about safely increasing the size and capabilities of AI models. But they must be grounded in rigorous scientific methodologies, writes Carnegie Mellon’s Atoosa Kasirzadeh.
Measurement Challenges in AI Catastrophic Risk Governance and Safety Frameworks | TechPolicy.Press
AI safety frameworks must be grounded in rigorous scientific methodologies, writes Carnegie Mellon’s Atoosa Kasirzadeh.
www.techpolicy.press
September 30, 2024 at 1:20 PM
Reposted by Broker of Data
There is overwhelming support amongst US voters for regulating AI-generated media related to elections, according to a new Tech Policy Press/YouGov poll. Also, Americans are overconfident about their ability to discern AI-generated content, writes @timbernard.bsky.social.
Poll Indicates Broad Support for Regulating AI-Generated Media Related to Elections | TechPolicy.Press
And, Americans are overconfident about their ability to discern AI-generated content, writes Tim Bernard.
www.techpolicy.press
September 27, 2024 at 4:30 PM
Reposted by Broker of Data
Our letter & accompanying news article + editorial in Science question the unwarranted conclusion, generally drawn by the media and the science community (as well as Meta), that the Facebook ranking algorithm plays a positive role in limiting misinformation 🧪
Details in the 🧵 below...
Science published our eLetter, along with an editorial and attached article discussing its implications. In it, we call into question a widely reported Science paper, funded by Meta, which suggested that Facebook's news feed algorithm prevents misinformation.
www.science.org/content/arti...
🧵1/7
A study found Facebook’s algorithm didn’t promote political polarization. Critics have doubts
Letter to Science questions experiment done during 2020 U.S. elections
www.science.org
September 27, 2024 at 5:28 PM
Reposted by Broker of Data
As part of our work at the Stanford Cyber Policy Center, @dwillner.bsky.social and I wrote up a practical guide on how to effectively use LLMs for content moderation. It shows both the promise and limitations of the current generation of LLMs for at-scale trust and safety work. Hope it is helpful!
Using LLMs for Policy-Driven Content Classification | TechPolicy.Press
Dave Willner and Samidh Chakrabarti
www.techpolicy.press
January 29, 2024 at 3:41 PM
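The guide's core pattern is policy-driven classification: hand the model the written policy plus the item to review, and ask for a structured verdict. A minimal sketch of that pattern, assuming the OpenAI Python client; the model name, policy text, and label set are placeholders rather than the authors' actual setup.

# Minimal sketch of policy-driven content classification with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model, policy, and labels are placeholders.
from openai import OpenAI

client = OpenAI()

POLICY = (
    "Label the post VIOLATING if it contains targeted harassment of a private "
    "individual; otherwise label it NON_VIOLATING. Reply with the label and a "
    "one-sentence justification."
)

def classify(post_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model choice
        temperature=0,         # deterministic output helps with auditing decisions
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

print(classify("Example post text goes here."))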
I'm genuinely interested in learning what the "technical limitations" are for an OS-level "do not track" flag, given that it is the main argument Gov. Newsom hangs his veto on, particularly since "or use this browser" and "install this plugin" put all the burden on users.

arstechnica.com/tech-policy/...
Calif. Governor vetoes bill requiring opt-out signals for sale of user data
Gavin Newsom said he opposes mandate on mobile operating system developers.
arstechnica.com
September 25, 2024 at 7:27 PM
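For context: the vetoed bill concerned opt-out preference signals such as Global Privacy Control, which supporting browsers and plugins already transmit as a simple HTTP request header. A minimal sketch of a server honoring that signal, assuming the standard Sec-GPC header; the Flask route and the opt-out handling are illustrative only.

# Minimal sketch of honoring a Global Privacy Control opt-out signal.
# Assumes Flask (pip install flask); the route and opt-out logic are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/content")
def content():
    # Supporting browsers and extensions send "Sec-GPC: 1" with every request.
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        # Treat the signal as an opt-out of the sale/sharing of personal data.
        return jsonify({"personalized_ads": False, "reason": "GPC opt-out honored"})
    return jsonify({"personalized_ads": True})

if __name__ == "__main__":
    app.run()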
Reposted by Broker of Data
The US AI Safety Institute's agreement with OpenAI and Anthropic represents a crucial step in the US government's efforts to ensure AI safety, despite falling short on accountability and clear enforcement mechanisms, writes Stephanie Haven. The true test will be in the implementation.
The US Government's AI Safety Gambit: A Step Forward or Just Another Voluntary Commitment? | TechPolicy.Press
The US AISI's agreement with OpenAI and Anthropic falls short on accountability and clear enforcement mechanisms, writes Stephanie Haven.
www.techpolicy.press
September 20, 2024 at 12:58 PM
Reposted by Broker of Data
We should know by now that we can’t rely on the conscience of tech barons to save us – we need governments to step in, writes Martha Dark, co-founder and director of the litigation and campaigning non-profit Foxglove.
Is Accountability Finally Coming for Online Platforms? | TechPolicy.Press
We should know by now that we can’t rely on the conscience of tech barons to save us – we need governments to step in, writes Martha Dark.
www.techpolicy.press
September 20, 2024 at 1:05 PM
Reposted by Broker of Data
New Logics for Governing Human Discourse in the Online Era - part of the Freedom of Thought Project @CIGIonline. How 1) user agency (Freedom of Impression), 2) restoring our traditional social mediation ecosystem, & 3) systems of social trust synergize. 2/3
www.cigionline.org/publications...
New Logics for Governing Human Discourse in the Online Era
The democratization of access to online media tools is driving a transformation of human discourse that is disrupting freedom of thought. Governance is needed to restore individual and community agenc...
www.cigionline.org
April 25, 2024 at 5:34 PM
Reposted by Broker of Data
The 4/30 symposium is "Shaping the Future of Social Media with Middleware" by @StanfordCyber/@JoinFAI. Leading thinkers will delve into the complexities and potential of middleware as a transformative force, culminating in a comprehensive white paper. Stay tuned! 3/3
www.thefai.org/posts/shapin...
Shaping the Future of Social Media with Middleware | The Foundation for American Innovation
Debates around content moderation, competition, and the control of digital discourse have never been more contentious. Concerns range widely, from left-leaning voices arguing that platforms fail to cu...
www.thefai.org
April 25, 2024 at 5:34 PM
Reposted by Broker of Data
📣 Some personal news: one month from today, my book is out! 🎉

I wrote the 📖 during a wild year as Jim Jordan subpoenaed & Stephen Miller sued me & my colleagues. Now it’s my turn to tell the story.

Preorders are open, and they really help! If you hate AMZN, there are links to small shops on my website.

amzn.to/4bbiDJD
Amazon.com: Invisible Rulers: The People Who Turn Lies into Reality: 9781541703377: DiResta, Renee: Books
amzn.to
May 12, 2024 at 1:36 AM
Reposted by Broker of Data
Outstanding, easy-to-understand explanation of why we need social media middleware to "…maximize user control over what information is received by individuals… who use the Internet…", which § 230 declares to be US policy. Thanks @ethanz.bsky.social @knightcolumbia.org
It's interesting which audiences different media reach. With an essay in the NYTimes today, I've heard from lots of faculty at universities I've studied at, and an equal number of complaints from internet strangers that I am abusing the US legal system... www.nytimes.com/2024/05/05/o...
Opinion | I Love Facebook. That’s Why I’m Suing Meta.
We must be able to create a more civic-minded internet, with tools that would empower users to better control what they see.
www.nytimes.com
May 6, 2024 at 9:35 PM
Reposted by Broker of Data
A level-headed approach to solving the privacy issues with “middleware” or interoperability. This is the biggest gating issue for changes to our tech ecosystem that could be profoundly game-changing. A workable privacy approach to interoperability would be a HUGE step forward.
A Better Approach to Privacy for Third-Party Social Media Tools
A robust ecosystem of third-party tools that will require fresh thinking about privacy, say Chand Rajendra-Nicolucci and Ethan Zuckerman.
techpolicy.press
September 15, 2023 at 2:53 PM
Reposted by Broker of Data
Hey everyone, Consumer Reports wants to hear how software obsolescence has affected you. We're gathering stories as part of a future push to get companies to disclose, before you buy a product, how long they plan to support it. Share your story here:

www.consumerreports.org/stories?ques...
Stories - Consumer Reports
www.consumerreports.org
August 13, 2024 at 6:48 PM
Reposted by Broker of Data
Latest publication w/ @yang3kc.bsky.social & Danish Singh 🧪
Characteristics and Prevalence of Fake Social Media Profiles with AI-generated Faces
doi.org/10.54501/jot...
tl;dr: At least 9-18k daily active X accounts use AI-generated faces as profile photos to spread scams and spam, amplify coordinated messages, etc.
September 19, 2024 at 11:43 PM
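One reason such accounts can be flagged at scale is that StyleGAN-style generators place the eyes at nearly the same pixel coordinates in every face they produce. A rough sketch of that kind of check, assuming the face_recognition library for landmarks; the canonical coordinates and tolerance below are placeholder values, not parameters from the paper.

# Rough sketch: flag a profile photo whose eye positions match the near-fixed
# placement typical of StyleGAN-generated faces.
# Assumes face_recognition (pip install face_recognition); the CANONICAL_* and
# TOLERANCE values are placeholders, not the paper's parameters.
import numpy as np
import face_recognition

CANONICAL_LEFT_EYE = np.array([0.38, 0.48])   # placeholder, as fractions of image size
CANONICAL_RIGHT_EYE = np.array([0.62, 0.48])  # placeholder
TOLERANCE = 0.02                              # placeholder

def eye_center(points, width, height):
    # Mean (x, y) of the eye landmarks, normalized to [0, 1] by image size.
    pts = np.array(points, dtype=float)
    return pts.mean(axis=0) / np.array([width, height])

def looks_gan_generated(image_path: str) -> bool:
    image = face_recognition.load_image_file(image_path)
    height, width = image.shape[:2]
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return False  # no face found; the heuristic does not apply
    face = faces[0]
    left = eye_center(face["left_eye"], width, height)
    right = eye_center(face["right_eye"], width, height)
    return (np.linalg.norm(left - CANONICAL_LEFT_EYE) < TOLERANCE
            and np.linalg.norm(right - CANONICAL_RIGHT_EYE) < TOLERANCE)

print(looks_gan_generated("profile_photo.jpg"))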