Alisar Mustafa
@alisarmustafa.bsky.social
140 followers 150 following 240 posts
Author of the AI Policy Newsletter. Subscribe here: Alisarmustafa.substack.com
▶ Mayor Cherelle Parker’s administration announced a new AI task force to create citywide policies for employee AI use.
▶ Members will include the chief administrative officer, legal counsel, IT officials, and representatives from multiple city departments.
▶ The task force will establish guidance on transparency, ethics, and accountability in AI use.
▶ Philadelphia already uses AI for cybersecurity alerts and meeting transcription.
▶ No police or law enforcement departments are currently using AI tools.
▶ The city will also roll out employee training sessions on AI systems.
▶ Councilmembers emphasized avoiding civil liberties violations and ensuring the technology serves residents safely.
▶ Lawmakers asked that residents with AI and data expertise be added to the task force to ensure public involvement.
▶ Chief counsel Kristin Bray said, “Our aim is not to slow progress, but to guide it. AI is not a replacement for our people.”
California Enacts Nation’s First Law Regulating AI Companion Chatbots (SB 243)
leginfo.legislature.ca.gov/faces/billNa...
▶ Governor Gavin Newsom signed Senate Bill 243, authored by Senators Steve Padilla and Josh Becker, making California the first state to regulate AI companion chatbots.
▶ The law requires chatbot operators to clearly disclose when users are interacting with AI rather than a human.
▶ The bill prohibits AI chatbots from engaging in or generating sexualized or romantic content with minors, and requires age verification measures for all users.
▶ Providers must display warnings for minors, issue reminders every three hours during long interactions, and block sexually explicit or romantic material for underage users.
▶ Companies must establish safety protocols to detect and respond to users expressing suicidal thoughts or self-harm, including automatic referrals to crisis hotlines.
▶ AI companions are barred from presenting themselves as health professionals or offering medical or mental health advice.
▶ Operators are required to report annually to the California Office of Suicide Prevention beginning July 1, 2027, detailing how their systems respond to crisis-related content.
▶ The law imposes penalties of up to $250,000 per violation for producing or distributing AI-generated sexual material involving minors.
▶ Individuals harmed by noncompliance can seek damages, injunctions, and attorney fees in civil court.
▶ The legislation also aligns with SB 53, California’s broader AI transparency framework, which regulates large AI developers.
▶ SB 243 takes effect January 1, 2026, establishing California’s first comprehensive safety and accountability standards for AI companion technology.
▶ Anthropic and Deloitte are partnering to build AI solutions tailored for regulated industries, including financial services, healthcare, life sciences, and public administration.
▶ Deloitte will make Claude available to its 470,000 employees across its global network.
▶ The systems will be powered by Anthropic’s AI assistant Claude, with a focus on compliance and responsible AI deployment.
▶ The collaboration represents Anthropic’s largest enterprise AI deployment to date.
▶ Deloitte and Anthropic will introduce a formal certification program for Claude implementation experts.
▶ A new Claude Center of Excellence will train specialists, create frameworks, and provide technical support for enterprise implementation.
▶ Deloitte’s $1.4 billion Project 120 aims to train 15,000 AI practitioners and strengthen enterprise AI capabilities.
▶ Both companies emphasized shared commitments to responsible AI development and regulatory compliance.
▶ The partnership builds on prior collaborations, including Claude for Financial Services and Deloitte’s global AI certification initiative.
▶ Google updated its internal AI health benefits policy following employee backlash over mandatory data sharing.
▶ The company initially required employees to allow the third-party AI tool Nayya access to personal data in order to enroll in health benefits.
▶ Nayya, a healthcare AI startup backed by Workday, ADP, and Iconiq Capital, provides personalized benefits recommendations.
▶ For those who choose to participate, shared data includes pay, gender, and Social Security number.
▶ Employees voiced concerns internally, calling the initial policy “coercive” and questioning consent practices.
▶ After internal complaints and media reports, Google clarified that Nayya is optional and that employees can opt out without losing eligibility.
▶ Data from employees who do not opt in will not be shared with Nayya.
▶ A Google spokesperson said the company corrected its HR site to reflect its original intent.
▶ Nayya is required to protect health data under HIPAA and cannot sell or disclose user information.