Musk effectively copied over all of Wikipedia and had Grok go over it to give its articles a far-right framing, turning it into a propaganda tool. It's Conservapedia 2.0, backed by the world's richest man.
Chile is becoming an increasingly interesting case study for governments grappling with building out AI infrastructure to attract investment while also balancing the need for ethical guardrails, including those that address environmental concerns
Clearview AI is now facing a criminal complaint from noyb for its violation of GDPR in an escalation of complaints against the company, which could see its executives facing personal liability, including potential jail time, for non-compliance
The role of chatbots in fueling the mental health crisis is becoming increasingly concerning, with OpenAI now saying that more than a million people each week show suicidal tendencies in their interactions with ChatGPT
October 28, 2025 at 5:21 PM
There have been several developments in the last few days related to AI-driven job displacement concerns, with Chegg cutting 45% of its workforce due to the "realities of AI", Meta laying off privacy and risk auditors in favor of automation, and Amazon laying off thousands of workers to focus on AI investment
October 28, 2025 at 5:09 PM
Also in relevant cyber threats today, the UK NCSC noted a 50% uptick in highly sophisticated cyberattacks, underscoring compounding cyber challenges in the country after a rough few months dealing with Scattered Lapsus$ Hunters ransomware attacks and others, some potentially linked to nation-states
October 14, 2025 at 9:55 PM
Last week, I published an analysis on the rise of politically motivated doxxing databases that are popping up after politically charged developments and increasingly include social media posts, employment information, and locations. Check it out here!
In the Western world, we often hear about the risks of Iranian cyber operations, but we see less in the news about Israel's advanced cyber capabilities. That's why my latest analysis is a deep dive into Israel's cyber strategy, and risks from its tech industry, like spyware and influence-for-hire
ICE officials added a random person to a group chat planning an ongoing manhunt in a situation remarkably similar to the Signal incident earlier this year. This time, however, messages were not end-to-end encrypted, underscoring the lax attitude in the Trump administration re: secure communication
Amid a broader push to lighten various EU regulations in the name of bolstering the bloc's competitiveness, several European AI and tech companies are asking EU officials to delay the start of the AI rules, with the weight of major companies adding pressure that could compel the EU to dilute some aspects
July 3, 2025 at 4:04 PM
X is planning on using AI for Community Notes. I can't tell if this will be helpful because it enables more moderation, or if it will just add to the growing mess of mis/disinformation and hate speech that has plagued X's content moderation policies since Musk took over
I've seen several articles recently about the growing use of AI for sensitive applications, like hiring and firing decisions, promotions, and raises. For me, this raises a lot of alarms about the growing push for various teams to adopt AI into their workflows.
July 2, 2025 at 4:30 PM
The June 24 cyberattack on Columbia University now appears to be right-wing hacktivism. Most notably, screens were defaced with photos of Donald Trump. The alleged hacker now also claims to have stolen applicants' data in an effort to prove that Columbia is still practicing affirmative action
Republican Senators reached an agreement yesterday to reduce the proposed moratorium on state AI regulations from 10 years to 5. The 10-year moratorium passed in the House last month was a blanket ban, but the Senate version contains different provisions that aim to ease some concerns
June 30, 2025 at 6:49 PM
Denmark is working toward making it illegal to share deepfakes -- something that would likely be challenging to enforce on its own, and even more so as AI-generated content continues to proliferate online
Security researchers are now warning that Scattered Spider is also targeting airlines and the transportation sector. The disruptions this group has previously caused do not bode well for sectors that are already vulnerable to delays, complications, etc.
June 28, 2025 at 3:00 AM
Brazil's Supreme Court has ruled that social media companies can be held liable for content that users post on their platforms, highlighting ongoing tensions between Brazil and online platforms that gained publicity after a spat between Brazilian lawmakers and Elon Musk over X posts
June 27, 2025 at 9:12 PM
Bipartisan lawmakers introduced a bill that would ban federal agencies from using AI tools from adversaries, like China's DeepSeek, in a move motivated by both national security interests (stopping China from spying on US networks) and economic competition interests (the US vs. China in the AI race)
June 27, 2025 at 3:19 PM
Interestingly, Anthropic is still required to go to trial over initially using pirated copies of books to train models, which means that some of these future/ongoing cases could also shift to focus more on whether or not authors were paid for access to content like books or articles
June 25, 2025 at 7:26 PM
Yes, but this is an important distinction: the pirated copies of books are the subject of a trial AS A CENTRAL LIBRARY, regardless of their use in training. The training aspect was still fair use.
Of course, another district judge or a higher court could (and very well may) rule the other way.