Bob E Hayes
@bobehayes.bsky.social
PhD in industrial-organizational psychology. Interests in #datascience #customerexperience #statistics #machinelearning #artificialintelligence
Reposted by Bob E Hayes
Not sure who made this, but probably the most accurate representation of the current state of tech to date
November 20, 2025 at 10:59 PM
The #AI #Bubble is About to Burst—Here’s What Survives

"The #opensource factor prevents the winner-take-all outcome that many fear, instead creating a more resilient, competitive ecosystem that will reshape how enterprises deploy AI infrastructure." ~ @davidnowak.me
The AI Bubble is About to Burst—Here’s What Survives - DAVID NOWAK
Billions are at stake as AI giants race toward a cliff few acknowledge. Discover how open source, shifting infrastructure, and hard economic realities reveal what survives when the AI bubble bursts—no monopoly, just a new balance.
davidnowak.me
November 27, 2025 at 10:24 AM
What OpenAI Did When #ChatGPT Users Lost Touch With Reality
What OpenAI Did When ChatGPT Users Lost Touch With Reality
In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?
www.nytimes.com
November 27, 2025 at 6:41 AM
A tale of two #AI capitalisms

"Stay strong, Early November David Sacks, stay strong. If OpenAI fails, Google can readily step in, no bailout required." ~ @garymarcus.bsky.social

garymarcus.substack....

#GenerativeAI
A tale of two AI capitalisms
Things might get rough
garymarcus.substack.com
November 26, 2025 at 10:30 PM
Meet the #AI workers who tell their friends and family to stay away from AI
Meet the AI workers who tell their friends and family to stay away from AI
When the people making AI seem trustworthy are the ones who trust it the least, it shows that incentives for speed are overtaking safety, experts say
www.theguardian.com
November 26, 2025 at 6:30 PM
#AI and #Deepfake-Powered #Fraud Skyrockets Amid Global Stagnation

"This matters because stable percentages can create a false sense of security.
November 26, 2025 at 10:16 AM
Reposted by Bob E Hayes
The draft-dodging idiot posted a pic of this West Point plaque. It contains these words:

“Our American Code of Military Obedience requires that, should orders and the law ever conflict, our officers must obey the law.”

That is precisely what Senator Mark Kelly told officers to do. #NoSedition
November 25, 2025 at 6:49 PM
One in four unconcerned by sexual #deepfakes created without consent, survey finds

www.theguardian.com/...

#AI
One in four unconcerned by sexual deepfakes created without consent, survey finds
Senior UK police officer says AI is accelerating violence against women and girls and that technology companies are complicit
www.theguardian.com
November 25, 2025 at 6:30 PM
Germany delivers landmark #copyright ruling against OpenAI: What it means for #AI and IP

"the use of copyrighted song lyrics for training #generativeAI models without a licence violates German copyright law..."
Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP
The Regional Court of Munich (LG München I) has issued a landmark judgment in GEMA v OpenAI (Case No. 42 O 14139/24), holding that the use of copyrighted song lyrics for training generative AI models without a licence violates German copyright law.
www.insidetechlaw.com
November 25, 2025 at 10:26 AM
"Young children are especially susceptible to the potential harms of these toys, such as invading their privacy, collecting data, engendering false trust and friendship, and displacing what they need to thrive, like human-to-human interactions and time to play with all their senses.
1/2
November 25, 2025 at 6:33 AM
Google boss issues warning ahead of Gemini 3.0 launch

www.newsweek.com/goo...

#GenerativeAI
Google boss issues warning ahead of Gemini 3.0 launch
Sundar Pichai has warned against an overreliance on artificial intelligence and said the tech remains "prone to errors."
www.newsweek.com
November 24, 2025 at 10:30 PM
"To ensure our use of #LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another."
To protect science, we must use LLMs as zero-shot translators
Nature Human Behaviour - Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are...
www.nature.com
November 24, 2025 at 6:30 PM
Do large language models have a legal duty to tell the truth? | Royal Society Open Science

"This article examines the existence and feasibility of a legal duty for LLM providers to create models that ‘tell the truth’."

royalsocietypublishi...

#AI
Do large language models have a legal duty to tell the truth? | Royal Society Open Science
Careless speech is a new type of harm created by large language models (LLM) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful and confident, ...
royalsocietypublishing.org
November 24, 2025 at 10:14 AM
Hot take on Google’s Gemini 3

"If Google were to make those TPUs commercially available at scale and reasonable price, Nvidia’s dominance would end." ~ @garymarcus.bsky.social

garymarcus.substack....

#GenerativeAI
Hot take on Google’s Gemini 3
Still no AGI, but it may nonetheless represent serious threats both to OpenAI and Nvidia
garymarcus.substack.com
November 24, 2025 at 7:00 AM
Court Rules #AI News Summaries May Infringe #Copyright
Court Rules AI News Summaries May Infringe Copyright
News publishers just cleared a key hurdle against Cohere in a copyright fight over AI-generated "substitutive summaries" of their reporting.
copyrightlately.com
November 24, 2025 at 2:30 AM
The False Glorification of Yann LeCun

"But he has also systematically dismissed and ignored the work of others for years, including Schmidhuber, Fukushima, Zhang, Bender, Li and myself, in order to exaggerate his own contributions." @garymarcus.bsky.social

garymarcus.substack....

#AI
The False Glorification of Yann LeCun
Don’t believe everything you read
garymarcus.substack.com
November 23, 2025 at 6:30 PM
The more that people use #AI, the more likely they are to overestimate their own abilities

www.livescience.com/...

#bias #dunningkruger
The more that people use AI, the more likely they are to overestimate their own abilities
Researchers found that AI flattens the bell curve of a common principle in human psychology, known as the Dunning-Kruger effect, giving us all the illusion of competence.
www.livescience.com
November 23, 2025 at 10:29 AM
The state of #AI in 2025: Agents, innovation, and transformation

www.mckinsey.com/cap...
November 23, 2025 at 6:57 AM
#AI doesn’t really ‘learn’ – and knowing why will help you use it more responsibly
AI doesn’t really ‘learn’ – and knowing why will help you use it more responsibly
The idea that AI ‘learns’ like humans do is one of many misconceptions about the technology.
theconversation.com
November 22, 2025 at 2:30 AM