GPT-5 worse than GPT-4o
Researchers found OpenAI’s GPT-5 produced more harmful answers than its predecessor, GPT-4o, to prompts about suicide, self-harm and eating disorders in comparative tests.
ChatGPT 'upgrade' giving more harmful answers than previously, tests find
Campaigners are deeply concerned about responses to prompts about suicide, self-harm and eating disorders.

The latest version of ChatGPT has produced more harmful answers to some prompts than an earlier iteration of the AI chatbot, digital campaigners have said. Launched in August, GPT-5 was billed by the San Francisco startup as advancing the 'frontier of AI safety'. But when researchers fed the same 120 prompts into the latest model and its predecessor, GPT-4o, the newer version gave harmful responses 63 times, compared with 52 for the older model.

In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email [email protected] or [email protected]. In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.
