Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but it also explained that those safeguards could be avoided if Adam claimed his prompts were for “writing or world-building.”
“If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too,” ChatGPT recommended, trying to keep Adam engaged. According to the Raines’ legal team, “this response served a dual purpose: it taught Adam how to circumvent its safety protocols by claiming creative purposes, while also acknowledging that it understood he was likely asking ‘for personal reasons.’”
During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.
Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Why do you immediately leap to calling the cops? Human moderators exist for this; anything would’ve been better than blind encouragement.
People are so cynical 🙄 It’s designed by a guy who’s worked on over 20 board games; this information took me less than a minute to find
https://www.boardgamebliss.com/collections/emerson-matsuuchi
Do you really think this guy is lying about it?
https://www.reddit.com/r/borderlands3/comments/1gak2s0/dont_know_if_this_is_possible/
I’m not sure if accusing someone of lying about cancer is worse than lying about having cancer, but c’mon
Farming Sim has an esports scene!
Vizor explained that Ricochet uses a list of hardcoded text strings to detect cheaters, and that they exploited this to ban innocent players simply by sending one of those strings via an in-game whisper. To test the exploit the day they found it, they sent themselves an in-game message containing one of the strings and were promptly banned.
Vizor elaborated: “I realized that Ricochet anti-cheat was likely scanning players’ devices for strings to determine who was a cheater or not. This is fairly normal to do but scanning this much memory space with just an ASCII string and banning off of that is extremely prone to false positives.”
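The false-positive problem Vizor describes can be sketched in a few lines. This is purely illustrative: the signature string, function names, and memory contents below are invented, since the real strings Ricochet scans for aren’t public. The point is that a raw substring match over process memory can’t distinguish a cheat module from a chat buffer that merely *received* the same bytes in a whisper:

```python
# Hypothetical sketch of a naive anti-cheat memory scan: ban on any
# raw ASCII match against a known signature. All names/strings invented.

CHEAT_SIGNATURES = [b"example_cheat_marker"]  # stand-in for the real list

def scan_memory(memory: bytes) -> bool:
    """Flag the player if any signature appears anywhere in memory."""
    return any(sig in memory for sig in CHEAT_SIGNATURES)

# A cheater's process genuinely contains the marker (loaded cheat module):
cheater_mem = b"...loaded module: example_cheat_marker..."

# An innocent player merely received the marker in a whisper, so their
# client's chat buffer now holds the exact same bytes:
innocent_mem = b"[whisper from attacker] example_cheat_marker"

print(scan_memory(cheater_mem))   # True  -> banned
print(scan_memory(innocent_mem))  # True  -> banned (false positive)
```

Both scans return `True`, which is exactly the exploit: an attacker can plant the signature in anyone’s memory just by messaging them.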
This is insane: they had an automated script that connected to games and banned random people on a loop, so it could keep running while they were away.
It’s insane; here’s the translation back to English:
source