Multiple people have reportedly died by suicide after conversing with the ChatGPT large language model, and cases of “AI psychosis” appear to be on the rise. It’s grim stuff. In response, representatives of OpenAI, the company that makes ChatGPT, are speaking before the US Congress, and the company is unveiling new techniques for determining the age of its users. That might involve ID verification, according to the CEO.
ChatGPT is implementing new age-detection systems. If the automated system cannot satisfy itself that a user is an adult, it will fall back to the more restricted “under 18” experience, which blocks sexual content and “may involve law enforcement to ensure safety.” In a separate blog post viewed by Ars Technica, OpenAI CEO Sam Altman stated that in certain countries, the system might also request identification to confirm a user’s age.
“We think this is a worthwhile tradeoff, even though we are aware that it compromises adult privacy,” Altman wrote. ChatGPT’s stated policy already prohibits users under the age of 13; OpenAI says it is building an age-appropriate experience for those between 13 and 17.
Altman also addressed the privacy issue, a major concern in countries and jurisdictions that increasingly demand identification before allowing adults to view pornography or other contentious material. “We are creating cutting-edge security features to guarantee your data is private, even from OpenAI employees,” Altman added. However, it appears that OpenAI’s own systems will decide whether to make exceptions: human moderators might monitor and review “potential serious misuse,” such as plots to harm others, threats to someone’s life, or “a potential massive cybersecurity incident.”
The increasing prevalence of ChatGPT and other large language model services has brought greater scrutiny of their use from a wide range of perspectives. The phenomenon known as “AI psychosis” seems to occur when users interact with an LLM as if it were human, and the typically permissive design of LLMs lets them reinforce a recurring, escalating cycle of delusion and potential harm. The parents of a 16-year-old Californian who killed himself sued OpenAI last month for wrongful death. Logs of the teen’s conversations with ChatGPT, which have been verified as authentic, include instructions for tying a noose and what appear to be encouragement and support for his decision to hang himself.
It’s merely the latest in a string of suicides and mental health crises that appear to have been directly caused or made worse by interactions with “artificial intelligence” tools like ChatGPT and Character.AI. Both the parents in the case above and representatives of OpenAI appeared before the US Senate earlier this week, and the Federal Trade Commission is investigating chatbot systems from OpenAI, Character.AI, Meta, Google, and xAI (Elon Musk’s company, now the official owner of X, formerly Twitter) for possible risks associated with AI chatbots.
Concerns about the risks of LLM systems continue to surface as nations compete for a share of the more than $1 trillion in American investment across various AI sectors. But with so much money floating around, the default stance so far appears to have been to “move fast and break things.” Emerging safeguards will be difficult to balance with user privacy. “We understand that these principles are at odds, and not everyone will agree with how we are resolving that conflict,” Altman added.