OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter, and more invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."...