Meta to stop its AI chatbots from talking to teens about suicide


Meta said it will introduce more guardrails to its artificial intelligence (AI) chatbots - including blocking them from talking to teens about suicide, self-harm and eating disorders.

It comes two weeks after a US senator launched an investigation into the tech giant after notes in a leaked internal document suggested its AI products could have "sensual" chats with teenagers.

The company described the notes in the document, obtained by Reuters, as erroneous and inconsistent with its policies which prohibit any content sexualising children.

But it now says it will make its chatbots direct teens to expert resources rather than engage with them on sensitive topics such as suicide.

"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating," a Meta spokesperson said.

The firm told tech news publication TechCrunch on Friday it would add more guardrails to its systems "as an extra precaution" and temporarily limit chatbots teens could interact with.

But Andy Burrows, head of the Molly Rose Foundation, said it was "astounding" Meta had made chatbots available that could potentially place young people at risk of harm.

"While further safety measures are welcome, robust safety testing should take place before products are put on the market - not retrospectively when harm has taken place," he said.

"Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and Ofcom should stand ready to investigate if these updates fail to keep children safe."

Meta said the updates to its AI systems are in progress. It already places users aged 13 to 18 into "teen accounts" on Facebook, Instagram and Messenger, with content and privacy settings which aim to give them a safer experience.

It told the BBC in April these would also allow parents and guardians to see which AI chatbots their teen had spoken to in the last seven days.

The changes come amid concerns over the potential for AI chatbots to mislead young or vulnerable users.

A California couple recently sued ChatGPT-maker OpenAI over the death of their teenage son, alleging its chatbot encouraged him to take his own life.

The lawsuit came after the company announced changes to promote healthier ChatGPT use last month.

"AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress," the firm said in a blog post.

Meanwhile, Reuters reported on Friday that Meta's AI tools for creating chatbots had been used by some - including a Meta employee - to produce flirtatious "parody" chatbots of female celebrities.

Among celebrity chatbots seen by the news agency were some using the likeness of artist Taylor Swift and actress Scarlett Johansson.

Reuters said the avatars "often insisted they were the real actors and artists" and "routinely made sexual advances" during its weeks of testing them.

It said Meta's tools also permitted the creation of chatbots impersonating child celebrities and, in one case, generated a photorealistic, shirtless image of one young male star.

Several of the chatbots in question were later removed by Meta, it reported.

"Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," a Meta spokesperson said.

They added that its AI Studio rules forbid "direct impersonation of public figures".
