
The World’s Leading AI Companies Are Committed to Protecting Children’s Safety Online

Leading artificial intelligence companies, including OpenAI, Microsoft, Google, Meta and others, have jointly committed to preventing their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child safety group Thorn and All Tech Is Human, a nonprofit focused on responsible technology.

The commitments from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant step in efforts to protect children from sexual abuse as generative AI unfolds.” The aim of the initiative is to prevent the creation of sexually explicit material involving children and to remove it from social media platforms and search engines. In 2023 alone, more than 104 million files of suspected child sexual abuse material were reported in the United States, Thorn says. Without collective action, generative AI is likely to worsen this problem and overwhelm law enforcement agencies that already struggle to identify real victims.

On Tuesday, Thorn and All Tech Is Human released a new paper, “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” which outlines strategies and recommendations for AI developers, search engines, social media platforms, and hosting companies to take measures that prevent generative AI from being used to harm children.

For example, one of the recommendations asks companies to carefully select the datasets used to train AI models and to avoid datasets that contain not only instances of CSAM but also adult content, because generative AI tends to combine the two concepts. Thorn is also calling on social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, which creates new AI-generated child sexual abuse material online. A flood of AI-generated CSAM would make it harder to identify real victims of child sexual abuse, the paper argues, by exacerbating the “haystack problem” — a reference to the volume of content law enforcement agencies already have to sift through.

“This project should make it clear that you don’t have to throw your hands up,” Rebecca Portnoff, vice president of data science at Thorn, told the Wall Street Journal. “We want to be able to change the course of this technology so that the existing harms of this technology fall by the wayside.”

Some companies, Portnoff said, have already agreed to separate images, videos and audio involving children from datasets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method is not foolproof – watermarks and metadata can be easily removed.
