Anthropic Now Allows Children to Use Its AI Technology – Within Limits | TechCrunch

AI startup Anthropic is changing its policies to allow minors to use its generative AI systems – at least under certain circumstances.

As announced in a post on the company’s official blog on Friday, Anthropic will begin allowing teenagers and young people to use third-party apps (but not necessarily its own apps) built on its AI models, as long as the developers of those apps implement certain safety features and disclose to users which Anthropic technologies they use.

In a support article, Anthropic lists several safety measures that developers building AI-powered apps for minors should include, such as age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company says it may also provide “technical measures” intended to tailor the experience of AI products to minors, such as a “child safety system prompt” that developers serving minors would be required to implement.

Developers using Anthropic’s AI models must also comply with “applicable” child safety and privacy regulations, such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law protecting the privacy of children under 13, according to Anthropic. Anthropic plans to review apps for compliance “periodically,” suspend or terminate the accounts of those who repeatedly violate the compliance requirement, and require developers to “clearly state” on publicly accessible websites or documentation that they are in compliance.

“There are certain use cases where AI tools can provide significant benefits to younger users, such as exam preparation or tutoring assistance,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to integrate our API into their products for minors if they agree to implement certain security features and disclose to their users that their product uses an AI system.”

Anthropic’s policy change comes as children and teens increasingly use generative AI tools to help with not only school work but also personal problems, and as rival generative AI providers – including Google and OpenAI – explore use cases for children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on child-friendly AI policies. Meanwhile, Google has made its chatbot Bard (now renamed Gemini) available in English to teens in select countries.

According to a survey by the Center for Democracy and Technology, 29% of children say they have used generative AI like OpenAI’s ChatGPT to deal with anxiety or mental health issues, 22% for problems with friends and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI apps – particularly ChatGPT – over fears of plagiarism and misinformation. Since then, some have lifted their bans. But not everyone is convinced of generative AI’s positive potential, pointing to surveys such as one from the UK Safer Internet Centre, which found that more than half of children (53%) report having seen people their own age use generative AI negatively – for example, to create believable false information or images designed to upset someone (including pornographic deepfakes).

Calls for guidelines for children’s use of generative AI are increasing.

The UN Educational, Scientific and Cultural Organization (UNESCO) pushed late last year for governments to regulate the use of generative AI in education, including imposing age limits for users and guidelines on user data protection and privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” UNESCO Director-General Audrey Azoulay said in a press release. “Without public engagement and the necessary protections and regulations from governments, it cannot be integrated into education.”
