This Week in AI: OpenAI Considers Allowing AI Porn | TechCrunch

Keeping up with an industry as fast-moving as AI is a huge challenge. Until an AI can do it for you, here’s a handy roundup of the latest stories from the world of machine learning, as well as notable research and experiments that we haven’t covered on their own.

By the way, TechCrunch is planning to publish an AI newsletter soon. Stay tuned. In the meantime, we’re increasing the frequency of our semi-regular AI column, which previously appeared about twice a month, to weekly – so keep an eye out for further editions.

This week in AI, OpenAI announced that it is studying how to “responsibly” generate AI porn. Yes, you read that right. The new NSFW policy appeared in a document intended to pull back the curtain on, and collect feedback about, the instructions given to OpenAI’s models. The company said it is meant to spark a discussion about how and where it might allow explicit images and text in its AI products.

“We want to ensure that people have maximum control without violating the law or other people’s rights,” Joanne Jang, a product team member at OpenAI, told NPR. “There are creative cases where content with sexuality or nudity is important to our users.”

It’s not the first time OpenAI has signaled its willingness to wade into controversial territory. Earlier this year, Mira Murati, the company’s CTO, told the Wall Street Journal that she was “not sure” whether OpenAI would eventually allow its video generation tool Sora to be used to create adult content.

So what to make of this?

There is a future in which OpenAI opens the door to AI-generated porn and everything turns out fine. I don’t think Jang is wrong when she says that there are legitimate forms of adult artistic expression – expression that could be created using AI-powered tools.

But I’m not sure we can trust OpenAI – or any other generative AI provider – to get it right.

First, consider the rights of creators. OpenAI’s models were trained on massive amounts of public web content, some of which is undoubtedly pornographic in nature. But until relatively recently, OpenAI didn’t license any of this content – or even allow creators to opt out of training (and even then, only for certain forms of training).

It’s hard to make a living from adult content as it is, and if OpenAI were to bring AI-generated porn into the mainstream, creators would face even stiffer competition – competition built on the back of those creators’ own work, work they were never compensated for.

The other problem, in my opinion, is the fallibility of current safeguards. OpenAI and its competitors have been refining their filtering and moderation tools for years. But users keep discovering workarounds that let them misuse these companies’ AI models, apps and platforms.

Just this January, Microsoft was forced to make changes to its Designer image creation tool, which uses OpenAI models, after users found a way to create nude images of Taylor Swift. As far as text generation goes, it’s trivial to find chatbots built on supposedly “safe” models like Anthropic’s Claude 3 that willingly spit out erotica.

AI has already created a new form of sexual abuse. Elementary and high school students are using AI-powered apps to “undress” photos of their classmates without their consent. A 2021 survey conducted in the UK, New Zealand and Australia found that 14% of respondents aged 16 to 64 had been victims of deepfake images.

New laws in the US and elsewhere aim to address this. But the jury is out on whether the justice system — a justice system that already struggles to eradicate most sex crimes — can regulate an industry as fast-moving as AI.

Frankly, it’s hard to imagine OpenAI taking an approach to AI-generated porn that isn’t fraught with risk. Maybe OpenAI will reconsider its stance. Or maybe – against all odds – it will find a better way. Whatever the case, it seems we’ll find out sooner rather than later.

Here are some other notable AI stories from recent days:

  • Apple’s AI plans: Apple CEO Tim Cook revealed some details about the company’s plans to advance AI during its earnings call with investors last week. Sarah has the whole story.
  • Enterprise GenAI: The CEOs of Dropbox and Figma – Drew Houston and Dylan Field – have invested in Lamini, a startup that develops generative AI technology along with a generative AI hosting platform for enterprise organizations.
  • AI for customer service: Airbnb is rolling out a new feature that allows hosts to opt in to AI-powered suggestions to respond to guests’ questions, such as sending guests a property’s checkout guide.
  • Microsoft restricts AI use: Microsoft has reiterated its ban on US police departments using generative AI for facial recognition. Additionally, law enforcement agencies worldwide are banned from using facial recognition technology on body cameras and dash cams.
  • Money for the cloud: Alternative cloud providers like CoreWeave are raising hundreds of millions of dollars as the boom in generative AI drives demand for low-cost hardware to train and run models.
  • RAG has its limits: Hallucinations are a major problem for companies looking to integrate generative AI into their operations. Some vendors claim they can eliminate them using a technique called RAG. But those claims are greatly exaggerated.
  • Summary of the Vogels meeting: Amazon CTO Werner Vogels has open sourced a meeting summarization app called Distill. As you might expect, it relies heavily on Amazon’s products and services.