OpenAI’s GPT Store is Full of Promises – and Spam

Screenshot by Lance Whitney/ZDNET

One of the benefits of a ChatGPT Plus subscription is the ability to access the GPT Store, which is now home to more than 3 million custom versions of ChatGPT bots. But among all the useful and helpful GPTs that follow the rules, there are a multitude of bots that are considered spam.

Also: ChatGPT vs. ChatGPT Plus: Is the subscription fee worth it?

Based on its own investigation into the store, TechCrunch has found a variety of GPTs that violate copyright regulations, attempt to bypass AI content detectors, impersonate public figures, and use jailbreaking to circumvent OpenAI’s GPT policy.

Several of these GPTs appear to use characters and content from popular movies, TV shows, and video games without permission, according to TechCrunch. One GPT creates monsters modeled on those in the Pixar film "Monsters, Inc." Another takes you on a text-based adventure through the "Star Wars" universe. Still others let you chat with trademarked characters from various franchises.

One of the custom GPT rules outlined in OpenAI's usage guidelines specifically prohibits the "use of third-party content without the necessary permissions." Under the Digital Millennium Copyright Act, OpenAI itself would not be liable for copyright infringement by GPT creators, but it would have to remove infringing content upon request.

According to TechCrunch, the GPT Store is also full of GPTs that brag about being able to beat AI content detectors, including detectors sold to schools and educators by third-party anti-plagiarism companies. One GPT claims to be undetectable by tools such as Copyleaks. Another promises to "humanize" its content to slip past AI-based detection systems.

Also: The Ethics of Generative AI: How We Can Harness This Powerful Technology

Some of these GPTs even redirect users to premium services, including one that charges $12 a month for 10,000 words.

OpenAI’s usage guidelines prohibit “engaging in or promoting academic dishonesty.” In a statement sent to TechCrunch, OpenAI said academic dishonesty also includes GPTs that attempt to bypass academic integrity tools such as plagiarism detectors.

Imitation may be the sincerest form of flattery, but that doesn’t mean GPT creators can freely and openly imitate anyone they want. TechCrunch has found several GPTs impersonating public figures. A search of the GPT Store for names like “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” and “Barack Obama” uncovered chatbots pretending to be these people or simulating their conversational style.

Also: ChatGPT vs. Microsoft Copilot vs. Gemini: Which is the best AI chatbot?

The open question is the intent behind these imitation GPTs. Do they fall into the realm of satire and parody, or are they outright attempts to impersonate well-known people? In its usage guidelines, OpenAI states that "impersonating another person or entity without consent or legal standing" is against the rules.

Finally, TechCrunch came across several GPTs attempting to circumvent OpenAI's own rules through some form of jailbreaking. One GPT, called Jailbroken DAN (Do Anything Now), uses a prompting technique to answer requests that OpenAI's usual policies would block.

In a statement to TechCrunch, OpenAI said GPTs that aim to bypass its safeguards or break its rules violate its policies, but that GPTs attempting to steer the model's behavior in other, permissible ways are allowed.

Also: YouPro gives me access to all major premium AI chatbots for $20 per month – but there’s a catch

The GPT Store is still brand new, having officially opened in January of this year, and an influx of more than 3 million custom GPTs in that short period is a staggering number. Any store at that scale will face growing pains, especially when it comes to content moderation, which can be a difficult balancing act.

In a blog post last November announcing custom GPTs, OpenAI said it had set up new systems to review GPTs against its usage policies. The goal is to prevent people from sharing harmful GPTs, including those that contain fraudulent activity, hateful content, or adult themes. However, the company admitted that tackling GPTs that break the rules is a learning process.

"We will continue to monitor and learn how people use GPTs and update and strengthen our safety measures," OpenAI said, adding that users can report a specific GPT for violating the rules. To do so, click the GPT's name at the top of the chat window, select "Report," and then choose the reason for the report.

Also: Learn how to create your own custom chatbots with ChatGPT

Still, it looks bad for OpenAI to host so many rule-breaking GPTs, especially when the company is trying to prove its value. If this problem is of the magnitude that the TechCrunch report suggests, it’s time for OpenAI to find a solution. Or as TechCrunch put it: “The GPT Store is a mess – and if something doesn’t change soon, it may well stay that way.”

