Spawning Aims to Create More Ethical Training Datasets for AI

Jordan Meyer and Mathew Dryhurst founded Spawning AI to develop tools to help artists exert more control over the online use of their work. Their latest project, called Source.Plus, aims to curate “non-infringing” media for training AI models.

The Source.Plus project’s first initiative is a dataset of nearly 40 million public domain images and images under Creative Commons’ CC0 license, which allows creators to waive nearly all legal claims to their works. Meyer claims that Source.Plus’ dataset, while significantly smaller than some other training datasets for generative AI, is already “high quality” enough to train a state-of-the-art image generation model.

“With Source.Plus, we are building a universal opt-in platform,” said Meyer. “Our goal is to make it easy for rights holders to offer their media for generative AI training – on their own terms – and to enable developers to seamlessly integrate that media into their training workflows.”

Rights management

The debate surrounding the ethics of training generative AI models, particularly art-generating models like Stability AI’s Stable Diffusion and OpenAI’s DALL-E 3, continues unabated—and has massive implications for artists, however the dust eventually settles.

Generative AI models “learn” to produce their outputs, such as photorealistic art, by training on large amounts of relevant data—in this case, images. Some developers of these models argue that the fair use doctrine gives them the right to scrape data from public sources, regardless of the copyright status of that data. Others have tried to play by the rules, compensating content owners, or at least crediting them, for their contributions to training datasets.

Meyer, Spawning’s CEO, believes the industry has yet to settle on the best course of action.

“AI training often defaulted to using the simplest data available – which were not always the fairest or most responsible sources,” he said in an interview with TechCrunch. “Artists and rights holders had little control over how their data was used for AI training, and developers didn’t have high-quality alternatives that made it easy to respect data rights.”

Source.Plus, available in limited beta, builds on Spawning’s existing art provenance and rights management tools.

In 2022, Spawning created HaveIBeenTrained, a website that allows creators to opt out of the training datasets used by vendors that partner with Spawning, including Hugging Face and Stability AI. After raising $3 million in venture capital from investors including True Ventures and Seed Club Ventures, Spawning launched ai.txt, a way for websites to set “permissions” for AI, and Kudurru, a system to defend against data-scraping bots.

Source.Plus is Spawning’s first attempt to build a media library – and manage that library in-house. The initial image dataset, PD/CC0, can be used for commercial or research purposes, Meyer says.

The Source.Plus library.
Photo credits: Spawning

“Source.Plus is not just a repository for training data, but an enrichment platform with tools to support the training pipeline,” he continued. “Our goal is to have a high-quality, non-infringing CC0 dataset available within a year that can support a powerful baseline AI model.”

Companies like Getty Images, Adobe, Shutterstock and AI startup Bria claim to use only fairly sourced data for model training. (Getty even goes so far as to call its generative AI products “commercially safe.”) But Meyer says Spawning wants to “raise the bar” for fair data sourcing.

Source.Plus filters images by “opt-outs” and other preferences expressed by artists, and displays provenance information about where images came from and how they were sourced. It also excludes images that are not licensed under CC0, including those under the Creative Commons BY 1.0 license, which requires attribution. And Spawning says it vets for copyright violations from sources where someone other than the creator is responsible for indicating a work’s copyright status, such as Wikimedia Commons.
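For a sense of what that filtering step amounts to in practice, here is a minimal sketch in Python; the record fields and license identifiers are illustrative assumptions, not Spawning’s actual schema or pipeline.

```python
# Illustrative sketch: keep only images whose stated license waives all claims
# (public domain or CC0) and whose creators have not opted out of AI training.
# The record fields below are assumptions, not Spawning's actual data model.

ALLOWED_LICENSES = {"CC0-1.0", "Public Domain"}

def filter_dataset(records):
    kept = []
    for record in records:
        if record.get("license") not in ALLOWED_LICENSES:
            continue  # excludes e.g. CC BY 1.0, which requires attribution
        if record.get("creator_opted_out", False):
            continue  # honor opt-out preferences registered by the artist
        if record.get("provenance_source") is None:
            continue  # drop images whose origin cannot be traced
        kept.append(record)
    return kept

if __name__ == "__main__":
    sample = [
        {"license": "CC0-1.0", "creator_opted_out": False, "provenance_source": "Wikimedia Commons"},
        {"license": "CC-BY-1.0", "creator_opted_out": False, "provenance_source": "Wikimedia Commons"},
        {"license": "CC0-1.0", "creator_opted_out": True, "provenance_source": "Flickr Commons"},
    ]
    print(len(filter_dataset(sample)))  # only the first record survives
```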

“We carefully reviewed the stated licenses of the images we collected and excluded any questionable licenses – a step that many ‘fair’ datasets do not take,” Meyer said.

In the past, both public and commercial training datasets have been contaminated by problematic images, including violent and pornographic images as well as sensitive personal images.

The operators of the LAION dataset were forced to take a library offline after reports uncovered medical records and depictions of child sexual abuse. Just this week, a study by Human Rights Watch found that one of the LAION repositories contained the faces of Brazilian children without their consent or knowledge. Additionally, Adobe Stock, Adobe’s stock media library that the company uses to train its generative AI models, including the art-generating Firefly Image model, was found to contain AI-generated images from competitors like Midjourney.

Artwork in the Source.Plus gallery.
Photo credits: Spawning

Spawning’s solution is a set of classifier models trained to detect nudity, gore, personally identifiable information and other unwanted content in images. Since no classifier is perfect, Spawning plans to give users the ability to “flexibly” filter the Source.Plus dataset by adjusting the classifiers’ detection thresholds, Meyer says.
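As a rough illustration of threshold-based filtering, the sketch below checks per-category classifier scores against user-adjustable limits; the categories, scores and threshold values are assumptions for illustration, not Spawning’s actual classifiers.

```python
# Illustrative sketch: filter images using per-category classifier scores and
# user-adjustable thresholds. Categories and numbers are assumptions; Spawning's
# real classifiers and default thresholds are not public.

DEFAULT_THRESHOLDS = {"nudity": 0.5, "gore": 0.5, "pii": 0.3}

def passes_filters(scores, thresholds=DEFAULT_THRESHOLDS):
    """Return True if every category score stays below its threshold."""
    return all(scores.get(category, 0.0) < limit
               for category, limit in thresholds.items())

# A stricter user lowers the thresholds; a more permissive one raises them.
strict = {"nudity": 0.2, "gore": 0.2, "pii": 0.1}

image_scores = {"nudity": 0.05, "gore": 0.01, "pii": 0.25}
print(passes_filters(image_scores))          # True under the defaults
print(passes_filters(image_scores, strict))  # False: the PII score exceeds 0.1
```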

“We employ moderators to verify data ownership,” Meyer added. “We also have built-in remediation features that allow users to flag offensive or potentially infringing works and track how that data has been used.”

Compensation

Most programs designed to compensate creators for their contributions to training data for generative AI have not fared particularly well. Some programs rely on opaque metrics to calculate payouts to creators, while others pay out amounts that artists consider unreasonably low.

Take Shutterstock, for example. The stock library, which has tens of millions of dollars in deals with AI vendors, pays into a “contribution fund” for artwork it uses to train its generative AI models or for licenses to third-party developers. But Shutterstock isn’t transparent about what artists can expect, nor does it allow them to set their own prices and terms. One third-party estimate puts earnings at $15 for 2,000 images, not exactly an earth-shattering sum.

Once Source.Plus leaves beta later this year and expands to datasets beyond PD/CC0, it will take a different tack from other platforms, allowing artists and rights holders to set their own prices per download. Spawning will charge a fee, but only a flat amount — a “tenth of a penny,” Meyer says.

Customers can also choose to pay Spawning $10 per month – plus the usual per-image download fee – for Source.Plus Curation, a subscription that lets them privately manage image collections, download the dataset up to 10,000 times per month, and get early access to new features like “premium” collections and data enrichment.

Spawning Source.Plus
Photo credits: Spawning

“We will provide guidance and recommendations based on current industry standards and internal metrics, but ultimately the dataset contributors will decide what is worthwhile for them,” Meyer said. “We deliberately chose this pricing model to give artists the lion’s share of revenue and allow them to set their own terms for participation. We believe this revenue split is significantly more favorable for artists than the more common percentage revenue split and will result in higher payouts and greater transparency.”
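To see why a flat per-download fee can favor artists over a percentage split, consider a hypothetical example; the $0.02 price and the 30% platform cut below are assumptions, while the $0.001 fee is the “tenth of a penny” Meyer cites.

```python
# Illustrative comparison of payout models for 10,000 downloads of an image
# priced at $0.02 by its creator. The $0.02 price and the 30% platform cut are
# hypothetical; the $0.001 flat fee is the "tenth of a penny" Meyer describes.

price_per_download = 0.02
downloads = 10_000

# Flat-fee model: the artist sets the price, the platform takes a fixed fee per download.
flat_fee = 0.001
artist_flat = (price_per_download - flat_fee) * downloads      # $190.00
platform_flat = flat_fee * downloads                           # $10.00

# A conventional percentage split for comparison (30% to the platform, assumed).
platform_share = 0.30
artist_percent = price_per_download * (1 - platform_share) * downloads  # $140.00

print(f"Flat fee model:  artist earns ${artist_flat:,.2f}")
print(f"30% split model: artist earns ${artist_percent:,.2f}")
```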

If Source.Plus is as successful as Spawning hopes, the company intends to expand beyond images to other media types, including audio and video. Spawning is in talks with unnamed companies to make their data available on Source.Plus. And, Meyer said, Spawning could build its own generative AI models using data from the Source.Plus datasets.

“We hope that rights holders who want to participate in the generative AI economy will have the opportunity to do so and receive fair compensation for doing so,” Meyer said. “We also hope that artists and developers who have been conflicted about engaging with AI will have the opportunity to do so in a way that is respectful to other creatives.”

Spawning certainly has a niche to carve out for itself here. Source.Plus seems to be one of the most promising attempts to involve artists in the generative AI development process – and let them share in the profits of their work.

As my colleague Amanda recently wrote, the emergence of apps like the art-hosting community Cara, whose usage skyrocketed after Meta announced it could train its generative AI on content from Instagram, including artists’ content, shows that the creative community has reached a tipping point. It’s desperately seeking alternatives to companies and platforms it perceives as thieves, and Source.Plus could be one such alternative.

But even if Spawning always acts in the best interests of artists (a big if, considering Spawning is a VC-backed company), I wonder whether Source.Plus can scale as successfully as Meyer envisions. If social media has taught us anything, it’s that moderation, especially of millions of pieces of user-generated content, is an intractable problem.

We’ll find out soon enough.
