This Week in AI: Can We Trust OpenAI (and Could We Ever)?

Keeping up with an industry as fast-moving as AI is a big challenge. Until an AI can do it for you, here’s a handy roundup of the latest developments from the world of machine learning, as well as notable research and experiments that we haven’t covered separately.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we’re increasing the frequency of our semi-regular AI column, which used to appear about twice a month, to weekly—so keep an eye out for more installments.

This week, OpenAI introduced discounted rates for nonprofit and education customers and unveiled its latest efforts to keep malicious actors from abusing its AI tools. There isn’t much to criticize there, at least not in this writer’s opinion. But I will say the flood of announcements seemed timed to counteract the bad press the company has received recently.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices of its AI-powered chatbot, ChatGPT, after users pointed out that it sounded eerily similar to Johansson’s. Johansson later released a statement saying she had hired legal counsel to ask about the voice and get exact details on how it was developed, and that she had declined repeated requests from OpenAI to license her voice for ChatGPT.

An article in The Washington Post now suggests that OpenAI wasn’t actually trying to clone Johansson’s voice and that any similarity is coincidental. But then why did OpenAI CEO Sam Altman contact Johansson and urge her to reconsider two days before a splashy demo featuring the similar-sounding voice? That’s a little suspicious.

Then there are OpenAI’s trust and safety issues.

As we reported earlier this month, OpenAI’s now-defunct Superalignment team, which was responsible for developing ways to guide and control “superintelligent” AI systems, was promised 20% of the company’s computing resources but only ever received a fraction of that. This (among other things) led to the resignation of the team’s two co-leads, Jan Leike and Ilya Sutskever, OpenAI’s former chief scientist.

Nearly a dozen safety researchers have left OpenAI in the past year; some, including Leike, have publicly voiced concerns that the company prioritizes commercial projects over safety and transparency work. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But the committee has been staffed with company insiders, including Altman, rather than outside observers. This comes as OpenAI reportedly considers abandoning its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow every day (see: its deals with news publishers). Few, if any, corporations deserve trust. But OpenAI’s market-disrupting technologies make these breaches of trust all the more troubling.

The fact that Altman himself is not exactly a model of truthfulness does not help.

When OpenAI’s aggressive tactics toward former employees came to light, including threatening them with the loss of their vested equity, or blocking share sales, unless they signed restrictive non-disclosure agreements, Altman apologized and claimed he had no knowledge of the policies. But according to Vox, Altman’s signature is on the incorporation documents that put the policies into effect.

And if Helen Toner, one of the former OpenAI board members who tried to remove Altman from his post late last year, is to be believed, Altman withheld information, misrepresented things that were happening at OpenAI, and in some cases outright lied to the board. Toner says the board learned about ChatGPT’s launch from Twitter, not from Altman; that Altman gave false information about OpenAI’s formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast OpenAI in a critical light, tried to manipulate board members into pushing her off the board.

None of this bodes well.

Here are some other notable AI stories from the last few days:

  • Voice cloning made easy: A new report from the Center for Countering Digital Hate finds that AI-powered voice cloning services make it fairly trivial to fabricate statements by politicians.
  • Google’s AI Overviews have problems: AI Overviews, the AI-generated search results that Google started rolling out more widely in Google Search earlier this month, still need improvement. The company admits as much — but claims it’s iterating quickly. (We’ll see.)
  • Paul Graham on Altman: In a series of posts on X, Paul Graham, co-founder of startup accelerator Y Combinator, pushed back on claims that Altman was pressured to step down as president of Y Combinator in 2019 because of potential conflicts of interest. (Y Combinator owns a small stake in OpenAI.)
  • xAI raises $6 billion: xAI, Elon Musk’s AI startup, has raised $6 billion in funding, giving Musk the capital to aggressively compete with rivals like OpenAI, Microsoft and Alphabet.
  • Perplexity’s new AI feature: AI startup Perplexity’s new feature, Perplexity Pages, aims to help users create reports, articles or guides in a more visually appealing format, Ivan reports.
  • AI models’ favorite numbers: Devin writes about the numbers different AI models pick when asked for a random answer. As it turns out, they have favorites, a reflection of the data they were trained on.
  • Mistral releases Codestral: Mistral, the Microsoft-backed French AI startup valued at $6 billion, has released its first generative AI model for programming, called Codestral. However, due to Mistral’s rather restrictive license, it cannot be used commercially.
  • Chatbots and data protection: Natasha writes about the European Union’s ChatGPT taskforce and how it offers a first look at untangling AI chatbot privacy compliance.
  • ElevenLabs’ sound generator: Voice cloning startup ElevenLabs introduced a new tool in February that lets users generate sound effects from text prompts.
  • Interconnects for AI chips: Tech giants including Microsoft, Google and Intel (but not Arm, Nvidia or AWS) have formed an industry group, the UALink Promoter Group, to help develop next-generation AI chip interconnect components.