This Week in AI: Former OpenAI Employees Call for Safety and Transparency | TechCrunch

Hey guys, and welcome to TechCrunch’s first AI newsletter. It’s a thrill to type these words – this newsletter has been a long time in the making, and we’re excited to finally share it with you.

With the launch of TC’s AI newsletter, we are ending This Week in AI, the semi-regular column formerly known as Perceptron. But you can find all the analysis we published in This Week in AI, and more, including a spotlight on notable new AI models, right here.

This week, OpenAI is once again in hot water in the AI space.

A group of former OpenAI employees spoke with Kevin Roose of the New York Times about what they see as glaring safety failings within the organization. They, along with others who have left OpenAI in recent months, claim that the company isn’t doing enough to keep its AI systems from becoming potentially dangerous, and they accuse OpenAI of using heavy-handed tactics to discourage employees from sounding the alarm.

The group published an open letter on Tuesday calling on leading AI companies, including OpenAI, to provide greater transparency and protection for whistleblowers. “Unless there is effective government oversight of these companies, current and former employees are among the few people who can hold them accountable to the public,” the letter said.

Call me pessimistic, but I expect the former employees’ demands will fall on deaf ears. It’s hard to imagine a scenario in which AI companies not only agree to “foster a culture of open criticism,” as the signatories recommend, but also decide not to enforce non-disparagement clauses or retaliate against current employees who speak out.

Keep in mind that OpenAI’s Safety and Security Committee, which the company recently formed in response to initial criticism of its safety practices, is staffed by company insiders – including CEO Sam Altman. And keep in mind that Altman, who once claimed to have no knowledge of OpenAI’s restrictive non-disparagement agreements, himself signed the documents that established them.

Sure, things could change for the better at OpenAI tomorrow, but I’m not so optimistic. And even if they did, it would be hard to take the company at its word.

News

AI apocalypse: OpenAI’s AI-powered chatbot platform ChatGPT, along with Anthropic’s Claude, Google’s Gemini and Perplexity, all went down at around the same time this morning. All of the services have since been restored, but the cause of the downtime remains unclear.

OpenAI explores fusion: According to the Wall Street Journal, OpenAI is currently negotiating with fusion startup Helion Energy over a deal that would see the AI company buy large amounts of electricity from Helion to power its data centers. Altman owns a $375 million stake in Helion and sits on the company’s board of directors, but he has reportedly recused himself from the negotiations.

The cost of training data: TechCrunch takes a look at the pricey data licensing deals that are becoming increasingly common in the AI industry – deals that threaten to make AI research prohibitively expensive for smaller organizations and academic institutions.

Hateful music generators: Malicious actors are abusing AI-powered music generators to create homophobic, racist and propagandistic songs – and publishing guides that tell others how to do the same.

Cash for Cohere: Reuters reports that Cohere, an enterprise-focused generative AI startup, has raised $450 million from Nvidia, Salesforce Ventures, Cisco and others in a new tranche that values Cohere at $5 billion. Sources familiar with the matter tell TechCrunch that Oracle and Thomvest Ventures – both returning investors – also participated in the round, which was left open.

Research paper of the week

In a 2023 research paper titled “Let’s Verify Step by Step,” which OpenAI recently highlighted on its official blog, OpenAI scientists claimed to have fine-tuned the startup’s general-purpose generative AI model, GPT-4, to achieve better-than-expected performance on mathematical problems. The approach could make generative models less prone to going off the rails, say the paper’s co-authors – but they point out several caveats.

In the paper, the co-authors describe how they trained reward models to detect hallucinations, or cases where GPT-4 got its facts and/or answers to math problems wrong. (Reward models are specialized models that evaluate the outputs of other AI models – in this case, math-related outputs from GPT-4.) The reward models “rewarded” GPT-4 each time it got a step of a math problem right, an approach the researchers call “process supervision.”
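
To make the distinction concrete, here’s a minimal sketch in Python – emphatically not OpenAI’s code – of how a step-level “process” reward differs from an “outcome” reward that only checks the final answer. The correctness flags below are stand-ins for judgments that, in the paper’s setup, come from a trained reward model.

```python
# Minimal sketch contrasting process supervision with outcome supervision.
# The boolean flags stand in for step-level judgments that would come from
# a learned reward model in the actual setup, not from hand labels.

from typing import List, Tuple

Step = Tuple[str, bool]  # (step text, whether the step is judged correct)

def process_supervised_reward(steps: List[Step]) -> float:
    """Process supervision: every individual step earns (or loses) reward."""
    if not steps:
        return 0.0
    return sum(1.0 for _, ok in steps if ok) / len(steps)

def outcome_supervised_reward(steps: List[Step]) -> float:
    """Outcome supervision, for contrast: only the final answer is checked."""
    return 1.0 if steps and steps[-1][1] else 0.0

# A toy solution whose final answer happens to match the reference even
# though one intermediate step is wrong.
solution: List[Step] = [
    ("48 / 2 = 24", True),
    ("24 + 10 = 35", False),  # arithmetic slip mid-solution
    ("35 - 11 = 24", True),   # final answer matches the reference anyway
]

print(process_supervised_reward(solution))  # ~0.67: the bad step is penalized
print(outcome_supervised_reward(solution))  # 1.0: the bad step goes unnoticed
```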

The researchers say that process supervision improved GPT-4’s accuracy on math problems compared with previous techniques for “rewarding” models – at least in their benchmark tests. But they admit it’s not perfect; GPT-4 still got problem steps wrong. And it’s unclear how the form of process supervision the researchers studied generalizes beyond the math domain.

Model of the week

Weather forecasting may not feel like a science (at least when you get rained on, as I just did), but that’s because it’s all about probabilities, not certainties. And what better way to calculate probabilities than with a probabilistic model? We’ve already seen AI applied to weather forecasting on timescales ranging from hours to centuries, and now Microsoft is getting in on the act. The company’s new Aurora model moves the ball forward in this rapidly evolving area of the AI world, offering global-scale forecasts at ~0.1° resolution (think grid cells on the order of 10 km on a side).

Photo credits: Microsoft
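
As a quick sanity check on that resolution figure, here’s a rough back-of-the-envelope conversion – my own approximation, not Microsoft’s numbers – using the rule of thumb that one degree of latitude spans about 111 km.

```python
# Rough conversion of ~0.1 degree grid resolution into kilometers.
# Approximation only: 1 degree of latitude ~ 111 km; east-west extent
# shrinks with the cosine of the latitude.

import math

KM_PER_DEG_LAT = 111.0
resolution_deg = 0.1
latitude_deg = 45.0  # a representative mid-latitude

north_south_km = resolution_deg * KM_PER_DEG_LAT
east_west_km = north_south_km * math.cos(math.radians(latitude_deg))

print(f"~{north_south_km:.0f} km north-south, ~{east_west_km:.0f} km east-west "
      f"at {latitude_deg:.0f} degrees latitude")
# -> roughly an 11 km x 8 km cell, i.e. on the order of 10 km on a side
```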

Trained on over a million hours of weather and climate simulations (no real weather? Hmm…) and optimized for a number of interesting tasks, Aurora outperforms traditional numerical forecasting systems by several orders of magnitude. Even more impressively, it beats Google DeepMind’s GraphCast at its own game (though Microsoft picked the field), providing more accurate estimates of weather conditions on the one- to five-day scale.

Of course, Microsoft isn’t the only big company with a stake here – Google is in the mix, too. Both are vying for your online attention by trying to give you the most personalized web and search experience possible. Accurate, efficient first-party weather forecasts will play an important role in that, at least until we stop going outside.

Grab bag

In a post published in Palladium last month, Avital Balwit, chief of staff at AI startup Anthropic, said the next three years might be the last in which she and many knowledge workers have to work, thanks to the rapid advances in generative AI. This should be a comfort rather than a cause for fear, she says, because it could “[lead to] a world in which people’s material needs are met but they do not have to work.”

“A renowned AI researcher once told me that he was practicing for [this inflection point] by taking up activities he is not particularly good at: jiu-jitsu, surfing, and so on, and enjoying what he is doing even if he is not excellent,” writes Balwit. “In this way, we can prepare ourselves for our future, in which we will have to do things for pleasure rather than necessity, in which we will no longer be the best at them but will still have to choose how we spend our days.”

That is certainly the “glass half full” view – but I cannot claim that I share it.

If generative AI replaces most knowledge workers within three years (which seems unrealistic to me given the many unresolved technical problems of AI), we could well see an economic collapse. Knowledge workers make up a large part of the workforce and tend to be high earners – and therefore big spenders. They drive the wheels of capitalism.

Balwit points to universal basic income and other large-scale social safety nets, but I don’t have much confidence that countries like the US – which can’t even pass basic federal AI legislation – will implement universal basic income programs anytime soon.

With any luck I’m wrong.
