
This Week in AI: OpenAI Moves Away From Safety | TechCrunch

Keeping up with an industry as fast-moving as AI is a major challenge. Until an AI can do it for you, here’s a handy roundup of the latest stories from the world of machine learning, as well as notable research and experiments that we didn’t cover on their own.

By the way, TechCrunch is planning to publish an AI newsletter soon. Stay tuned. In the meantime, we’re increasing the frequency of our semi-regular AI column, which previously appeared about twice a month, to weekly – so keep an eye out for further editions.

This week in the AI space, OpenAI once again dominated the news cycle (despite Google’s best efforts) with a product launch but also some palace intrigue. The company unveiled GPT-4o, its most powerful generative model to date, and just days later effectively disbanded a team dedicated to developing controls to prevent “superintelligent” AI systems from spiraling out of control.

As expected, the team’s dissolution made many headlines. Reports – including ours – suggest that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is currently more theoretical than real; it’s not clear when — or if — the tech industry will make the breakthroughs necessary to create AI capable of handling any task a human can. But this week’s reporting seems to confirm one thing: that OpenAI’s leadership — particularly CEO Sam Altman — has increasingly chosen to prioritize products over safety measures.

Altman is said to have “infuriated” Sutskever by rushing the rollout of AI-powered features at OpenAI’s first developer conference last November. And he is said to have criticized Helen Toner, director of the Georgetown Center for Security and Emerging Technologies and a former member of the OpenAI board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light – to the point that he tried to push her off the board.

Last year, for example, OpenAI let its chatbot store fill up with spam and (allegedly) scraped data from YouTube in violation of the platform’s terms of service, all while voicing a desire to have its AI generate depictions of porn and gore. Certainly, safety seems to have taken a backseat at the company – and a growing number of OpenAI safety researchers have concluded that their work would be better supported elsewhere.

Here are some other notable AI stories from recent days:

  • OpenAI + Reddit: In other OpenAI news, the company has reached an agreement with Reddit to use the social site’s data to train AI models. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google’s AI: Google held its annual I/O developer conference this week, where it debuted a ton of AI products. We’ve rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, a co-founder of Instagram and most recently co-founder of the personalized news app Artifact (which TechCrunch parent company Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He will oversee both the company’s consumer and enterprise efforts.
  • AI for children: Anthropic announced last week that it would allow developers to create kid-focused apps and tools built on its AI models — as long as they follow certain rules. By contrast, competitors such as Google prohibit their AI from being built into apps aimed at younger users.
  • At the film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from the AI, but from the more human elements.

More machine learning

AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is moving forward with a new “Frontier Safety Framework.” Essentially, it’s the organization’s strategy for identifying and, hopefully, preventing runaway capabilities – which don’t necessarily have to be AGI; it could also be a malware generator gone haywire or the like.

Photo credit: Google DeepMind

The framework consists of three steps: 1. Identify potentially harmful capabilities in a model by simulating its paths of development. 2. Evaluate models regularly to detect when they have reached known “critical capability levels.” 3. Apply a mitigation plan to prevent exfiltration (by others or by the model itself) or problematic deployment. Further details can be found here. It may sound like an obvious sequence of actions, but it’s important to formalize it; otherwise everyone will just wing it. That’s how you get the bad AI.
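
To make the idea concrete, here is a minimal sketch of what formalizing that kind of periodic capability review might look like in code. The capability names, thresholds and mitigations below are made-up placeholders for illustration, not anything from DeepMind’s actual framework.

```python
# Hypothetical sketch of formalized capability checks. Names, thresholds and
# mitigations are illustrative placeholders, not DeepMind's real framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CriticalCapabilityLevel:
    name: str          # capability category being tracked (hypothetical)
    threshold: float   # eval score at which mitigations kick in
    mitigation: str    # action to take once the threshold is reached

LEVELS = [
    CriticalCapabilityLevel("autonomy", 0.7, "restrict deployment, add human oversight"),
    CriticalCapabilityLevel("cyber_offense", 0.5, "harden weights against exfiltration"),
]

def review_model(evaluate: Callable[[str], float]) -> list[str]:
    """Run each capability eval and collect any triggered mitigations."""
    actions = []
    for level in LEVELS:
        score = evaluate(level.name)
        if score >= level.threshold:
            actions.append(f"{level.name}: {level.mitigation} (score={score:.2f})")
    return actions

# Example run with a dummy evaluator returning fixed scores.
dummy_scores = {"autonomy": 0.8, "cyber_offense": 0.3}
print(review_model(lambda name: dummy_scores[name]))
```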

A completely different risk has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on the data of a deceased person in order to provide a superficial simulation of that person. You may (as I do) find the whole concept a bit abhorrent, but if we’re careful, it could be used in grief management and other scenarios. The problem is that we are not being careful.

Photo credit: University of Cambridge / T. Hollanek

“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We now need to start thinking about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team identifies numerous scams, potential bad and good outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror predicts the future once again!

In less scary applications of AI, physicists at MIT are exploring its use as a (to them) useful tool for predicting the phase or state of a physical system, normally a statistical task that can become tedious with more complex systems. But if you train a machine learning model on the right data and ground it with some known material properties of the system, you have a much more efficient approach. Just another example of how ML is finding niches even in advanced science.
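
The article doesn’t go into the MIT group’s method, but the general recipe (train a model on configurations whose phase is already known, then let it label new ones) can be sketched on a toy system. Everything below, from the crude stand-in for spin configurations to the choice of classifier, is an illustrative assumption, not the researchers’ actual approach.

```python
# Toy sketch: classify spin configurations as ordered vs. disordered.
# The fake "Ising snapshots" and the logistic-regression classifier are
# illustrative assumptions, not the MIT group's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
L = 16  # lattice side length

def sample_configuration(ordered: bool) -> np.ndarray:
    """Crude proxy for a spin snapshot: mostly aligned spins with a few flips
    (ordered, low temperature) or fully random spins (disordered, high temperature)."""
    if ordered:
        spins = np.ones((L, L))
        spins[rng.random((L, L)) < 0.1] = -1  # flip ~10% of spins
        return spins
    return rng.choice([-1.0, 1.0], size=(L, L))

# Labeled dataset: 1 = ordered phase, 0 = disordered phase.
labels = rng.integers(0, 2, size=2000)
X = np.stack([sample_configuration(bool(lbl)).ravel() for lbl in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out phase-classification accuracy: {clf.score(X_test, y_test):.3f}")
```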

Over at CU Boulder, they’re talking about how AI can be used in disaster management. The technology can be useful for quickly predicting where resources will be needed, mapping damage, and even helping to train responders, but people are (understandably) hesitant to apply it in life-or-death scenarios.

Participants of the workshop.
Photo credit: CU Boulder

Professor Amir Behzadan is trying to move the ball forward here, saying, “Human-centered AI leads to more effective disaster response and recovery practices by fostering collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They’re still at the workshop stage, but it’s important to think carefully about these things before attempting, for example, to automate aid distribution after a hurricane.

Finally, some interesting work from Disney Research looking at diversifying the output of diffusion image generation models, which can produce similar results over and over again for some prompts. Their solution? “Our sampling strategy tempers the conditioning signal by adding planned, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition bias.” I couldn’t have put it better myself.

Photo credit: Disney Research

The result is a much wider variety of angles, settings and general appearance in the image outputs. Sometimes you want that, sometimes you don’t, but it’s nice to have the option.
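
Taking only the quoted sentence at face value, the mechanism might be sketched roughly like this; the dummy denoiser, noise schedule and scale below are my own assumptions for illustration, not Disney Research’s implementation.

```python
# Rough sketch: temper a conditioning vector with monotonically decreasing
# Gaussian noise across diffusion sampling steps. The dummy denoiser, schedule
# and noise scale are assumptions for illustration, not Disney Research's code.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x: np.ndarray, cond: np.ndarray, t: int) -> np.ndarray:
    """Stand-in for one reverse-diffusion step of a real model."""
    return x - 0.01 * (x - cond)  # dummy update pulling x toward the conditioning

def sample(cond: np.ndarray, steps: int = 50, init_noise_scale: float = 0.5) -> np.ndarray:
    x = rng.normal(size=cond.shape)  # start from pure noise
    for t in range(steps, 0, -1):
        # Noise on the conditioning vector shrinks as t -> 0, so early steps
        # explore more broadly while late steps follow the prompt closely.
        scale = init_noise_scale * (t / steps)
        noisy_cond = cond + rng.normal(scale=scale, size=cond.shape)
        x = denoise_step(x, noisy_cond, t)
    return x

conditioning = rng.normal(size=(768,))  # e.g. a text-embedding-sized vector
print(sample(conditioning)[:5])
```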
