This Week in AI: OpenAI and Publishers Partner for Purpose | TechCrunch

Keeping up with an industry as fast-moving as AI is a big challenge. Until an AI can do it for you, here’s a handy roundup of the latest developments from the world of machine learning, as well as notable research and experiments that we haven’t covered separately.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re increasing the frequency of our semi-regular AI column, which used to appear about twice a month, to weekly – so keep an eye out for more issues.

This week in AI, OpenAI announced that it has reached an agreement with News Corp, the news publishing giant, to train OpenAI-developed generative AI models on articles from News Corp brands, including The Wall Street Journal, Barron’s and MarketWatch. The agreement, which the companies describe as “multi-year” and “historic,” also gives OpenAI the right to display content from News Corp titles in apps like ChatGPT in response to certain questions – presumably in cases where the answers come in whole or in part from News Corp publications.

Sounds like a win for both sides, right? News Corp is getting a cash injection for its content—reportedly over $250 million—at a time when the outlook for the media industry is even bleaker than usual. (Generative AI hasn’t helped matters, as it threatens to severely reduce publications’ referral traffic.) Meanwhile, OpenAI, which is battling fair-use disputes with copyright holders on multiple fronts, has less to worry about when it comes to a costly court battle.

But the devil is in the details. Note that the News Corp deal has an end date – like all of OpenAI’s content licensing deals.

This in and of itself is not bad faith on OpenAI’s part. Perpetual licenses are a rarity in media, since both parties generally want the chance to renegotiate the contract. But it is a little suspicious in light of recent comments from OpenAI CEO Sam Altman about the declining importance of training data for AI models.

In an appearance on the “All-In” podcast, Altman said he “definitely [doesn’t] think there will be an arms race for [training] data,” because “when models get smart enough, at some point, it shouldn’t be about more data – at least not for training.” Elsewhere, he told James O’Donnell of MIT Technology Review that he’s “optimistic” that OpenAI – and/or the broader AI industry – “will figure out a way out of [needing] more and more training data.”

Models aren’t that “smart” yet, which is presumably what’s driving OpenAI to experiment with synthetic training data and scour the far reaches of the internet – and YouTube – for organic sources. But suppose models do one day stop needing much additional data to improve by leaps and bounds. Where does that leave the publishers, especially once OpenAI has already mined their entire archives?

My point is that the publishers – and the other content owners OpenAI has partnered with – appear to be little more than short-term partners of convenience to OpenAI. By striking licensing deals, OpenAI effectively neutralizes a legal threat – at least until the courts decide how fair use applies in the context of AI training – and gets to celebrate a PR win. The publishers get much-needed capital. And work on AI that could seriously undermine those publishers continues.

Here are some other notable AI stories from the last few days:

  • Spotify’s AI DJ: Spotify’s addition of the AI DJ feature, which serves users personalized song selections, was the company’s first step into an AI future. Now Spotify is developing an alternate version of that DJ that will speak Spanish, Sarah writes.
  • Meta’s AI advisory board: Meta announced the creation of an AI advisory board on Wednesday. But there’s one big problem: it’s made up entirely of white men. That seems a little tone-deaf, considering marginalized groups are the ones most likely to suffer the consequences of AI technology’s shortcomings.
  • FCC proposes AI disclosures: The Federal Communications Commission (FCC) has proposed a rule requiring the disclosure – but not the banning – of AI-generated content in political ads. Devin has the full story.
  • Answer calls with your voice: Truecaller, the widely known caller ID service, will soon let customers use its AI-powered assistant to answer calls in their own voice, thanks to a new partnership with Microsoft.
  • Humane is considering a sale: Humane, the company behind the much-hyped Ai Pin, whose launch last month was a mixed bag, is looking for a buyer. The company has reportedly set a price tag of between $750 million and $1 billion, and the sale process is still in its early stages.
  • TikTok relies on generative AI: TikTok is the latest tech company to integrate generative AI into its ad business. On Tuesday, the company announced that it is launching a new TikTok Symphony AI suite for brands. The tools will help marketers write scripts, produce videos and enhance their current ad assets, Aisha reports.
  • AI Summit in Seoul: At an AI security summit in Seoul, South Korea, government officials and AI industry leaders agreed to apply basic security measures in this rapidly evolving field and build an international security research network.
  • Microsoft’s AI PCs: In two keynotes during its annual Build developer conference this week, Microsoft unveiled a new line of Windows computers (and Surface laptops) it calls Copilot+ PCs, as well as generative AI-powered features like Recall, which helps users find apps, files and other content they’ve viewed in the past.
  • OpenAI’s voice debacle: OpenAI is removing one of the voices from ChatGPT’s text-to-speech feature. Users found the voice, named Sky, eerily similar to Scarlett Johansson’s (she has played AI characters before) – and Johansson herself issued a statement saying she had hired legal counsel to inquire about the Sky voice and get precise details about its development.
  • British autonomous driving law: The UK’s self-driving car regulations are now official after receiving Royal Assent, the final rubber stamp that legislation must go through before coming into force.

More machine learning

This week, we have some interesting AI research for you. Prolific University of Washington researcher Shyam Gollakota strikes again, this time with a pair of noise-cancelling headphones that you can set to block out everything except the one person you want to listen to. While wearing the headphones, you press a button while looking at that person, and the system samples the voice coming from that specific direction. That sample feeds a sound-exclusion pipeline, so background noise and other voices are filtered out.

The researchers, led by Gollakota and several graduate students, call the system Target Speech Hearing and unveiled it at a conference in Honolulu last week. It’s useful both as an accessibility tool and as an everyday option, and one can imagine one of the major technology companies copying the feature for the next generation of high-end headphones.
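The paper’s actual pipeline enrolls the target speaker with a neural network, but the directional-capture step that kicks things off can be illustrated with a much simpler classical technique: delay-and-sum beamforming, where each microphone channel is time-shifted to align sound arriving from one direction before averaging. The sketch below is an illustrative toy with synthetic data, not the system described in the paper:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Align and average microphone channels toward one arrival direction.

    signals:       (n_mics, n_samples) array of recorded audio
    mic_positions: (n_mics, 2) microphone coordinates in meters
    direction:     unit 2-D vector pointing toward the source
    fs:            sample rate in Hz; c is the speed of sound in m/s
    """
    delays = mic_positions @ direction / c        # per-mic arrival delay, seconds
    delays -= delays.min()                        # make all delays non-negative
    shifts = np.round(delays * fs).astype(int)    # integer sample shifts
    n = signals.shape[1] - shifts.max()           # common aligned length
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)                   # coherent sum boosts the target

# Toy demo: a 2-mic array, source arriving along the x-axis. The second mic
# (0.5 m away) receives the same waveform ~23 samples later at 16 kHz.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
shift = int(round(0.5 / 343.0 * 16000))
sigs = np.zeros((2, 1000 + shift))
sigs[0, :1000] = x
sigs[1, shift:] = x
out = delay_and_sum(sigs, np.array([[0.0, 0.0], [0.5, 0.0]]),
                    np.array([1.0, 0.0]), 16000)
```

Averaging after alignment reinforces the voice from the chosen direction while uncorrelated noise and off-axis speakers partially cancel; the paper’s neural approach goes much further, tracking the enrolled voice even as the wearer looks away.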

EPFL chemists were apparently tired of performing 18 tasks by hand, because they trained a model called ChemCrow to do those tasks instead – not wet-lab tasks like titrating and pipetting, but planning work like combing through the literature and mapping out reaction chains. ChemCrow doesn’t do everything for researchers, of course; it acts more like a natural language interface to the whole toolset, invoking a search or calculation tool as needed.

Photo credits: EPFL

The lead author of the paper introducing ChemCrow said the system is “analogous to a human expert with access to a calculator and databases” – i.e., a PhD student – so hopefully that student can work on something more important, or at least skip the boring parts. It reminds me a bit of Coscientist. As for the name, it’s “because crows are known to be good with tools.” Fair enough!
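The “natural language interface to a toolset” pattern is easy to picture as a dispatch loop: a language model decides which tool to call next, the harness executes it, and the result is fed back in. Here is a minimal sketch of that loop; the tool names, their stubbed outputs, and the scripted step sequence are all hypothetical, not ChemCrow’s actual API (in the real system an LLM emits each tool call after seeing the previous observation):

```python
from typing import Callable, Dict, List, Tuple

# Stubbed stand-ins for real tools (literature search, synthesis planner, ...).
def search_literature(query: str) -> str:
    return f"[3 papers found for '{query}']"

def plan_synthesis(target: str) -> str:
    return f"[2-step route proposed for {target}]"

TOOLS: Dict[str, Callable[[str], str]] = {
    "search_literature": search_literature,
    "plan_synthesis": plan_synthesis,
}

def run_agent(task: str, steps: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Execute a sequence of (tool, argument) decisions for a task.

    Here the decisions are scripted; in an agent like ChemCrow, a language
    model would produce each (tool, argument) pair from the task plus the
    observations accumulated so far.
    """
    transcript = []
    for tool_name, arg in steps:
        observation = TOOLS[tool_name](arg)   # execute tool, capture output
        transcript.append((tool_name, observation))
    return transcript

transcript = run_agent("plan an aspirin synthesis",
                       [("search_literature", "aspirin synthesis"),
                        ("plan_synthesis", "aspirin")])
```

The appeal of the design is that the model never needs to *be* the calculator or the database; it only needs to know when to reach for each one.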

Disney Research roboticists are working hard to make their creations’ movements more lifelike without having to hand-animate every possible motion. A new paper, which they’ll present at SIGGRAPH in July, describes a combination of procedurally generated animation and an artist-facing interface for tweaking those animations – all of it running on a real bipedal robot (a Groot).

The idea is that an artist can define a style of locomotion – springy, stiff, unstable – and the engineers don’t have to implement every detail, just make sure it stays within certain parameters. The movement can then be performed on the fly, with the proposed system improvising the exact motions. Expect to see this at Disney World in a few years…
