Technology Leaders Once Called for AI Regulation. Now the Message Is “Slow Down”

The other night I attended a press dinner hosted by a company called Box. Other guests included the leaders of two data-focused companies, Datadog and MongoDB. Typically, executives are on their best behavior at these soirées, especially when the discussion is on the record, as this one was. So I was startled when Box CEO Aaron Levie told us that he had a hard stop at dessert because he was flying to Washington, DC that evening. He was on his way to a special interest event called TechNet Day, where Silicon Valley meets with dozens of members of Congress to determine what the (uninvited) public will have to live with. And what did he want from any AI legislation? “As little as possible,” Levie replied. “I will single-handedly be responsible for stopping the government.”

He was joking about that. Sort of. He went on to say that while it makes sense to regulate clear abuses of AI like deepfakes, it is still far too early to consider restrictions such as forcing companies to submit large language models to government-approved AI cops, or scanning chatbots for things like bias or the ability to hack real-world infrastructure. He pointed to Europe, which has already adopted restrictions on AI, as an example of what not to do. “What Europe is doing is quite risky,” he said. “There is a view in the EU that if you regulate first, you create an atmosphere of innovation. That has been empirically proven to be false.”

Levie’s comments contradict a standard position among Silicon Valley’s AI elite, like Sam Altman: Yes, regulate us! But Levie finds that the consensus falls apart when it comes to what exactly the laws should say. “We as a tech industry don’t know what we’re actually asking for,” Levie said. “I’ve never been to a dinner with more than five AI people where there was a single agreement on how to regulate AI.” Not that it matters much, in his view: Levie believes that dreams of a comprehensive AI law are doomed to failure. “The good news is that the US could never be coordinated in this way. There simply won’t be an AI law in the US.”

Levie is known for his irreverent garrulousness. But in this case, he is simply more candid than many of his colleagues, whose “please regulate us” stance is a kind of sophisticated rope-a-dope. TechNet Day’s only public event, at least as far as I could tell, was a livestreamed panel discussion on AI innovation whose participants included Kent Walker, Google’s president of global affairs, and Michael Kratsios, the most recent US chief technology officer and now a senior executive at Scale AI. The view among these panelists was that the government should focus on protecting US leadership in the field. While they acknowledged that the technology poses risks, they argued that existing laws largely cover the potential dangers.

Google’s Walker seemed particularly concerned that some states are developing their own AI legislation. “In California alone, there are 53 different AI bills pending in the legislature today,” he said, and he wasn’t bragging. Walker, of course, knows that this Congress can barely keep the government itself afloat, and the prospect of both chambers successfully juggling this hot potato in an election year is as unlikely as Google rehiring the eight authors of the Transformer paper.

Congress does have AI bills pending, and they keep coming, some perhaps less meaningful than others. This week, Rep. Adam Schiff, a California Democrat, introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It requires large language models to submit to the Copyright Office “a sufficiently detailed summary of all copyrighted works used…in the training dataset.” It is not clear what “sufficiently detailed” means. Would it be okay to say, “We just scraped the open web”? Schiff’s staff told me the bill adopts a measure from the EU’s AI Act.
