Google I/O Was an AI Evolution, Not a Revolution | TechCrunch

At Google’s I/O developer conference, the company showed developers – and to some extent consumers – how it’s using AI to outperform the competition. At the event, the company presented a revamped, AI-powered search engine, an AI model with an expanded context window of 2 million tokens, AI assistants across Workspace apps like Gmail, Drive, and Docs, tools to integrate AI into developers’ apps, and even a future vision for its AI, codenamed Project Astra, that can respond to sight, sound, voice, and text in combination.

While each advance was promising in its own right, the onslaught of AI news was overwhelming. Although these major events are obviously aimed at developers, they also provide an opportunity to get end users excited about the technology. But after the deluge of news, even somewhat tech-savvy consumers may be wondering: Wait, what’s Astra again? Is this what drives Gemini Live? Is Gemini Live something like Google Lens? How is it different from Gemini Flash? Does Google actually make AI glasses or are they vaporware? What is Gemma, what is LearnLM… what are Gems? When will Gemini arrive in your inbox, your documents? How do I use these things?

If you know the answers to these, congratulations, you’re a TechCrunch reader. (If you don’t, click the links to be informed.)

Photo credit: Google

What was missing from the overall presentation, despite the enthusiasm of the individual speakers and the jubilant cheers of Google employees in the crowd, was a sense of a coming AI revolution. If AI is ultimately going to lead to a product that influences the direction of technology as profoundly as the iPhone influenced personal computing, this was not the event where it made its debut.

Rather, it showed that we are still in the very early days of AI development.

On the sidelines of the event, it felt like even Google employees knew the work wasn’t finished. When we were shown how AI could put together a study guide and quiz within moments of uploading a multi-hundred-page document – an impressive feat – we noticed that the quiz answers weren’t cited. When asked about accuracy, one employee admitted that the AI gets things mostly right, and that a future version would point to sources so people could fact-check its answers. But if you have to fact-check the answers, how reliable is an AI study guide for preparing you for an exam in the first place?

In the Astra demo, using a camera mounted above a table and connected to a large touchscreen, you can do things like play Pictionary with the AI, show it objects, ask questions about those objects, have it tell a story, and more. But despite the technological advances being impressive in their own right, how these capabilities would be used in everyday life was not readily apparent.

For example, you could ask the AI to describe objects using alliteration. In the livestreamed keynote, Astra saw a set of crayons and replied: “Creative crayons, cheerfully colored.” Nice party trick.

In a private demo, when I challenged Astra to guess the object in a scribbled drawing, it immediately and correctly identified the flower and house I had drawn on the touchscreen. When I drew a beetle – a larger circle for the body, a smaller circle for the head, small legs on the sides of the large circle – the AI stumbled. Is it a flower? No. Is it the sun? No. The employee instructed the AI to guess something alive. I added two more legs, for a total of eight. Is it a spider? Yes. A human would have seen the beetle immediately, despite my lack of artistic ability.

No, we weren’t allowed to record it. But here is a similar demo released on X.

To give you an idea of where the technology is today, Google employees didn’t allow any recordings or photos in the Astra demo room. Astra also ran on an Android smartphone, but we couldn’t see the app or hold the phone ourselves. The demos were fun, and the technology that made them possible is certainly worth exploring, but Google missed the opportunity to show how its AI technology will affect your everyday life.

When do you need to ask an AI to come up with a band name based on, say, a picture of your dog and a stuffed tiger? Do you really need AI to help you find your glasses? (Both were Astra demos from the keynote.)

Photo credit: Google demo video

This isn’t the first time we’ve seen a tech event full of demos of an advanced future with no real-world applications, or ones touting conveniences as more significant upgrades than they are. Google, for example, has also teased its AR glasses in recent years. (Skydivers wearing Google Glass were even flown into I/O over a decade ago – a project that has since been discontinued.)

After watching I/O, it seems like Google sees AI as just another means of generating additional revenue: pay for Google One AI Premium if you want the product upgrades. So maybe Google won’t be the one to make the first big breakthrough in consumer AI. As OpenAI CEO Sam Altman recently mused, OpenAI’s original idea was to develop the technology and “create all sorts of benefits for the world.”

“Instead,” he said, “it now looks like we create AI and then other people use it to create all sorts of amazing things that benefit us all.”

Google seems to be in the same boat.

Still, there were times when Google’s Astra AI seemed more promising. When it correctly identified code or suggested improvements to a system based on a diagram, it was easier to see how it could become a useful work companion. (Clippy, evolved!)

Gemini in Gmail. Photo credit: Google

There were other moments when the real-world practicality of AI became clear. Having Gemini’s AI in your inbox to summarize emails, draft replies, or list action items may help you finally reach inbox zero, or something close to it, more quickly. But can it delete your unwanted, non-spam emails, intelligently organize messages into labels, ensure you don’t miss an important message, and give you an overview of everything in your inbox that needs action as soon as you log in? Can it summarize the most important news from your email newsletters? Not quite. Not yet.

When thinking about how AI will impact the Android ecosystem – Google’s pitch to the developers in attendance – there was a feeling that even Google can’t yet claim AI will help Android lure users away from the Apple ecosystem. “When is the best time to switch from iPhone to Android?” we asked Google employees of various ranks. “This fall” was the general answer. In other words, Google’s fall hardware event, which is expected to coincide with Apple’s rollout of RCS support, an upgrade to SMS that will make Android-to-iPhone messaging more competitive with iMessage.

Simply put, consumer adoption of AI in personal computing devices may require new hardware developments – perhaps AR glasses? A smarter smartwatch? Pixel Buds with dual drivers? – but Google isn’t ready to reveal its hardware updates, or even announce them, yet. And as we’ve already seen, hardware is still hard to get right, given the disappointing launches of the AI Pin and the Rabbit.

Photo credit: Google

Although much can be achieved with Google’s AI technology on Android devices today, Google’s accessories like the Pixel Watch and the system that powers it, Wear OS, were largely overlooked at I/O, aside from some minor performance improvements. The Pixel Buds earbuds barely got a mention. In Apple’s world, these accessories help lock users into the ecosystem and could one day connect them to an AI-powered Siri. They are crucial parts of the overall strategy, not optional add-ons.

In the meantime, it feels like we’re waiting for the other shoe to drop: namely, Apple’s WWDC. The tech giant’s worldwide developer conference promises to showcase Apple’s own AI agenda, perhaps through a partnership with OpenAI… or even Google. Will it be competitive? Can Apple integrate AI into its operating system as deeply as Gemini is integrated into Android? The world is waiting for Apple’s answer.

With a hardware event in the fall, Google has time to vet Apple’s launches and then try to create its own AI moment that’s as powerful and immediately understandable as Steve Jobs’ introduction of the iPhone: “An iPod, a phone, and an Internet communicator. An iPod, a phone… are you getting it?”

People got it. But when will they get Google’s AI in the same way? Not from this I/O, at least.
