Dropbox and Figma CEOs Back Lamini, a Startup Building a Generative AI Platform for Businesses | TechCrunch

Lamini, a Palo Alto-based startup building a platform to help companies use generative AI technology, has raised $25 million from investors including Stanford computer science professor Andrew Ng.

Lamini, co-founded a few years ago by Sharon Zhou and Greg Diamos, has an interesting selling point.

Many generative AI platforms are too general-purpose, Zhou and Diamos argue, and lack solutions and infrastructure tailored to the needs of enterprises. Lamini, by contrast, was built from the ground up for enterprises, with a focus on delivering highly accurate, scalable generative AI.

“The top priority of almost every CEO, CIO and CTO is to leverage generative AI in their business with maximum ROI,” Zhou, CEO of Lamini, told TechCrunch. “But while it’s easy for a single developer to get a working demo on a laptop, the path to production is riddled with bugs everywhere.”

Zhou's pitch speaks to the frustration many companies have expressed over the hurdles to meaningfully adopting generative AI across their business functions.

According to a March survey from MIT Insights, only 9% of companies have adopted generative AI at scale, even though 75% have experimented with it. The biggest hurdles range from a lack of IT infrastructure and capabilities to poor governance structures, insufficient skills and high implementation costs. Security is a major factor, too – in a recent survey by Insight Enterprises, 38% of companies said security concerns affect their ability to deploy generative AI.

So what is Lamini’s answer?

Zhou says that “every part” of Lamini’s technology stack has been optimized for enterprise-scale generative AI workloads, from the hardware to the software, including the engines used to support model orchestration, fine-tuning, running and training. “Optimized” is admittedly a vague word, but Lamini is pioneering one step Zhou calls “memory tuning”: a technique that trains a model on data in such a way that it recalls parts of that data exactly.

Memory tuning can potentially reduce hallucinations, Zhou claims, meaning instances in which a model makes up facts in response to a query.

“Memory tuning is a training paradigm – as efficient as fine-tuning, but going beyond it – to train a model on proprietary data that contains key facts and figures so that the model has high precision,” Nina Wei, an AI designer at Lamini, told me via email, “and can remember and recall the exact match of important information rather than generalizing or hallucinating.”
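To make that description concrete: Lamini hasn’t published how memory tuning actually works, but the general behavior Wei describes – training a model on a fixed set of facts until it reproduces them verbatim rather than paraphrasing – can be sketched with a deliberately overfit toy model. The snippet below is a purely hypothetical illustration, not Lamini’s implementation; the facts, model and training setup are all invented.

```python
# Hypothetical illustration only -- not Lamini's implementation. A tiny
# character-level PyTorch language model is overfit on two invented "facts"
# until greedy decoding reproduces them verbatim, i.e. it recalls exact
# matches instead of generalizing.
import torch
import torch.nn as nn

facts = [
    "Q: headquarters? A: Palo Alto.",
    "Q: total raised? A: $25 million.",
]

# Character-level vocabulary built from the facts themselves.
chars = sorted(set("".join(facts)))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

def encode(s: str) -> torch.Tensor:
    return torch.tensor([stoi[c] for c in s], dtype=torch.long)

class TinyLM(nn.Module):
    """Minimal GRU model that predicts the next character."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Deliberately train far past the point of generalization: the goal of this
# toy is exact recall, so the per-fact loss is driven toward zero.
for _ in range(400):
    for fact in facts:
        ids = encode(fact)
        logits = model(ids[:-1].unsqueeze(0))   # predict each next character
        loss = loss_fn(logits.squeeze(0), ids[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

def complete(prompt: str, max_new: int = 20) -> str:
    """Greedy decoding; after overfitting, it replays the memorized fact."""
    ids = encode(prompt)
    for _ in range(max_new):
        next_id = model(ids.unsqueeze(0))[0, -1].argmax().view(1)
        ids = torch.cat([ids, next_id])
    return "".join(itos[i] for i in ids.tolist())

# Should print the stored answer verbatim (and keep generating past its end).
print(complete("Q: headquarters? A:"))
```

Whether Lamini’s technique resembles anything like this, and how it avoids the obvious downside of overfitting (losing general ability), is exactly what the company hasn’t detailed.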

I’m not sure I buy this. “Memory tuning” seems to be more of a marketing term than an academic one; there’s no research on it – at least none that I’ve been able to find. I’ll leave it to Lamini to provide evidence that its “memory tuning” beats the other hallucination-reducing techniques that are being, or have been, tried.

Luckily for Lamini, memory tuning isn’t its only differentiator.

According to Zhou, the platform can operate in highly secure environments, including air-gapped ones. Lamini lets organizations run, fine-tune and train models across a range of configurations, from on-premises data centers to public and private clouds. And it scales workloads “elastically,” reaching over 1,000 GPUs when the application or use case requires it, Zhou says.

“Incentives in the market today aren’t aligned with closed-source models,” Zhou said. “We aim to put control back into the hands of more people, not just a few, starting with the companies that care most about control and have the most to lose from their proprietary data ending up in someone else’s hands.”

Lamini’s co-founders are no strangers to the AI space. They also each crossed paths with Ng separately, which no doubt explains his investment.

Zhou was previously a lecturer at Stanford University, where she led a group focused on generative AI. Before earning her computer science PhD under Ng, she was a machine learning product manager at Google Cloud.

For his part, Diamos co-founded MLCommons, the engineering consortium dedicated to creating standard benchmarks for AI models and hardware, as well as MLCommons’ benchmarking suite, MLPerf. He also led AI research at Baidu, where he worked with Ng while Ng was the company’s chief scientist. Diamos was also a software architect on Nvidia’s CUDA team.

The co-founders’ industry contacts appear to have given Lamini a head start in fundraising. In addition to Ng, Figma CEO Dylan Field, Dropbox CEO Drew Houston, OpenAI co-founder Andrej Karpathy and, oddly enough, Bernard Arnault, the CEO of luxury goods giant LVMH, have all invested in Lamini.

AMD Ventures is also an investor (somewhat ironic given Diamos’ Nvidia roots), as are First Round Capital and Amplify Partners. AMD got involved early on and supplied Lamini with data center hardware. Today, Lamini, bucking the industry trend, runs many of its models on AMD Instinct GPUs.

Lamini claims that its models’ training and running performance is on par with that of comparable Nvidia GPUs, depending on the workload. Since we’re not in a position to test that claim, we’ll leave it to third parties.

To date, Lamini has raised $25 million in seed and Series A rounds (Amplify led the Series A). Zhou says the money will go toward tripling the company’s 10-person team, expanding its computing infrastructure and starting to develop “deeper technical optimizations.”

There are a number of enterprise-focused generative AI vendors that could compete with aspects of Lamini’s platform, including tech giants such as Google, AWS and Microsoft (via its OpenAI partnership). Google, AWS and OpenAI in particular have been aggressively courting enterprises in recent months, introducing features like streamlined fine-tuning, fine-tuning on private data and more.

I asked Zhou about Lamini’s customers, revenue, and overall go-to-market dynamics. She didn’t want to reveal much at this somewhat early stage, but did say that AMD (through the AMD Ventures collaboration), AngelList and NordicTrack are among the first (paying) users of Lamini, along with several unnamed government agencies.

“We’re growing quickly,” she added. “The biggest challenge is serving customers. We’ve only handled inbound demand because we’ve been swamped. Given the level of interest in generative AI, we’re not representative of the broader tech slowdown – unlike our competitors in the hyped AI world, our gross margins and revenue look more like those of a regular tech company.”

“We believe there is a tremendous opportunity for generative AI in the enterprise,” said Mike Dauber, a general partner at Amplify. “While there are a number of AI infrastructure companies, Lamini is the first I’ve seen that takes the problems of the enterprise seriously and builds a solution that helps enterprises unlock the enormous value of their private data while meeting even the most stringent compliance and security requirements.”
