Exclusive: RunPod secures $20M from Dell and Intel as demand soars for AI clouds


As the race to harness AI accelerates in the enterprise, a key challenge remains: developing and deploying AI applications into production quickly and at scale. RunPod, a startup offering a globally distributed GPU cloud platform for developing and deploying AI, today announced it has raised $20 million in seed funding from Dell Technologies Capital and Intel Capital to tackle this problem head-on.

The rise of purpose-built AI cloud platforms

RunPod’s traction points to a larger trend in the industry: the rise of specialized cloud services built specifically for AI. As AI becomes more central to business operations, the limitations of general-purpose cloud infrastructure are becoming apparent. Issues like latency, inflexibility in scaling resources, and a lack of AI-specific tools are hindering the development and deployment of AI applications.

In response, a new breed of AI cloud platforms has emerged, offering optimized compute resources, enhanced flexibility and scalability, and developer-centric environments. These specialized platforms are designed to handle the unique demands of AI workloads, from the high computational requirements of model training to the need for rapid scaling and efficient resource allocation.

RunPod’s $20 million seed round comes amidst a flurry of funding activity in the specialized AI cloud space. As the demand for GPU-accelerated infrastructure soars, several other startups have also attracted significant investment in recent months.

CoreWeave, a New Jersey-based provider of GPU-accelerated cloud infrastructure, recently secured $1.1 billion in new funding at a reported valuation of $19 billion. The company, which initially focused on crypto and blockchain applications, has been investing heavily in its AI and graphics rendering capabilities. In the past year, CoreWeave has expanded its data center presence, quadrupled its headcount, and secured substantial debt financing to fuel its growth.

Similarly, San Francisco-based Together Computer Inc. is reportedly seeking to raise over $100 million at a valuation exceeding $1 billion, double its previous valuation from November. Together’s cloud platform offers access to high-end Nvidia GPUs and includes software features designed to streamline the training of large language models. The company’s latest round is expected to be led by Salesforce’s venture arm and include participation from Coatue.

Another competitor, Lambda Inc., just announced a $320 million round at a $1.5 billion valuation for its AI-optimized cloud platform.

These substantial funding rounds highlight the growing demand for specialized AI infrastructure and the potential market opportunity for companies like RunPod. However, they also illustrate the competitive pressures RunPod will face as it seeks to scale its business and differentiate itself in an increasingly crowded market.

Driving developer focus

RunPod says it grew its user base to over 100,000 developers by focusing relentlessly on developer experience and iteration speed as the key to unlocking business value from AI.

“If your developers are happy and they feel like they’re using a tool that meets their needs, that’s what matters most,” said Zhen Lu, RunPod’s co-founder and CEO. “A lot of companies have lost sight of this in the current frothiness. They think they can just rack and stack GPUs and developers will come. But the real value is in enabling rapid iteration.”


Image Credit: RunPod user experience

This focus on developer experience has fueled rapid bottoms-up adoption. What started as a free resource for indie hackers unable to afford GPU compute quickly attracted prosumers, then funded startups and small- and medium-sized businesses (SMBs). Now RunPod is making inroads into the enterprise, offering Nvidia GPUs fractionally through both compute instances and serverless functions.

“In the early days it was the hacker and developer communities,” recounts Lu. “We launched two years ago with GPUs we were hosting in our basement, posted on Reddit, offered for free, users were people who couldn’t afford anything and were willing to give it a shot. Then we started getting prosumers using it for side gigs, then their main gigs. About a year ago is when we started penetrating SMBs and funded startups. And now we’re starting to get more of that enterprise motion.”

Image Credit: RunPod user experience

A key pain point RunPod addresses is the need for businesses to deploy custom models they can own, control and iterate on. Too often, enterprise developers resort to “canned” models available via API that don’t quite fit their use case.

“There are a lot of vendors out there making it easy to deploy something you don’t want. But they make it hard to deploy what you do want,” says Lu. “Our customers are telling us they need more control and customization.”

RunPod shared two compelling case studies that highlight the platform’s developer-centric approach and ease of use. LOVO AI, a voice generation startup, praised RunPod’s intuitive network storage solution and superior developer experience, noting that the platform consistently shipped features that addressed their needs.

Similarly, Coframe, a startup building self-optimizing digital interfaces, emphasized the simplicity and flexibility of RunPod’s serverless solution, which allowed them to deploy their custom diffusion model on serverless GPUs in less than a week without hiring dedicated infrastructure engineers.

Overcoming the limitations of Kubernetes

Interestingly, to enable customization at scale, RunPod has eschewed Kubernetes in favor of building its own orchestration layer from the ground up. The startup found in its early pre-product prototyping that Kubernetes, built for more traditional workloads, was far too slow.

“A lot of people are like, I just want to do this, I don’t want to have to learn all the ins and outs of Kubernetes,” said Lu. “Kubernetes has a good experience for experts but a pretty awful experience if you just need to get value quickly. We wanted to achieve the speed and user experience we knew our customers needed.”

RunPod’s decision to build its own orchestration layer is rooted in the limitations of Kubernetes for AI workloads. While Kubernetes has become the de facto standard for container orchestration, it was designed for traditional applications, not the unique demands of AI.

“AI/ML workloads are qualitatively different from traditional applications,” says Lu. “They require specialized resources, faster scheduling, and more dynamic scaling. We found that Kubernetes just wasn’t fast enough for what our customers needed.”

This is a pain point felt acutely in the enterprise, where the need to deploy and iterate on custom AI models is paramount. Kubernetes’ complexity and overhead can slow down development cycles and hinder experimentation, becoming a bottleneck to AI adoption.

“A lot of the managed AI platforms out there are great for getting started, but they can be limiting when you need to deploy your own models or pipelines,” says Lu. “That’s where RunPod comes in. We give enterprises the infrastructure primitives they need to build and deploy AI their way, without sacrificing speed or ease of use.”

As more enterprises look to operationalize AI and differentiate through custom models, the demand for specialized AI infrastructure is only set to grow.

Scaling up for future growth

With the new funding, RunPod plans to scale up hiring to meet enterprise demand and add features such as support for CPUs in addition to GPUs. Both headcount and revenue have grown 10x in the past year, according to Lu.

With strong initial traction and backing, RunPod’s future looks bright. But in an increasingly crowded market, maintaining its developer-centric edge will be key. For now, the focus remains on helping customers move beyond the limitations of one-size-fits-all AI infrastructure.

“Developers don’t want canned solutions for this, they want something where they can onboard and then be given the tools to really improve things and iterate to get to the result they want,” says Lu. “That’s what we’re building towards.”
