Substrate Labs, an AI infrastructure startup founded by a team from Stripe, Pulumi, and Robust Intelligence, announced an $8M seed round to deliver elegant APIs that enable developers to build more modular AI systems, accelerating a new era of software development. The round was led by Lightspeed Venture Partners, with participation from South Park Commons, Craft Ventures, Guillermo Rauch (Vercel), Immad Akhund (Mercury), Will Gaybrick (Stripe), and others. Led by co-founder and CEO Rob Cheung and co-founder Ben Guo, the company also announced the official launch of its API optimized for running ensembles of models.
“The AI market will inevitably move from training models to using inference that puts models to work—a massive market that today is far too inefficient,” said Nnamdi Iregbulem, partner at Lightspeed Venture Partners. “Rob and Ben are strong engineers with over a decade of experience designing APIs and scaling systems at organizations like Stripe, Substack and Venmo. They’re in a strong position to abstract away the complexity around operationalizing the right AI model for each job, enabling engineers to optimize development with sophisticated tools previously available only at large tech giants.”
Developers integrating AI today are finding the most success when they split a problem into tasks and deploy the best model for each task at hand. However, the vast majority of developers lack access to internal infrastructure that can run dozens of AI models in modular pipelines, meaning they’re stuck either trying to use a single large AI model for every task or stitching together solutions across various API vendors and GPU service providers. Both approaches result in AI applications that are unreliable, slow, and expensive due to low GPU utilization.
With Stripe’s developer-first strategy in their DNA, the Substrate team is developing a powerful API for AI that harnesses curated open-source AI models across modalities such as text, images, audio, and semantic vectors, and optimizes these models to run at scale, with special emphasis on how quickly one model’s outputs can be used as another’s inputs. Substrate’s foundational infrastructure is deeply integrated with SDKs that make it easy to create “graph relations” between models on the Substrate platform, enabling developers to build complex multi-inference AI workflows in just a few lines of code, with fewer data round trips and maximum parallelism. Early customers include Substack and Maven.
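To make the “graph relations” idea concrete, the following is a minimal, purely illustrative Python sketch. It is not Substrate’s SDK: the Node class, the run_graph scheduler, and the stand-in model functions (summarize, embed, caption) are invented for this example, and in practice each step would call a hosted model. The sketch only shows the general pattern the announcement describes, declaring each inference step as a node, wiring one node’s output to another’s input, and letting a runtime execute independent branches in parallel.

# Hypothetical illustration only: Node, run_graph, and the stub "models" below
# are invented for this sketch and are not Substrate's actual SDK. The point is
# the pattern: each inference step is a node, edges carry one node's output to
# the next node's input, and independent branches run concurrently.
from concurrent.futures import ThreadPoolExecutor


class Node:
    """One inference step: a callable plus the nodes whose outputs it consumes."""
    def __init__(self, name, fn, *parents):
        self.name, self.fn, self.parents = name, fn, parents


def run_graph(nodes):
    """Run nodes as their dependencies complete; independent nodes run in parallel."""
    results, pending = {}, list(nodes)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = [n for n in pending if all(p.name in results for p in n.parents)]
            futures = {
                n.name: pool.submit(n.fn, *(results[p.name] for p in n.parents))
                for n in ready
            }
            results.update({name: f.result() for name, f in futures.items()})
            pending = [n for n in pending if n not in ready]
    return results


# Stand-in "models": in practice each would be a hosted model call.
def summarize(text):  return text[:40] + "..."
def embed(summary):   return [float(len(w)) for w in summary.split()]
def caption(summary): return "Illustration of: " + summary

source       = Node("doc", lambda: "Substrate raised an $8M seed round to launch its API.")
summary      = Node("summary", summarize, source)
vector       = Node("vector", embed, summary)          # branch 1, depends only on summary
image_prompt = Node("image_prompt", caption, summary)  # branch 2, runs alongside branch 1

out = run_graph([source, summary, vector, image_prompt])
print(out["image_prompt"])

In this toy graph, vector and image_prompt both depend only on summary, so the scheduler can dispatch them concurrently; that is the kind of parallelism and reduced round-tripping that Substrate’s SDKs are described as providing.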
“New AI models offer enormous potential, but the tools developers have access to today are impeding the deployment of intelligence into software systems,” said Rob Cheung, co-founder and CEO. “We built Substrate to help developers create AI-integrated programs intuitively, in the same way they create any other program: by relating small semantic tasks to each other to automate work. The tooling we’ve created will enable any company—not just tech behemoths—to run dozens of branching ML models in a single request.”
“We chose to partner with Substrate because we knew they would enable us to quickly leverage the capabilities of modern ML models without dedicating significant engineering resources to the task,” said Jairaj Sethi, Substack CTO. “We’ve integrated Substrate into our internal systems to categorize and recommend content, as well as into creator-facing tools, and we’re excited to continue exploring new use cases.”
“Over the past year, we’ve explored various tools to power course recommendations on Maven,” said Shreyans Bhansali, Maven CTO. “Substrate has been a breath of fresh air because their product is so clearly focused on providing the simplest possible developer experience.”