Over the past few years, net-new markets, opportunities, and interest in enterprise AI and ML have emerged across all manner of companies (financial services, healthcare, autonomous vehicles, consumer, and general developer-focused firms), both on behalf of their customers and for themselves. Enterprise customers have traditionally faced a tension between building solutions in-house and buying off-the-shelf platforms, but we are seeing rising interest in purchasing best-of-breed solutions, in part because of the complexity of the machine layer of artificial intelligence, deep learning, and machine learning.
Enter MLOps, also known as “Machine Learning Ops”. Much as the shift from on-prem to SaaS gave rise to lucrative solutions sold to IT departments, from PagerDuty (NYSE: PD) to Datadog (NASDAQ: DDOG), the rise of artificial intelligence and machine learning has created a pressing need for advanced infrastructure that simplifies AI adoption for the end user.
These MLOps solutions, better characterized as platforms, aim to lower the barrier to entry for developers by decreasing initial setup friction and offering a user-friendly interface from Day 1. In our conversations with an early YouTube infrastructure engineering lead and the former Head of Infrastructure at Docker, we learned that the zero-cost entry MLOps provides allows even the most experienced developers to tap into an emerging deep learning field that was previously siloed within academia and PhD programs, without having to undertake a wholesale curriculum and coding-language shift. Furthermore, these MLOps platforms have the potential to become the default knowledge repository and system of record for deep learning and machine learning experiments. In the long run, as developers move across companies but want to retain the knowledge repository of their previous experiments, there is an opportunity to unlock inter- and intra-company collaboration and network effects.
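To make the “system of record” idea concrete, here is a minimal sketch of what logging experiments to a shared, append-only repository might look like. The `ExperimentStore` class and its fields are purely hypothetical illustrations for this post, not the API of any particular platform.

```python
import json
import time
from pathlib import Path

class ExperimentStore:
    """A toy append-only experiment log. A real MLOps platform would
    persist records centrally so they outlive any one laptop or employee."""

    def __init__(self, path="experiments.jsonl"):
        self.path = Path(path)

    def log(self, name, params, metrics):
        # Each run becomes one immutable JSON line: the "record" in
        # "system of record".
        record = {
            "name": name,
            "params": params,      # e.g. learning rate, batch size
            "metrics": metrics,    # e.g. validation accuracy
            "timestamp": time.time(),
        }
        with self.path.open("a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    def history(self, name):
        # Replay every logged run of a given experiment, oldest first.
        with self.path.open() as f:
            return [r for r in map(json.loads, f) if r["name"] == name]

store = ExperimentStore()
store.log("resnet-baseline", {"lr": 0.01}, {"val_acc": 0.91})
store.log("resnet-baseline", {"lr": 0.001}, {"val_acc": 0.94})
runs = store.history("resnet-baseline")
print(len(runs))  # → 2
```

Because the log is plain data rather than tribal knowledge, a new teammate (or a developer joining from another company) can query what was tried and what worked, which is where the collaboration and network effects come from.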
At the application layer, OMEGA’s investment DataRobot is a good example of the successful implementation of MLOps in the core workflows of engineers, data scientists, business analysts, and even the C-suite. With customers ranging from Kroger to Humana to US Bank to PNC to the Boston Red Sox, DataRobot leverages MLOps to simplify the general complexity of data infrastructure for internal teams.
At the platform and infrastructure layer, aggregating broad datasets and building in hyper-parameter search from the get-go can unlock new use cases for users across the board, offering both flexibility across GPU/TPU/CPU hardware and support for the new types of machine learning, AI, and deep learning models being developed.
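For readers less familiar with the term, hyper-parameter search is simply the automated trial of many training configurations to find one that performs best. The sketch below shows a bare-bones random search in plain Python; the quadratic `objective` is a stand-in for a real train-and-validate cycle, and the parameter ranges are invented for the example.

```python
import random

def objective(lr, batch_size):
    # Stand-in for a real training run: pretend validation loss is
    # minimized near lr=0.01 and batch_size=64.
    return (lr - 0.01) ** 2 + ((batch_size - 64) / 64) ** 2

def random_search(trials=50, seed=0):
    """Sample random configurations and keep the best one seen."""
    rng = random.Random(seed)
    best_loss, best_params = float("inf"), None
    for _ in range(trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),           # log-uniform learning rate
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        loss = objective(**params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_loss, best_params

best_loss, best_params = random_search()
print(best_params)
```

An MLOps platform's value is in running many such trials in parallel across whatever GPU/TPU/CPU hardware is available and recording every trial automatically, rather than leaving this loop on a single developer's machine.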
At OMEGA we are pleased to see the creation of flexible end-to-end and platform-neutral solutions that democratize AI and deep learning. Whether you’re building something at the application layer or at the platform layer in MLOps, we would love to chat! Find us at twitter.com/omegapartners or via connect[at]omegavp.com.