Lessons From 18 Years Investing In AI – A Fireside Chat at Stanford

At Omega, we’ve had a front-row seat to the explosive growth and momentum around AI for far longer than most. Looking back, there are several key insights our team has gleaned when it comes to successfully investing in this rapidly evolving space. We shared these insights during a fireside chat at the Graduate School of Business at Stanford.

Answers have been edited for clarity.

Q: Give us some history about the fund and your experiences at a high level.

When we launched our Artificial Intelligence (AI) investment strategy years ago, there was still a lot of skepticism from sophisticated investors about the real-world applicability of AI. Until recently, some equated AI with the speculative frenzy around cryptocurrencies. We educated investors about how AI represented a fundamental shift in how software is built and used, akin to the transformative shift to the cloud.

Some of the skepticism was driven by a mistaken, science fiction-based image of what AI would become. Through most of the 1980s, the most salient examples of AI in the cultural zeitgeist were figures like Arnold Schwarzenegger as the Terminator. These figures created a pop-cultural anchor for the general population that led to unrealistic expectations of what AI meant.

The more informed skepticism we heard related to the cost of development and the pace of commercial uptake. AI lived in research labs like MIT's rather than in commercial applications. We stayed close to the research through our relationships with academics and researchers, but we always focused on investing in companies that had already solved real business problems.

Today, everyone wants to invest in AI. But we can still claim to have been the first early-growth fund thematically focused specifically on AI.

Q: What has changed about AI in recent years to catalyze the recent boom?

It used to be hard to deploy AI for a practical business application – say, a chatbot for customer service. You needed an executive sponsor to pay the bills, a team to manage the infrastructure, and a separate data science team to train the model. This complex AI/ML deployment model was not feasible for most organizations, and the end model was often so customized to a single organization's use case that it would not easily scale as a standalone product.

But increasingly, practical business applications of AI in specific domains started to proliferate, including early movers such as DataRobot, which helps users train models easily. This is representative of a growing corpus of research and development that made AI/ML solutions more deployable. Foresight about this coming AI tsunami catalyzed our conviction to focus thematically on transformative business applications that are fueled by data and powered by AI.

A critical change in the last two years has been the explosion in the capability of pre-trained foundation models. Previously, companies needed to train models themselves, which required both time and expertise. Today, AI is accessible to every developer and team through off-the-shelf models like GPT-4. You can think of foundation models as engines that power other AI applications, and access to these models across modalities like text, image, voice, and other domains has created an explosion of pragmatic use cases.

Q: What’s the key to success for companies building AI-powered applications?

As AI has gained mainstream attention, we’ve also witnessed a surge of companies branding themselves as “AI-powered” or “AI-first”, often with little substance behind the claims. The rise of pre-trained models means that many companies now share the same underlying technology.

The key to success in this category is building robust proprietary datasets, domain expertise, and managed AI workflows tailored to specific industry verticals rather than generic one-size-fits-all solutions. We find the sweet spot is in companies that meld robust AI capabilities with traditional software traits like touchless selling, product-led growth, and delivering clear ROI to customers through process efficiencies or revenue expansion. It has also become very important to assess the true AI capabilities of founding teams and to ensure the leadership of our companies can think critically about how AI can add value to their organizations.

Ultimately, product-market fit and customer traction are the best and only proof a company has. Claims about capabilities without robust customer adoption and revenue are an immediate red flag. Our goal has always been to back businesses with the potential to dominate their respective categories by delivering superior solutions to complex, real-world problems – AI just enhances our companies’ ability to solve those problems. One prime example is our investment in Otter.ai, whose knowledge graphing and querying capabilities for meetings far outpaced simple AI-based transcription.

To sum up, successful AI investing in 2024 requires in-depth expertise and technical prowess to evaluate a company’s core data assets, unique IP, and differentiated applications.

Q: We’ve heard a lot about NVIDIA’s growth in recent months. Can you talk us through the infrastructure and hardware layer, and trends you see there?

NVIDIA is the clear winner in AI hardware manufacturing, and it is reflected in their valuation. CUDA has become the de facto software platform for advanced AI/ML. Challenger startups in the chip space are capital-intensive and, at this point, somewhat speculative, but they have the potential to radically change the market.

One layer above the hardware are the cloud providers that have emerged to serve the intense computing needs of AI workloads. Startups like Lambda Labs challenge Amazon and Microsoft, and are in turn challenged by a long list of newer startups providing AI compute. While benefiting from the growing demand for AI acceleration, these firms still face massive capital expenditure requirements and are at the mercy of NVIDIA until new chips can launch.

This sector has murky long-term prospects. Amidst a constrained supply of GPUs and a need for flexible compute, there has been a short-term surge in demand and in valuations. We think the supply-to-demand ratio will renormalize over the medium term, and we will see the commoditization of delivering AI compute. These providers are still core to future AI workloads, but paying hardware costs and enduring the depreciation of assets are long-term challenges for these players. Hyperscalers (Amazon, Microsoft, Google) will continue to represent the lion’s share of compute, but new players focused on a specialized developer experience are also candidates for investment.

Q: What about foundation model developers? It seems like every week a new model is released.

Foundation model development represents another capital-intensive arena, like hardware and infrastructure. Firms like Anthropic, OpenAI, and Cohere, along with the tech giants, have made staggering investments in specialized hardware, compute resources, and highly skilled research teams.

However, foundation model quality improvements may be trending towards diminishing marginal returns. We already see that OpenAI’s GPT-4, Anthropic’s Claude 3, and Google’s Gemini Ultra deliver largely similar performance, with only very specific benchmarks showing marginal winners among them. Investors in these foundation models might not see 1000x returns, but this is a very positive trend for the broader AI ecosystem: the more foundation model players there are, the cheaper and more accessible AI will be for builders and founders.

The most interesting development areas and investments might actually be in fine-tuning (taking a pre-trained model and customizing it) and model delivery (speeding up how quickly models return an answer). In the 1700s, James Watt was able to improve the efficiency of steam engines 5x through his research, and there are startups attempting to do the same with AI models. The best models today have enormous serving requirements, even after training. If highly customized, smaller, possibly open-source models can match the performance of much larger models, then the moat for players like OpenAI could shrink.
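To make the fine-tuning economics concrete, here is a back-of-the-envelope sketch of why customizing a pre-trained model is so much cheaper than retraining it. The numbers and the low-rank-adapter setup (LoRA-style) are illustrative assumptions, not figures from any specific model discussed above.

```python
# Back-of-the-envelope: parameters updated by full fine-tuning vs. a
# low-rank adapter (LoRA-style) on a single weight matrix.
# All dimensions below are illustrative assumptions, not real model specs.

def full_finetune_params(d: int, k: int) -> int:
    """Full fine-tuning updates every entry of a d x k weight matrix."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """A rank-r adapter trains only two small matrices: d x r and r x k."""
    return r * (d + k)

d, k, r = 4096, 4096, 8              # hypothetical layer size, adapter rank 8
full = full_finetune_params(d, k)    # 16,777,216 trainable parameters
lora = lora_params(d, k, r)          # 65,536 trainable parameters
print(f"adapter trains {100 * lora / full:.2f}% of the parameters")
# prints: adapter trains 0.39% of the parameters
```

Under these assumptions the adapter touches well under one percent of the layer's weights, which is the kind of cost reduction that lets small teams customize off-the-shelf models instead of training from scratch.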

Q: Any final thoughts about AI for our future founders and investors in the room?

Building the right technical and business team has been critical for high-quality diligence. Investor teams need to add technical entrepreneurs to complement their financial analysis skills. For instance, we at Omega benefit from the expertise of partners who are serial entrepreneurs and academics at MIT. Having diverse skill sets and perspectives allows you to spot differentiated opportunities ahead of time.

As the AI landscape continues evolving, having a rigorous investment strategy rooted in fundamentals while maintaining flexibility will be essential for long-term success. The AI era is just beginning.