At Omega, we know that when investing in Intelligent Software that leverages AI and machine learning, understanding the limitations of machine intelligence is just as important as appreciating its capabilities.

A decade ago, the state of the art in AI consisted of Expert Systems.  Engineers tried to create machine intelligence by programming knowledge and facts directly into a computer.

Today, we’ve come a long way through Machine Learning, which lets computers work out for themselves what to do based on the data they are trained on.  Machine Learning has resulted in amazing breakthroughs.  Given enough example inputs, machine learning programs can extract statistical patterns from the data and then use those patterns to recognize and classify new inputs.  Essentially, Machine Learning allows computers to program themselves.
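To make this concrete, here is a minimal sketch of statistical pattern extraction: a tiny nearest-centroid classifier that "programs itself" from labeled examples instead of hand-coded rules. The data, labels, and function names are all invented for illustration; real systems use far richer models.

```python
# A toy classifier that learns statistical patterns (class centroids)
# from labeled examples, then classifies new inputs it has never seen.

def train(examples):
    """Compute the mean (centroid) of the feature vectors in each class."""
    centroids = {}
    for features, label in examples:
        sums, count = centroids.setdefault(label, ([0.0] * len(features), 0))
        centroids[label] = ([s + f for s, f in zip(sums, features)], count + 1)
    return {label: [s / count for s in sums]
            for label, (sums, count) in centroids.items()}

def classify(centroids, features):
    """Predict the class whose centroid is closest to the new input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: two clusters; no rules about "cat" or "dog" are coded.
data = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
        ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]
model = train(data)
print(classify(model, (1.1, 1.0)))  # prints "cat"
```

The program was never told what a "cat" is; it simply learned which statistical region of the input space the label occupies.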

“Machine learning allows software to transcend the limitations of its programmer”

                                                           — Omega Venture Partners

With machine learning systems like AlphaZero, computers can be trained from scratch on a particular problem, such as learning to play a board game like chess or Go.  Using a technique called reinforcement learning, the computer experiments with game moves and observes the score it gets.  After the computer plays millions or billions of games (something that computers can do radically faster than people), it learns how to maximize that score, without explicitly being told the strategies of the game it is learning to play.
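The trial-and-error loop described above can be sketched in a few lines. This is a vastly simplified, invented "game" (three possible moves with hidden average scores), not AlphaZero's actual algorithm: the learner simply tries moves, observes scores, and converges on the best move without ever seeing the rules.

```python
# Learning by experimentation: the agent never sees HIDDEN_SCORES,
# only the noisy score returned after each move it tries.

import random

random.seed(0)

HIDDEN_SCORES = {"a": 1.0, "b": 3.0, "c": 2.0}  # unknown to the learner

def play(move):
    """The environment returns a noisy score; the rules stay hidden."""
    return HIDDEN_SCORES[move] + random.uniform(-0.5, 0.5)

# The learner tries each move many times and tracks the total score.
totals = {move: 0.0 for move in HIDDEN_SCORES}
for move in totals:
    for _ in range(10_000):
        totals[move] += play(move)

best = max(totals, key=totals.get)
print(best)  # prints "b": discovered purely from observed scores
```

Real systems like AlphaZero add deep neural networks and tree search on top of this basic idea, but the principle is the same: maximize a score through massive self-directed experimentation.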

Machine intelligence is a remarkable breakthrough – one that revolutionizes how machines tackle previously intractable problems and devise ingenious solutions to new ones.

What most people miss about Machine Intelligence is that it is best suited to specific applications and use cases.  It is very good at solving narrowly defined problems, but not good at generalizing across problem domains in the way people tend to do.


How Computers Learn vs. How People Learn

Dominant Learning Style
  Computers: Supervised / Statistical
  People: Unsupervised / Conceptual

What is it good for?
  Computers: Exceptionally good at processing large and complex amounts of data, extracting incredibly subtle patterns, and making accurate predictions
  People: Superior ability to generalize from specific examples and form conceptual abstractions

Strengths
  Computers: Very good at learning to do specific things extremely well, with a speed, scale, and sophistication that surpass the best humans
  People: Resilience, creativity, and common sense; the ability to construct broad models or theories about how the world works from limited examples

Weaknesses
  Computers: Knowledge is narrower and more limited, and can be fooled by “adversarial examples” (e.g., a mixed-up jumble of pixels may be labeled a cat if it happens to fit the right statistical pattern)
  People: The same knack for generalizing from limited experience that aids evolutionary survival and many life skills can, when negatively expressed, become bias, dogmatism, and stereotyping

How Learning Happens
  Computers: Passive and self-contained learning: processing of data usually fed to the computer by its operator
  People: Active and social learning: proactive curiosity and experimentation, both individually and, often, by learning from other people

What it Means to Learn
  Computers: Build statistical models that map a set of inputs to a set of outputs in a particular problem domain (the sets of inputs and outputs can be massive and far more complex than any human, or even all of humanity collectively, could process)
  People: Form generally reliable, though not always accurate, mental models or theories of how the world works, what people mean, and what others are trying to do

Amount of Data
  Computers: Need enormous numbers of examples to learn (e.g., training on hundreds of millions of images or games)
  People: Can infer categories from a small number of examples (e.g., a few storybook pictures can teach kids about cats, dogs, tigers, monkeys, and unicorns)

Type of Data
  Computers: Need curated datasets with good examples, clear categories, and / or defined rules (e.g., a label for each image they “see” or a score for each move in a game)
  People: Learn from spontaneous, self-motivated, real-world experience, with no curation required


*Note: The chart above describes conventional supervised machine learning techniques.  It does not cover research into social learning, adversarial learning, collective learning, and common-sense reasoning – all of which are exciting frontiers of research in AI.
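The “adversarial example” weakness noted in the chart can be illustrated with a toy model (this is not how real vision systems work, and every detail here is invented): a classifier that only checks summary statistics will happily label a meaningless jumble of pixels as a cat, because the jumble fits the right statistical pattern.

```python
# A deliberately naive "image" classifier: it looks only at average pixel
# brightness, so any input with the right average gets the same label.

import random

random.seed(1)

def classify(pixels, cat_mean=0.3, dog_mean=0.7):
    """Label an image by whichever class's mean brightness is closer."""
    avg = sum(pixels) / len(pixels)
    return "cat" if abs(avg - cat_mean) < abs(avg - dog_mean) else "dog"

# A stand-in for a real cat image: mostly dark pixels.
cat_image = [0.3] * 100

# An adversarial jumble: pure noise, rescaled to have the same average.
noise = [random.random() for _ in range(100)]
scale = 0.3 / (sum(noise) / len(noise))
jumble = [p * scale for p in noise]

print(classify(cat_image), classify(jumble))  # both come out "cat"
```

A human glancing at the jumble would see static; the statistical model sees a cat. Real adversarial attacks exploit the same gap, just against far more sophisticated pattern-matchers.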

Some researchers are trying to build common-sense models into AIs, an approach pioneered by the late, great Marvin Minsky at the Massachusetts Institute of Technology.  Hybrid systems that combine explicit models with machine learning, as well as “active AIs” that learn through rewards and punishments (e.g., points and penalties), are an exciting area of research in AI.