No one wants to miss out on the AI revolution. Afraid of losing their edge or falling behind competitors, companies big and small are racing to deploy artificial intelligence and Intelligent Software applications. But this is not easy.
Omega Venture Partners recently conducted a survey of enterprise decision makers at Fortune 500 companies who are spearheading efforts to evaluate and deploy software applications that leverage AI (Intelligent Software). We were not entirely surprised to learn that at many large corporations AI has quickly become a top-three CEO topic.
Several of these executives noted that their companies are particularly interested in Intelligent Software applications that enhance business processes or customer experience with AI. Simultaneously, these business leaders identified a number of challenges:
1) Good Data = Good Outcomes // Bad Data = Bad Outcomes
Data is the fuel that drives Intelligent Software capabilities. Good data used to train and fine-tune AI applications results in good outcomes, while bad data yields ‘garbage in, garbage out’ results.
Older companies can be at a disadvantage when it comes to data. It is not unusual for these companies to hold large volumes of poorly structured data that are hard to analyze, often an artifact of legacy IT systems and/or data silos left behind by prior mergers and acquisitions.
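Before any training begins, a basic audit can surface the structural problems described above. The sketch below is purely illustrative and not drawn from the survey; the record fields and the 5% threshold are hypothetical assumptions:

```python
# Illustrative sketch: a minimal data-quality audit before model training.
# Field names and the missing-value threshold are hypothetical.

def audit_records(records, required_fields, max_missing_ratio=0.05):
    """Flag fields whose missing-value rate exceeds a threshold."""
    flagged = {}
    total = len(records)
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / total if total else 1.0
        if ratio > max_missing_ratio:
            flagged[field] = round(ratio, 2)
    return flagged

# Example: customer records merged from two legacy systems, with gaps.
records = [
    {"customer_id": "A1", "region": "EMEA", "revenue": 1200},
    {"customer_id": "A2", "region": "", "revenue": 950},
    {"customer_id": "A3", "region": None, "revenue": None},
]
print(audit_records(records, ["customer_id", "region", "revenue"]))
# → {'region': 0.67, 'revenue': 0.33}
```

Checks like this one give a business unit a concrete first step: quantify where the merged legacy data is incomplete before feeding it to any model.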
For business units within larger enterprises, the challenge can be finding an easy-to-use on-ramp for these applications. These teams must work out how to evaluate both their historical data sets and the new data being created daily and hourly at an ever-increasing pace.
2) Algorithmic Bias
Preventing skewed AI algorithms is another top concern. A financial services executive acknowledged that bias in AI is an issue the industry is currently struggling with.
People often overlook the fact that algorithms are designed by humans who choose what data to use and how to use it. They tend to view an algorithm like a math equation: an objective process that always spits out the correct answer. This perception seems especially strong in consumer finance, where investors are constantly told to look at data objectively and not let emotions drive their decision-making. Algorithms don't have emotions, so they must always be objective and rational, or so the thinking goes.
That perception, however, is misguided. Coders can consciously or unconsciously embed biases into algorithms.
Companies should ensure their development processes guard against bias, and that diverse, inclusive teams build and review their AI. Going a step further, companies may want to require disclosure of the assumptions used by the human coders who develop algorithms, as well as the data sets the algorithms do and do not use, so that users are more aware of potential biases. Alerting users to the existence of competing algorithms would also raise awareness that algorithms differ based on human choices.
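One common form such a bias check can take is comparing an algorithm's approval rates across groups (a "demographic parity"-style audit). The sketch below is a minimal, hypothetical illustration; the group names and decision data are invented, not drawn from the survey:

```python
# Illustrative sketch: compare per-group approval rates of an algorithm's
# decisions. Groups and decisions below are invented for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += 1 if ok else 0
    return {g: approved[g] / total[g] for g in total}

# Hypothetical loan decisions: (applicant group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(approval_rates(decisions))
# → {'group_a': 0.75, 'group_b': 0.25}
```

A gap this large does not prove the algorithm is unfair on its own, but it is exactly the kind of signal that should trigger a review of the training data and the features the coders chose.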
3) Don’t Just Replicate!
Companies can also go wrong when they use AI merely to replicate what humans already do. This risks reproducing the same biases, issues, and underperformance, while never realizing the potential of AI to unlock new revenue streams and use cases.
An executive at a large social media company commented that his company is working to identify and recruit more resilient content moderators.
Another company is deliberately avoiding training its machine learning software to predict whom human recruiters would have hired; instead, it trains on actual hiring outcomes.
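The distinction comes down to which label the model is trained on. The following hypothetical sketch (field names invented for illustration) makes the two choices concrete:

```python
# Illustrative sketch: the training label determines what the model learns to
# replicate. Candidate records and field names below are hypothetical.

candidates = [
    {"name": "A", "recruiter_hired": True,  "performed_well": False},
    {"name": "B", "recruiter_hired": False, "performed_well": True},
    {"name": "C", "recruiter_hired": True,  "performed_well": True},
]

# Replicating humans: the model learns past recruiter decisions, biases included.
replication_labels = [c["recruiter_hired"] for c in candidates]

# Learning outcomes: the model learns who actually performed well on the job.
outcome_labels = [c["performed_well"] for c in candidates]

print(replication_labels)  # → [True, False, True]
print(outcome_labels)      # → [False, True, True]
```

Training on `recruiter_hired` would teach the model to pass over candidate B, whom the historical recruiters missed; training on `performed_well` lets the model correct for that.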
Successful AI projects require fresh approaches, working across silos, and establishing the right roles. As cloud environments democratize the technology, it becomes that much easier for companies to rapidly iterate, experiment, and innovate.