Three myths to unlearn before applying AI

The initial mania over artificial intelligence has stabilized, and AI models are now table stakes for software startups seeking investment to scale. While it is clear that robots will not teach themselves to take over the world any time soon, many of the misconceptions about AI spread by sensationalist headlines and overly enthusiastic marketing still persist, sowing confusion and creating distractions for startup entrepreneurs and their customers as they look to apply AI successfully. To find the right opportunities to apply AI and improve your chances of success, you must unlearn the following three myths about AI.


Myth #1: AI can do things humans don’t know how to do themselves

Computers can do a lot of things we humans can't do, like performing the same tasks over and over again without making a mistake, calculating complex equations very quickly, and beating the world's leading Go player. As we marvel at AI's superhuman abilities, it's natural to extrapolate AI's capabilities from things we find difficult to things we don't really know how to do at all, like detecting liars, predicting the next recession, or predicting the next World Cup winner. So far, AI has done no better on any of those problems than random guessing.


Without AI, you have to spell out every single step of a task if you want a computer to take it over: just think about how many micro-adjustments a sales associate makes when you ask them to recommend a hat for your snowboarding trip, weighing your personal style, gender, and the context to point you to the knit caps section and not the baseball caps. To teach a computer to do the same thing, you would have to precisely describe every single adjustment for every possible scenario. With enough time it might be possible to map out instructions for most scenarios; that's how some stores customize their e-commerce search results for the biggest-ticket items, to increase the likelihood of a sale. For many problems, though, that amount of work is not economical. AI enables us to show the computer what we want using examples, which is more comprehensive and succinct than spelling out all of the logic; for example, Constructor.io uses AI to observe which search results online shoppers actually click on and dynamically re-rank the results to improve sales. In other words, AI makes it cheaper to give the computer more complex instructions. The key word is still "instruction," however: you have to understand how to solve the problem yourself.
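To make the contrast concrete, here is a minimal, hypothetical Python sketch (not Constructor.io's actual system or data): the rule-based ranker needs a hand-written rule for every scenario, while the example-driven ranker simply reorders products by what shoppers have clicked.

```python
# Hypothetical sketch: hand-written ranking rules vs. ranking learned from clicks.
# Product names and click data are made up purely for illustration.

from collections import Counter

PRODUCTS = ["knit cap", "baseball cap", "ski goggles", "sun hat"]

# Rule-based approach: every scenario must be spelled out by hand.
def rank_by_rules(query, season):
    if "hat" in query and season == "winter":
        return ["knit cap", "baseball cap", "sun hat", "ski goggles"]
    if "hat" in query and season == "summer":
        return ["sun hat", "baseball cap", "knit cap", "ski goggles"]
    return PRODUCTS  # ...and so on, one rule per scenario

# Example-driven approach: rank by what shoppers actually clicked.
CLICK_LOG = [
    ("hat", "knit cap"), ("hat", "knit cap"), ("hat", "baseball cap"),
    ("hat", "knit cap"), ("hat", "sun hat"),
]

def rank_by_examples(query, click_log):
    clicks = Counter(product for q, product in click_log if q == query)
    return sorted(PRODUCTS, key=lambda p: clicks[p], reverse=True)

print(rank_by_rules("hat", "winter"))
print(rank_by_examples("hat", CLICK_LOG))
```

The examples replace the rules, but someone still has to know which examples (here, clicks on relevant products) actually demonstrate the behavior they want.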


Understanding how to get to the right answer enables you to generate the right examples for training. Despite what the best poker players claim, it's very difficult to know from images or footage whether someone you've just met is lying, or whether they're just socially anxious or uncomfortable for some other reason. On the other hand, it's easy to digitally track which products are commonly bought together, and which brands are preferred in each market, to make the right recommendation the way a good human sales associate would, just faster and across a much broader range of products. When you look for a place to apply AI, think about where it can do more of the things you know how to train someone to do well, and steer clear of things that you can't teach.


Myth #2: Taking humans out of a process and replacing them with an AI model removes bias

We humans know that we are very flawed creatures prone to prejudice, and it's tempting to think that taking ourselves out of a process and ceding control to an unfeeling, disinterested computer would protect it from bias. Unfortunately, AI models are inextricably tied to the humans who create them; where there is a risk of human bias, replacing humans with an AI could very well amplify those biases. Some of these instances of AI prejudice have been well covered: a Google computer vision algorithm labeled photos of black people as gorillas, another algorithm misidentified men standing in kitchens as women, and an Amazon recruiting tool rated female candidates lower than male candidates for technical roles.

There are two common sources of AI model bias, and both are equally difficult to avoid. The first source is sample bias, where the set of data on which you trained your model does not accurately represent ground truth. Increasing the volume of training data may not be enough to "even out" a skewed distribution if the way the data was collected was not randomized.
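A toy illustration of why more data doesn't fix a skewed collection process (the numbers and sampling rule below are made up for demonstration): the population is half group A, but the collection process under-samples A, so the estimate converges to the wrong share no matter how large the sample grows.

```python
# Toy illustration of sample bias: if the sampling process itself is skewed,
# collecting more data converges to the wrong answer, not the right one.
import random

random.seed(0)

# Ground truth: 50% of the population belongs to group A.
population = ["A"] * 500_000 + ["B"] * 500_000

def biased_sample(n, p_keep_a=0.2):
    # Group A members are under-represented in what we manage to collect.
    sample = []
    while len(sample) < n:
        person = random.choice(population)
        if person == "B" or random.random() < p_keep_a:
            sample.append(person)
    return sample

for n in (1_000, 10_000, 100_000):
    share_a = biased_sample(n).count("A") / n
    print(f"n={n:>7,}  estimated share of A = {share_a:.2f}  (true share = 0.50)")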


The second source is encoded bias, where bias is introduced through unconscious human prejudices; a model trained on data from the 1950s to predict a person's gender from their occupation might (correctly, for that era) guess that a nurse is a woman and a doctor is a man, because most doctors at the time were men and many women who wanted to work in medicine were steered toward nursing. Blinding the AI model to variables such as race and gender that lead to problematic associations won't protect the model from bias, because race and gender are latently encoded in many other non-obvious aspects of our lives.
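Here is a toy sketch of that latent encoding, with fabricated data: even when the "blinded" view never sees gender, an occupation column that tracks gender closely reproduces the same disparity.

```python
# Toy illustration of encoded bias: dropping the protected attribute doesn't help
# when another feature is a near-perfect proxy for it. Data is fabricated.
historical = [
    # (gender, occupation, was_hired) -- reflects a biased historical process
    ("F", "nurse", 0), ("F", "nurse", 0), ("F", "nurse", 1),
    ("M", "doctor", 1), ("M", "doctor", 1), ("M", "doctor", 1),
]

def hire_rate(rows, key):
    groups = {}
    for gender, occupation, hired in rows:
        groups.setdefault(key(gender, occupation), []).append(hired)
    return {k: sum(v) / len(v) for k, v in groups.items()}

# A "blinded" model only sees occupation, yet occupation stands in for gender,
# so the historical disparity is reproduced almost exactly.
print(hire_rate(historical, key=lambda g, o: g))  # grouped by gender
print(hire_rate(historical, key=lambda g, o: o))  # grouped by occupation (the proxy)
```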


AI models are trained on limited samples of the real world because it would be impossible to train them on every single bit of data available, just as cartoons convey information to us by approximating the real world with only a small fraction of its detail instead of mapping out every single atom. Think of a cartoon character who is identifiably a grandmother: we have all met plenty of grandmothers who look nothing like one another, but conveying attributes commonly associated with grandmothers, like using a walker and knitting, makes relaying that information more efficient even as it perpetuates a stereotype. We face similar problems when engineering an AI model for real-world applications, trading off performance for accuracy or vice versa: every single decision about how to design an AI model or connect it to an application is an opportunity to inadvertently introduce bias into the model, reflecting the engineer's priorities, which may differ from the end user's.


The most effective way to mitigate AI bias, given all these vulnerabilities, is the same way we manage bias among humans: deciding what an equitable outcome should look like and instituting checks and balances to make sure those desired outcomes are maintained. We see this in consumer finance regulations that require lenders to explain why loan applicants were denied; such regulations may slow down full automation of lending, but they offer some protection to financially stable individuals who belong to groups the model deemed "too risky" and who would otherwise have been denied a loan. Data scientists at Meetup went a step further, scrapping a model altogether when they realized that their event recommendation model did not recommend any technical events to women.


Myth #3: Data is the new oil and AI enables “winner takes all” dynamics

AI models require a lot of data to perform well; the more data an AI model has to train with, the better it performs. It should follow that the company with the most data will have the best model, which will help it collect even more data, ensuring that competitors can never catch up. Using data and AI, the winner could dominate its whole category.


In truth, using data to build a defensive moat is much less straightforward. Data can be used as a barrier to entry only if the data is hard to collect quickly, is not substitutable with other equally sized but different datasets, and if the AI model continues to benefit from additional data. As researchers at Microsoft Research have found, for any single, well-defined learning task, performance improvements from additional data diminish pretty quickly once the model approaches its performance plateau. In other words, after a certain point, additional data yields diminishing returns.
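As a rough illustration of that plateau (an assumed power-law learning curve with made-up parameters, not the actual Microsoft Research results), each tenfold increase in data buys a smaller and smaller gain:

```python
# Illustrative learning curve for a single, well-defined task: accuracy approaches
# a ceiling as a power law, so each extra batch of data buys less improvement.
# CEILING, A, and B are assumed values chosen only for demonstration.
CEILING = 0.95   # assumed best achievable accuracy for the task
A, B = 0.5, 0.4  # assumed curve parameters

def accuracy(n_samples):
    return CEILING - A * n_samples ** -B

prev = accuracy(1_000)
for n in (10_000, 100_000, 1_000_000, 10_000_000):
    cur = accuracy(n)
    print(f"n={n:>10,}  accuracy={cur:.3f}  gain over previous decade={cur - prev:+.3f}")
    prev = cur
```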

Figure 1. The value of data as a function of the number of samples. Credit: Nicole Immorlica, Microsoft Research

Figure 2. The value of data as a function of the number of samples in a typical ML application, such as machine vision. Credit: Nicole Immorlica, Microsoft Research


Fortunately for many AI model-driven businesses, solutions built around AI are much more complex than single tasks. The same Microsoft Research team found that, for AI solutions composed of many different tasks, the performance curve and the value of the data continue to increase with additional samples, as illustrated in Figure 2. For complex applications of AI, more data keeps improving performance, which makes the product more attractive to customers, creating a compounding positive feedback loop and winner-takes-all dynamics. Businesses built around AI models that continue to improve with additional data are great candidates for aggressive capital raises, because funding that enables them to quickly acquire more data sources will help them jump ahead of the competition, making it increasingly hard for new entrants to catch up.
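Extending the same toy curve (again, illustrative assumptions rather than the Microsoft Research model), if each new slice of data also unlocks additional tasks the product can support, the aggregate value keeps climbing instead of flattening out:

```python
# Illustrative extension of the toy curve to a multi-task AI solution: assume each
# new slice of data both improves existing tasks and unlocks additional tasks,
# so aggregate value keeps growing. All parameters are assumptions for illustration.
CEILING, A, B = 0.95, 0.5, 0.4

def task_accuracy(n):
    return max(0.0, CEILING - A * n ** -B)

def solution_value(total_samples, samples_to_unlock_task=50_000):
    # Number of distinct tasks the product can credibly support at this data volume.
    n_tasks = 1 + total_samples // samples_to_unlock_task
    per_task = total_samples / n_tasks
    return n_tasks * task_accuracy(per_task)

for n in (50_000, 500_000, 5_000_000, 50_000_000):
    print(f"n={n:>11,}  aggregate value ~ {solution_value(n):.1f}")
```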

Start your journey prepared

AI enables us to solve more complex problems than ever before, but that increased complexity also makes building an AI model-driven business much more complicated than what the well-worn startup playbooks cover. We are only seeing the earliest stages of the changes AI will bring about, and many of the strategies for building a successful AI business will continue to be developed by trial and error. Starting your journey well calibrated on the realities of AI will increase your chances of success.

