Top 5 Places NOT to use AI

Jun 7, 2018

“AI can help tame America’s exploding healthcare costs”

“Ireland’s AI Revolution”

“AI and Big Data: The Future of the Digital World”

Unless you’ve been living under a rock, you probably know the hype around artificial intelligence (AI) is at extreme levels. AI is actually a complex field, so we’ll start by briefly defining it. At a high level, AI is the use of computers to act “intelligent” without relying on a person with a brain to be involved at every step. Within AI, there are several related fields that strive to create different forms of intelligence that accomplish different things. Below are a few of those fields, along with one or more technologies you may have heard of; feel free to read this Wikipedia article for more information:

  • Knowledge representation – ontologies, knowledge graphs
  • Planning and design – evolutionary algorithms like this very fun example with vehicles.
  • Classification and regression – machine learning, supervised learning, unsupervised learning
  • Natural language processing – Amazon Echo, Apple Siri, Google Home
  • Perception – computer vision with Google Lens and Pixt (though these also use machine learning)
  • General – HAL-9000 and SkyNet

CVP first started using supervised learning to perform classification and regression a decade ago at a wireless telecom, where we trained a computer to recognize patterns in historical data with known outcomes. Using dozens of facts about each customer, we built predictive models of customer behavior that predicted:

  • which customers were at a higher risk of canceling their service in the next thirty days (classification); and
  • how long each customer’s expected tenure was (regression).

The features of the models that were found to be significant subsequently helped drive changes to several business processes and, ultimately, greater customer retention. Having done this type of work for so long, we have learned a few things about what AI can and cannot do, and we have gained the experience to recognize the right use cases for these different technologies.
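For the curious, here is a minimal sketch of what that kind of supervised learning looks like in code. Everything in it is hypothetical (the file, column names, and model choices are placeholders, not our client’s actual setup); it simply illustrates training one classifier and one regressor on the same customer facts.

```python
# Hedged, minimal sketch of churn classification and tenure regression.
# The CSV file, column names, and model choices are hypothetical placeholders;
# features are assumed to already be numeric.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

customers = pd.read_csv("customer_history.csv")  # hypothetical historical data
features = customers.drop(columns=["canceled_in_30_days", "tenure_months"])

# Classification: which customers are at higher risk of canceling in 30 days?
X_train, X_test, y_train, y_test = train_test_split(
    features, customers["canceled_in_30_days"], test_size=0.2, random_state=42)
churn_model = GradientBoostingClassifier().fit(X_train, y_train)
print("Churn accuracy:", churn_model.score(X_test, y_test))

# Regression: how long is each customer's expected tenure?
X_train, X_test, t_train, t_test = train_test_split(
    features, customers["tenure_months"], test_size=0.2, random_state=42)
tenure_model = GradientBoostingRegressor().fit(X_train, t_train)
print("Tenure R^2:", tenure_model.score(X_test, t_test))

# The most significant features are the kind of thing that drove our
# business-process changes.
top = sorted(zip(churn_model.feature_importances_, features.columns), reverse=True)
print(top[:10])
```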

Using that experience, here is a review of five scenarios you might have heard about in relation to AI, and why they aren’t ready for prime time yet.

Scenario 1 – Using AI and automated conversations to eliminate the customer service department

The Facts:

AI cannot have an unbounded conversation, nor is it anywhere near being able to work through the typical problems people call into a call center for. The state of the art right now was demonstrated with Google’s Duplex, where AI held a natural-sounding conversation for one very specific use case: booking a hair appointment. Google’s research department, now called “Google AI”, has a budget larger than the revenue of many public companies, and it is on the third generation of custom-developed AI chips. If you don’t have hundreds of millions of dollars to invest in this use case, consider starting with lower-hanging fruit that can be accomplished using more achievable components of AI.

The Verdict:

  • Hold off on: AI for general customer service interactions for complex problems like taxes or credit card fraud.
  • Try: Natural language processing that does not require general knowledge and understanding. Use chatbots to handle simple interactions like checking the balance on an account.
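To make the “Try” concrete, here is a minimal sketch of a narrow, intent-based chatbot; the intents and the balance lookup are hypothetical stand-ins. The point is what it does not attempt: anything outside its few known intents goes straight to a person.

```python
# Minimal sketch of a narrow chatbot: a few known intents, no open conversation.
# The intents and the balance lookup are hypothetical placeholders.
import re

def get_balance(account_id: str) -> str:
    return "$1,234.56"  # stand-in for a real account-system lookup

INTENTS = [
    (re.compile(r"\b(balance|how much)\b", re.I),
     lambda: f"Your current balance is {get_balance('demo-account')}."),
    (re.compile(r"\b(hours|open|close)\b", re.I),
     lambda: "We are open 9am to 5pm, Monday through Friday."),
]

def reply(message: str) -> str:
    for pattern, handler in INTENTS:
        if pattern.search(message):
            return handler()
    # Anything outside the narrow scope escalates to a human.
    return "Let me connect you with a representative."

print(reply("What's my balance?"))                      # handled by the bot
print(reply("I think my card was used fraudulently."))  # escalated
```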

Scenario 2 – Eliminating bias in a business process without a lot of planning

The Facts:

Bias based on gender, race, sexuality, disability, and other factors is a real problem around the world and technology is not immune to it. Some people think that by taking a person out of the loop, a computer can eliminate the problem. What they don’t realize is they may actually be hardwiring bias into the process without being able to monitor it.

As we discussed earlier, when organizations use machine learning, they often use supervised learning to train a computer. Unfortunately, this supervised learning can inadvertently lead to bias in several ways:

  • Copying human bias. Supervised learning requires training data, and that data should be as realistic and comprehensive as possible to get the best performance out of your model. If you train your model on data that was created by humans with bias, the computer will readily copy that bias. And since many machine learning algorithms are non-interpretable, you may not even know it is happening.
  • Inadvertent correlations. Let’s say that you want to make an unbiased machine learning model for assigning creditworthiness to new customers. You remove names, ethnicity, and other factors that might be correlated with race, but leave in the customer’s current zip code because that is important in a credit decision. Suddenly, your model starts rejecting certain ethnic minorities at twice the rate of others. Your model has rediscovered redlining, a practice dating back a century in which less affluent neighborhoods that happened to be predominantly minority-occupied were denied credit. Not a good way to train the computer.

The Verdict:

  • Hold off on: Letting a machine learning model pick up any data available in the system without a person determining whether that data should be used, for ethical or business reasons. For an example, look at a recent study that found a model used to predict criminal behavior was not very accurate and was highly biased against black offenders.
  • Try: Machine learning to assist in the process of classification or prediction, but regularly review and re-review the predictions for bias (see the sketch below). This not only ensures you are acting ethically and legally, but also that your model’s overall predictive power does not decay over time in a changing market.
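One hedged sketch of what “regularly review the predictions” can look like in practice: a disparate-impact check that compares approval rates across groups. The audit extract and column names are hypothetical, and the 80% threshold is a common rule of thumb, not a legal standard.

```python
# Minimal sketch of a recurring bias review using a disparate-impact check.
# The audit extract and column names are hypothetical placeholders.
import pandas as pd

decisions = pd.read_csv("recent_credit_decisions.csv")
# `approved` is the model's 0/1 decision; `group` is demographic data kept OUT
# of the model's features but retained separately, purely for auditing.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

ratio = rates.min() / rates.max()
if ratio < 0.8:  # common "80% rule" of thumb for disparate impact
    print(f"WARNING: approval-rate ratio {ratio:.2f} suggests possible bias.")
```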

Scenario 3 – Implementing AI in situations where not a lot of data is available

The Facts:

Generally speaking, most forms of AI work by:

  • analyzing lots of historical data on outcomes in the area you’re interested in; or
  • analyzing lots of historical data on outcomes in a closely related area that generalizes to yours.

The latter technique is called transfer learning and involves finding a nearby domain with plentiful training data to “jumpstart” your machine learning model. For example, assume you are building a system to drive a golf cart around a golf course. There are not many companies in that business, but there are plenty of companies and datasets available for training a computer to drive a car. Since many of the same problems apply (staying on the road, staying off the grass, stopping at stop signs), you could take a model trained to recognize automobile obstacles, snip off the final layer(s), and then train it on golf cart obstacles.
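Here is a hedged sketch of that recipe using Keras: a network pretrained on everyday images has its top snipped off, and only a small new final layer is trained. The dataset path and the five obstacle classes are hypothetical.

```python
# Hedged sketch of transfer learning: reuse a network pretrained on everyday
# images, snip off its top, and train a small new head on golf-course data.
# The dataset path and the five classes are hypothetical placeholders.
import tensorflow as tf

# Pretrained convolutional base with its final classification layer removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze the general-purpose visual features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    # New final layer: hypothetical golf-cart obstacle categories.
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A modest labeled golf-course dataset now goes much further than it would
# if we trained the whole network from scratch.
train = tf.keras.preprocessing.image_dataset_from_directory(
    "golf_course_images/", image_size=(224, 224))
model.fit(train, epochs=5)
```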

If you are unable to get a large training set or to use transfer learning, you should figure out how you are going to generate a lot of training data before making a heavy investment in AI. On some of our projects, over 80% of the labor was spent designing surveys, fine-tuning them, and then answering them en masse to create enough training data for a high quality model.

The Verdict:

  • Hold off on: AI in a new line of business or government program where you don’t know how your consumers/customers/citizens will react and don’t yet know the right responses to the problems that arise. Collect some high quality data, analyze it, and then decide whether AI will help.
  • Try: Supervised learning where you have a lot of examples of how things should have worked and a broad dataset for identifying problems.

Scenario 4 – Creating complex responses

The Facts:

Similar to the problem with chatbots: if the business process you’re hoping to automate requires a response written in proper English from scratch, be aware that AI and natural language processing techniques are not good at free-form writing.

As an industry, we are not very close to AI that can think or generalize concepts even at the level of a first grader. There are several good examples of computers writing long-form articles, but they are all formulaic in nature, such as a summary of a sports game or the movement of a stock. You can see an example here from Google where the output is legible but repetitive. In other words, people have created AI that can do an advanced form of Mad Libs, but there is no AI available today that can write a believable editorial or a proposal for a new system.
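To show how modest today’s “advanced Mad Libs” really is, here is a minimal sketch of template-based generation with hypothetical game statistics; no intelligence is involved, only slot-filling.

```python
# Minimal sketch of the "Mad Libs" style of generation that works today:
# filling a fixed template from structured data. The game stats are hypothetical.
TEMPLATE = (
    "{winner} defeated {loser} {w_score}-{l_score} on {day}. "
    "{star} led {winner} with {points} points."
)

game = {
    "winner": "Springfield", "loser": "Shelbyville",
    "w_score": 98, "l_score": 91, "day": "Friday",
    "star": "J. Smith", "points": 31,
}

print(TEMPLATE.format(**game))
# Legible, but formulaic: every article reads the same way, which is exactly
# why free-form proposals and editorials remain out of reach.
```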

The Verdict:

  • Hold off on: AI to write proposals or judicial opinions.
  • Try: Natural language processing and generation to respond to straightforward questions with a simple answer. Answering a customer’s question about their current account balance or whether a check for a certain amount cleared is perfectly reasonable.

Scenario 5 – Unilaterally automated actions in an adversarial environment

The Facts:

Deep learning is a special form of machine learning that uses complex neural networks and lots of data to perform impressive acts of classification on high-dimensional data like images and audio. We’ve even worked with a customer to create a mobile app that can recognize the style of clothing (e.g., a red dress with a v-neck, 3/4 sleeves, mini length) better than a human, in a fraction of a second. Just as a person doesn’t consciously examine every detail their eye receives before recognizing an item, a computer uses shortcuts to avoid examining every pixel of an image. These shortcuts are helpful, but because computers use different (and perhaps less thorough) shortcuts than humans, they can be fooled in creative ways.

Researchers from Google realized that by placing specially crafted stickers into a picture, they could totally confuse or mislead formerly high-performing models in ways a human never would be. In one figure from their paper, they demonstrate how a computer model went from being supremely accurate in identifying a banana to confidently declaring the banana a toaster.

If you are using AI (especially non-interpretable deep learning) for a business process that is adversarial in nature, where there are incentives for data to be manipulated, you need to be very careful that your model cannot easily be exploited. If your automation removes safety checks, you may create a process that is suddenly attacked in ways that would not have worked before, because (most) humans have common sense that AI does not.
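As an illustration of that fragility, here is a hedged sketch of the fast gradient sign method (FGSM), a well-known research attack. It is simpler than the sticker attack above, but it exploits the same kind of shortcut: a tiny, targeted nudge to every pixel that flips the model’s answer.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a well-known attack
# on image classifiers. This is not Google's sticker attack, but it exploits
# the same kind of shortcut the text describes.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def adversarial(image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that most increases loss.

    `image` is a (1, 224, 224, 3) tensor preprocessed into [-1, 1];
    `true_label` is the correct ImageNet class index.
    """
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(tf.constant([true_label]), model(image))
    gradient = tape.gradient(loss, image)
    # The change is imperceptible to a human but can flip the model's answer.
    return tf.clip_by_value(image + epsilon * tf.sign(gradient), -1.0, 1.0)

# Feeding adversarial(banana_image, banana_class) back through `model` will
# often yield a confident, wrong prediction: banana to toaster, in spirit.
```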

The Verdict:

  • Hold off on: AI that can be judge, jury, and executioner in a high-stakes, adversarial environment. Instead, try making the AI your “first pass” at combing through mounds of data, but have a human verify the classification and push a button to confirm the final action.
  • Try: Machine learning to detect simpler problems such as fraud, while making sure you use constantly updated, real-world data to train your model. Make sure humans review a sampling of the classifications over time for any exploits. Look into techniques like generative adversarial networks, which can improve the resilience of your model by having it compete with another model.
