Creating Trust in Your AI System

Seemingly not a day goes by without a major brand claiming to have added AI to its existing services to optimize the customer experience. Of course, one reason is simply to cut costs by automating people away, but that’s a story for another article. Today, we focus on the issues that arise when including AI in your service offers and discuss ways to handle the expectations of customers when they interact with your (hopefully) intelligent service.
There are three main issues people run into when they interact with your AI-enriched service. All are related to what is called the uncanny valley: the unease that arises when a system almost behaves intelligently but fails to do so completely. So, here are the three main guidelines:

  1. Transparency
  2. Expectation Handling
  3. Explanations

Transparency

Tell the customer that she is dealing with an AI/automated system. If you have a chatbot that is used to access your services, introduce the chatbot as the artificial being that the customer interacts with. We recommend using a persona. For example, we call our chatbots LEIA – logic enhanced intelligent agent – and LEO – logic enhanced operator.
If you provide recommendations to your customers, explain that these were generated by an algorithm. For example, adding a phrase like “These recommendations were generated automatically based on your purchase history” tells your customers that you used their data to come up with the recommendations. Some customers will find the recommendations useful; others will find them off the mark. This gives you the opportunity to learn by asking whether the recommendations were appropriate.
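
To make this concrete, here is a minimal Python sketch of how such a disclosed recommendation message could be assembled. The function name and the feedback prompt are illustrative, not part of any real API:

def build_recommendation_message(recommendations):
    # Wrap algorithmic recommendations in a transparent, feedback-seeking reply.
    lines = ["These recommendations were generated automatically based on your purchase history:"]
    lines += ["  - " + title for title in recommendations]
    # Asking for feedback lets you learn whether the recommendations were appropriate.
    lines.append("Were these suggestions useful? [Yes] [No]")
    return "\n".join(lines)

print(build_recommendation_message(["Book A", "Book B", "Book C"]))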

Expectation Handling

Tell the customer what the system is capable of. For example, if you have a chatbot for placing orders, tell your customer that she can do that. Often, customers try out other things as well, just out of curiosity. This is an opportunity for you, because you can add “hidden” features that customers can discover. Here is a fictional conversation with a bookstore chatbot that provides an ordering feature.

Customer: owner book recommendations

Chatbot: Whoa! You discovered a feature that is not yet available to the public. Do you want to try? Can’t hurt… 😉

This creates additional engagement and makes interactions more interesting. The playful approach also gives you more insight into the way people are using your system. Basically, every edge case you can think of is a chance to engage your customers, learn from them, and have some fun.
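
As a rough sketch, assuming a toy keyword matcher in place of a real NLU component, the hidden feature can simply be another intent with a playful reveal:

HIDDEN_FEATURES = {"book_recommendations"}

def detect_intent(message):
    # Toy keyword matching; a real chatbot would use an NLU service here.
    text = message.lower()
    if "order" in text:
        return "place_order"
    if "recommend" in text:
        return "book_recommendations"
    return "unknown"

def reply(message):
    intent = detect_intent(message)
    if intent in HIDDEN_FEATURES:
        # Playful reveal of a not-yet-public feature, as in the conversation above.
        return "Whoa! You discovered a feature that is not yet available to the public. Do you want to try?"
    if intent == "place_order":
        return "Sure, which book would you like to order?"
    # Unknown requests are edge cases worth logging: they tell you what customers actually want.
    return "I can't do that yet, but I have noted it down!"

print(reply("book recommendations"))
print(reply("I want to order a book"))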

Explanations

Explain what the system is capable of and what it is not. This is strongly related to expectation handling and transparency. Basically, you have to explain what is happening. There is no need to get too technical at this point, since most customers don’t care what kind of machine learning API you are using. Here’s an example:

Hello, I’m LEIA. I’m your humble digital assistant. I can help you with:
[Getting a list of current bestsellers]
[Getting personal recommendations]
[Ordering a book]

This tells the customer that she is dealing with a digital (non-human) tool and lists the capabilities of the service. In addition to introducing your persona, you are also being transparent and managing expectations. Recommendations are another example: a sentence like “Customers who bought this book also bought these” tells customers that the recommendations are calculated by some service in the background using data from other customers.
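
A greeting like the one above is easy to generate from a list of supported capabilities. The sketch below uses the LEIA persona from this article; the capability labels are just the ones shown in the example:

CAPABILITIES = [
    "Getting a list of current bestsellers",
    "Getting personal recommendations",
    "Ordering a book",
]

def greeting(persona="LEIA"):
    # Introduce the persona, then list what the service can do to manage expectations.
    lines = ["Hello, I'm " + persona + ". I'm your humble digital assistant. I can help you with:"]
    lines += ["[" + capability + "]" for capability in CAPABILITIES]
    return "\n".join(lines)

print(greeting())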

Making AI products and services is an exciting new field with lots of interesting challenges. If you keep these three principles in mind, you have a good chance to succeed.
