There are two sides to the AI coin: opportunity and risk. If managed responsibly, AI is a powerful technology that can drive profits and growth.
Generative Artificial Intelligence (Gen AI) holds incredible potential for customer service environments. Businesses can use it as part of their solution to automate routine interactions, helping cut costs and enrich the customer experience (CX). However, like any technology, a haphazard or imprecise AI deployment can degrade customer communications in costly and reputationally damaging ways, as the recent Air Canada chatbot ruling shows. By learning from Air Canada's mistakes and using Gen AI properly inside the customer service environment, companies can avoid a similar PR debacle and safeguard CX.
In November 2022, Jake Moffatt booked a bereavement flight with Air Canada. As he researched flights, he used Air Canada’s chatbot, which advised that he could apply for bereavement fares and receive a discount by filing a claim after his flight. When Moffatt attempted to obtain the discount, Air Canada customer support employees told him that the chatbot’s replies were incorrect and nonbinding. Ultimately, the airline did not provide a discount because Moffatt did not follow proper procedure.
During the civil tribunal that followed, Air Canada argued that its official bereavement policy prohibited retroactive discounts. It also claimed that because the chatbot was a "separate legal entity," it could not be held liable for the information the bot provided to Moffatt. However, in February 2024, the tribunal ruled in favor of Moffatt, concluding that Air Canada had committed "negligent misrepresentation." The ruling explained that a chatbot is just as much a part of the airline's website as a static webpage and that Air Canada is responsible for all of the information on its website, incorrect or otherwise.
A common but easily preventable mistake is letting a chatbot search the open Internet for information when it receives a question. This creates several problems: the chatbot can hallucinate, producing responses that contain false or misleading information, and, as in Air Canada's case, it has no ultimate authority on which to base its output, leaving customers facing conflicting or incorrect information.
Companies can avoid the issue Air Canada experienced by implementing what's known as a Retrieval-Augmented Generation (RAG) architecture. In this framework, the Gen AI-powered chatbot draws its answers from a private, secure, internal knowledge base, such as a policy manual. Once it retrieves the information it needs, the chatbot plugs that data into a contextualized response that keeps the interaction with the customer conversational, engaging and enjoyable. These constraints keep the chatbot from generating responses that conflict with the internal database. Moreover, keeping the chatbot up to date on policy changes is as simple as refreshing the database.
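To make the idea concrete, here is a minimal, hypothetical sketch of the retrieval step in Python. The policy passages, the naive keyword scoring, and the call_llm placeholder are all illustrative assumptions, not any vendor's actual API; a production system would use embeddings and a vector store. The point is that the prompt sent to the model contains only internal policy text, so the response stays anchored to the approved source.

```python
# Minimal RAG sketch: the chatbot answers only from an internal policy manual,
# never from the open Internet. Retrieval here is naive keyword overlap.

POLICY_MANUAL = {
    "bereavement-fares": (
        "Bereavement fares must be requested before travel. "
        "Discounts cannot be applied retroactively after the flight."
    ),
    "baggage": "Each passenger may check one bag up to 23 kg at no charge.",
}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Score each policy passage by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        POLICY_MANUAL.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved policy text only."""
    context = "\n".join(passages)
    return (
        "Answer the customer's question using ONLY the policy text below. "
        "If the policy does not cover the question, say you will connect "
        "them with an agent.\n\n"
        f"Policy:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whichever hosted model the company uses."""
    raise NotImplementedError

if __name__ == "__main__":
    question = "Can I get a bereavement discount after my flight?"
    print(build_prompt(question, retrieve(question)))
```

Because every answer is grounded in the same document the company's human agents follow, updating that document updates the chatbot's behavior at the same time.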
Another risk is that a chatbot's internal database could contain biased or skewed data, resulting in outputs that favor certain groups over others. To overcome this challenge, companies should partner with a trusted AI vendor such as Microsoft Azure, a leader in responsible AI that provides features like bias adjustment to minimize harmful outputs. Microsoft Azure also offers content filtering capabilities, which help ensure the chatbot doesn't use inappropriate language or leak sensitive customer information. Likewise, these guardrails can prevent hijacking, where someone attempts to get the chatbot to do something contrary to its purpose.
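In practice, these guardrails sit between the model and the customer. The sketch below is a simplified, hypothetical illustration of such a filter in Python, not Azure's actual API; the blocked phrases and regular expressions are stand-ins for the managed checks a platform provides.

```python
import re

# Illustrative output guardrail: screen the draft reply for sensitive data
# and off-purpose content before it reaches the customer. A real deployment
# would also screen the user's input for hijacking attempts.

BLOCKED_PHRASES = {"ignore previous instructions", "system prompt"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def apply_guardrails(draft_reply: str) -> str:
    lowered = draft_reply.lower()
    # Off-purpose or hijacked content: refuse rather than comply.
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "I'm sorry, I can only help with questions about our services."
    # Sensitive-data leakage: redact before sending.
    draft_reply = EMAIL_RE.sub("[redacted email]", draft_reply)
    draft_reply = CARD_RE.sub("[redacted number]", draft_reply)
    return draft_reply

print(apply_guardrails("Your card 4111 1111 1111 1111 is on file at a@b.com"))
```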
In addition to applying responsible AI principles, companies must use analytics and routine testing to optimize CX and efficiency within the customer service environment. One crucial metric is containment: the rate at which the chatbot takes a customer interaction from start to resolution without human intervention. If the analytics reveal that containment is dipping and customers are abandoning interactions, the policy manual feeding the RAG architecture may need editing.
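As a simple illustration, containment can be tracked directly from interaction logs; the log fields below are hypothetical and will differ by platform.

```python
# Containment rate: share of chatbot conversations resolved without a human.

def containment_rate(interactions: list[dict]) -> float:
    contained = sum(
        1 for i in interactions if i["resolved_by_bot"] and not i["abandoned"]
    )
    return contained / len(interactions) if interactions else 0.0

logs = [
    {"resolved_by_bot": True,  "abandoned": False},
    {"resolved_by_bot": False, "abandoned": True},
    {"resolved_by_bot": True,  "abandoned": False},
]
print(f"Containment: {containment_rate(logs):.0%}")  # Containment: 67%
```

Watching this number over time, and alongside abandonment, shows whether a knowledge-base change actually improved the customer experience or quietly degraded it.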
The truth is that human customer service representatives make the same mistakes chatbots do all the time. Ultimately, at a macro level, the fault lies with the company for inadequately training its employees and/or its chatbot. The difference is that, unlike a human, once the company implements the proper adjustments, a chatbot will never make the same mistake again. Nevertheless, companies that want to avoid ending up in the same mess as Air Canada should train their chatbots thoroughly so that when they err, it is a minor blunder rather than a brand-tarnishing experience or, worse yet, an outright catastrophe.
About the Author:
Matt Edic is CXO at IntelePeer.