Lessons Learned from the Air Canada Chatbot Ruling - Industry Today - Leader in Manufacturing & Industry News

Industry’s Media Platform of Choice
Champion Your Brand in Front of Decision Makers and Extend Your Reach Get Featured in the SPOTLIGHT

 

April 9, 2024 Lessons Learned from the Air Canada Chatbot Ruling

There are two sides to the AI coin: opportunity & risk. If managed responsibly AI is a powerful technology that can drive profits and growth.

Generative Artificial Intelligence (or Gen AI) holds incredible potential for customer service environments. Businesses can use it as part of their solution to automate routine interactions, helping cut costs and enrich the customer experience (CX). However, like any technology, a haphazard or imprecise AI development can degrade customer communications in costly and reputationally damaging ways, e.g., the recent Air Canada chatbot ruling. By learning from the mistakes of Air Canada and utilizing Gen AI inside the customer service environment properly, companies can circumvent a similar PR debacle and safeguard CX.  

What Happened with Air Canada’s Chatbot? 

In November 2022, Jake Moffatt booked a bereavement flight with Air Canada. As he researched flights, he used Air Canada’s chatbot, which advised that he could apply for bereavement fares and receive a discount by filing a claim after his flight. When Moffatt attempted to obtain the discount, Air Canada customer support employees told him that the chatbot’s replies were incorrect and nonbinding. Ultimately, the airline did not provide a discount because Moffatt did not follow proper procedure.

During the civil tribunal that followed, Air Canada argued that its official bereavement policy prohibited retroactive discounts. They also claimed that because the chatbot was a “separate legal entity,” they could not be held liable for the information it provided to Moffatt. However, in February 2024, the tribune ruled in favor of Moffatt, concluding that Air Canada committed “negligent misrepresentation.” The ruling explained that a chatbot was just as much a part of the airline’s website as a static webpage and that Air Canada is responsible for all of the information, incorrect or otherwise, on its website.

Minimizing Errors and Boosting CX with a RAG-Based Architecture 

An easily preventable mistake companies make when their chatbots receive questions is having that bot search the Internet for information to generate a response. This presents several problems where the chatbot could create hallucinations or responses that contain false or misleading information. Also, like Air Canada, the chatbot has no ultimate authority on which to base its output, causing situations where the customer encounters conflicting or incorrect information.  

Companies can avoid the same issue Air Canada experienced by implementing what’s known as a Retrieval Augmented Generation or RAG-based architecture. In this framework, the Gen AI-powered chatbot uses a private, secure and internal database, such as a policy manual, to construct its responses. Once it retrieves the information it needs, the chatbot will plug that data into a contextualized response that keeps the interaction with the customer conversational, engaging and enjoyable. These parameters ensure chatbots never generate a response that does not align with the internal database. Moreover, keeping the chatbot up-to-date on policy changes is as simple as refreshing the database.

Responsible AI and Routine Testing 

Another issue with chatbots is that their internal database could contain biased or skewed data, resulting in outputs that favor certain groups over others. To overcome this challenge, companies should partner with a trusted AI vendor like Microsoft Azure, a leader in responsible AI providing features like bias adjustment that can minimize harmful outputs. Microsoft Azure also offers content filtering capabilities, which ensure the chatbot doesn’t use inappropriate language or leak sensitive customer information. Likewise, these guardrails can prevent hijacking, where someone attempts to get a chatbot to do something contrary to its purpose.

In addition to utilizing responsible AI principles, companies must use analytics and routine testing to optimize CX and efficiency within the customer service environment. One crucial metric is containment, or the rate at which chatbots can successfully take a customer interaction from start to resolution without human intervention. If the analytics reveal that containment is dipping and customers are abandoning interactions, perhaps there is something wrong with the policy manual in the RAG-based architecture that requires editing.

Both Chatbots and Humans Make Mistakes

The truth is that the same mistakes chatbots make, human customer service representatives make all the time. Ultimately, at a macro level, the fault resides with the company for inadequately training their employee and/or chatbot. The difference, however, is that, unlike a human, when a chatbot makes a mistake, once the company implements the proper adjustments, the chatbot will never make that same mistake again. Nevertheless, if companies want to avoid ending up in the same mess as Air Canada, it is better to thoroughly train chatbots so that when they err, it is a minor blunder versus a brand-tarnishing experience or, worse yet, an outright catastrophe.  

matt edic intelepeer
Matt Edic

About the Author:
Matt Edic is CXO at IntelePeer.

 

Subscribe to Industry Today

Read Our Current Issue

Made To Stay: Attracting Gen Z Into Manufacturing

Most Recent EpisodeAn Ambition To Be a Great Leader

Listen Now

A childhood in Kansas, college in California where she met her early mentor, Leigh Lytle spent 15 years in the Federal Reserve Banking System and is now the 1st woman President & CEO of the Equipment Leasing & Finance Association. Join us to hear about her ambition to be a great leader.