Airline held liable for its chatbot giving passenger bad advice – what this means for travellers

September 10, 2024

When Air Canada’s chatbot gave a traveller incorrect information, the airline argued that the chatbot was “responsible for its own actions”.

Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised passenger Jake Moffatt a discount that wasn’t actually available, assuring him that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.
According to a civil resolution tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and refused to provide it. The airline also claimed the chatbot was a “separate legal entity that is responsible for its own actions”, and argued that Moffatt should have gone to the link the chatbot provided, where he would have seen the correct policy.
The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees. “It should be obvious to Air Canada that it is responsible for all the information on its website,” tribunal member Christopher Rivers wrote in his decision. “It makes no difference whether the information comes from a static page or a chatbot.” The BBC reached out to Air Canada for additional comment and will update this article if and when we receive a response.
Gabor Lukacs, president of the Nova Scotia-based consumer advocacy group Air Passenger Rights, told BBC Travel that the case is being seen as a landmark one, potentially setting a precedent for airlines and travel companies that increasingly rely on AI and chatbots for customer interactions: yes, companies are liable for what their tech says and does.
“It establishes a common sense principle: If you are handing over part of your business to AI, you are responsible for what it does,” Lukacs said. “What this decision confirms is that airlines cannot hide behind chatbots.”
Air Canada is hardly the only airline to dive head-first into AI – or to have a chatbot go off the rails. In 2018, a WestJet chatbot sent a passenger a link to a suicide prevention hotline for no obvious reason. This type of mistake, in which AI tools present inaccurate or nonsensical information, is known as “AI hallucination”. Beyond airlines, other major travel companies have embraced AI – ChatGPT specifically: in 2023, Expedia launched a ChatGPT plug-in to help with trip planning.
Lukacs expects the recent tribunal ruling to have broader implications for what airlines can get away with – and says it highlights the risks for businesses that lean too heavily on AI.


How air travellers can protect themselves
In the meantime, how can passengers guard against wrong information or “hallucinations” fed to them by AI? Should they be fact-checking everything a chatbot says? Experts say: yes and no.
“For passengers, the only lesson is that they cannot fully rely on the information provided by airline chatbots. But, it’s not really passengers’ responsibility to know that,” says Marisa Garcia, an aviation industry expert and senior contributor at Forbes. “Airlines will need to refine these tools further [and] make them far more reliable if they intend for them to ease the workload on human staff or ultimately replace human staff.”
Garcia expects that, over time, chatbots and their accuracy will improve, “but in the meantime airlines will need to ensure they put their customers first and make amends quickly when their chatbots get it wrong,” she says – rather than let the case get to small claims court and balloon into a PR disaster.
Travellers may want to consider the benefits of old-fashioned human help when trip-planning or navigating fares. “AI has advanced rapidly, but a regulatory framework for guiding the technology has yet to catch up,” said Erika Richter of the American Society of Travel Advisors. “Passengers need to be aware that when it comes to AI, the travel industry is building the plane as they’re flying it. We’re still far off from chatbots replacing the level of customer service required – and expected – for the travel industry.”
Globally, protections for airline passengers are not uniform: regulations and consumer protections vary from country to country. Lukacs notes that Canadian passenger regulations are particularly weak, while the UK, for example, has the Civil Aviation Authority and retained the passenger-rights rules of EU Regulation 261/2004.
“It’s important to understand that this is not simply about the airlines,” he said. Lukacs recommends passengers who fall victim to chatbot errors take their cases to small claims court. “They may not be perfect, but overall a passenger has a chance of getting a fair trial.”
