Generative AI, Self-Sovereign Identity (SSI), and blockchain will have the biggest impact on global business travel, posits a new white paper by CWT and BTN, Emerging Technologies in Corporate Travel. The report gleans from tech experts, global CEOs, travel managers, and industry consultants the leading innovations set to change traveler experience, cost control, and service delivery for corporate buyers and travelers, while outlining the benefits, opportunities, and risks each application presents.
Gen AI is hailed by some as a panacea for efficiency, transparency, and personalized service. Tools like Google Genesis and ChatGPT are transforming travel management by integrating disparate data strands, such as traveler preferences, corporate guidelines, and trip details, to deliver more personalized experiences than ever before.
Developments in automation and data amalgamation could even lead to the Gen AI agent of the future, one able to turn a wish list into a bookable itinerary, optimizing for price, loyalty, cancellation policies, perks, and conditions.
Like the internet in the 1990s, now is a defining moment for tech and a chance to reinvent how we do things in all areas of life. But as with any flashy new build, foundations matter.
“Algorithmic bias in AI-powered decision-making poses a significant threat to various aspects of our working and traveling lives,” says Matthew Newton, CWT’s VP IT architecture, who is responsible for developing customer-centric digital strategies for clients in 139 countries. “Tackling bias has become as important to me as modernizing core systems and enhancing differentiation. The consequences of ignoring the potential for bias are vast, creating disparities in budget allocation and reducing employee well-being, leaving some employees feeling marginalized.”
We ask Newton to explain algorithmic bias in the context of corporate travel and how organizations can work to prevent it:
Playing favorites: How does algorithmic bias present in business travel?
To tackle algorithmic bias effectively, it's vital to first recognize the different types and then understand underlying causes.
Implicit bias leads to preferential treatment or discrimination in recommendations based on gender, ethnicity, or age. Statistical bias favors or disadvantages demographics based on historical data patterns, resulting in unequal access to travel choices. Training data bias reflects existing biases in past records, which can limit options for individuals from certain regions.
These biases mirror societal discrimination, such as speech recognition systems being more accurate for male voices than female voices, with even more disparities across ethnicities.
I recently explored the complexities of algorithmic bias with the teams responsible for CWT’s Intelligent Display which uses machine learning to recommend relevant and policy-compliant hotels to travelers. We started with balanced, contextually relevant training data to minimize bias by curating diverse datasets and excluding sensitive attributes like age, gender, ethnicity, and socioeconomic status.
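The attribute-exclusion step Newton describes can be sketched in a few lines. This is an illustrative example only, not CWT's actual pipeline; the field names and records are hypothetical.

```python
# Hypothetical pre-processing step: strip sensitive attributes from training
# records before they reach a recommendation model. Field names are invented
# for illustration; CWT's real schema is not public.

SENSITIVE_ATTRIBUTES = {"age", "gender", "ethnicity", "socioeconomic_status"}

def strip_sensitive(record: dict) -> dict:
    """Return a copy of the record without attributes that should not
    influence hotel recommendations."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

raw = {
    "traveler_id": "t-001",
    "gender": "F",
    "age": 34,
    "preferred_chain": "ExampleHotels",
    "policy_tier": "standard",
}

clean = strip_sensitive(raw)
# clean now carries only traveler_id, preferred_chain, and policy_tier
```

One caveat worth noting: dropping sensitive columns alone does not remove bias, because remaining fields (a home neighborhood, for instance) can act as proxies for the excluded attributes, which is why the curation of diverse, balanced datasets matters alongside exclusion.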
An algorithm like the one that powers Intelligent Display may favor hotels in certain high-end neighborhoods, assuming they are safer or more comfortable, or flights with tight layovers, for efficiency. However, if the algorithm fails to consider the requirements of employees with additional needs or those traveling with families, travelers might incur higher out-of-pocket expenses, feel stressed, and miss connections, leading to out-of-program travel bookings. The program’s success hinges on fair and inclusive decision-making, which biased algorithms can undermine completely.
Bias applies to other systems too, like finance. A company’s AI-driven expense management system may approve higher travel budgets for senior executives more readily than for junior staff. Implicit biases — assuming senior roles require more expensive accommodation, for example — can lead to disparities. Some departments may consistently receive larger budgets, affecting overall cost management strategies.
Computer says ‘yes’: Fostering equitable travel with data
Implementing ethical guidelines and conducting regular audits of AI use are the building blocks of equitable outcomes. Just as important are collaboration with diverse stakeholders and ongoing professional training from organizations like Partnership on AI, AI Now, and the European Commission, which offer algorithmic accountability templates and cross-collaboration frameworks. Additionally, early intervention using bias analyzers, such as those offered by PwC, can help to identify and mitigate hidden biases.
A multi-faceted, multi-stakeholder approach underpinned by ongoing industry dialogue can ensure ethical AI design and evaluation and foster responsible AI use.
Ignoring bias until it becomes untameable risks undermining diversity, equity, and inclusion. By prioritizing fairness and training our algorithms accordingly, we can develop AI-powered travel systems that include everyone.