Humanity In; Bias Out
Technology Officers


How Diversity in the Workforce Can Remove Human Biases from AI

As the use of artificial intelligence expands in everything from hiring to health care, a pivotal question has emerged: How do we ensure these advanced algorithms generate unbiased, equitable results and decisions?

That was the subject of discussion at the recent CogX Festival panel, “Bias in, bias out: Getting the next 10 years of speaking to machines right.” Sven Petersen, who leads the Technology Officers Practice at Egon Zehnder, moderated the discussion on the topic, which he called “controversial” and “intense”—and also crucial for business leaders in all sectors to grapple with.

“Every human exhibits bias,” panelist Ryan Sweeney, vice president of business development at Rev, noted. He used a call center as an example, in which thousands of agents each bring their own biases to a complex yet diverse process—callers can dial in for assistance, and if they don’t get that assistance because of a language barrier or some other factor, they can call again to talk to a different agent. Sweeney then took the analogy a step further: Introducing an algorithm to replace these thousands of call center agents could be a cost-effective change. “But if you’re not careful, it won’t have that glorious diversity that we had in the previous system,” he said. Thousands of independently biased agents would be replaced by whatever single bias is in the algorithm, applied at scale.

Such complications have become evident in case after case of biases emerging in AI decisions, from racial profiling in criminal justice algorithms to AI recruiting tools that gave men an edge over women. Leaders must be aware of such unintentional hazards when adopting AI tools, but to guard against them, panelist Nayur Khan of McKinsey said, organizations must first agree on what an optimal AI would look like—in other words, on a common definition of “fair.”

“If I ask different individuals, I’ll get different answers—why is that?” asked Khan. “It’s because everyone’s got their own unconscious biases.” The concept of “fair” also cuts differently across cultures and geographies. Without a clear definition, Khan warned, it’s hard to anchor results.

One way to confront that issue is through a key idea: diversity. Panelist Suzanne Brink, IO psychology consultant at HireVue, explained her company’s efforts to build a bias-free AI for hiring and interviewing scenarios. “It starts with diverse data,” said Brink. Her company’s hiring data comes from all over the world and across ethnicities and providers. “But then we have to be realistic and not just assume everything’s going to be hunky-dory…We often do find some sub-group differences and then we go back in to mitigate that.”

Diversity in data is one step, but organizations must also have diverse teams to interpret that data and ensure the resulting AI processes are free of bias. Nayur Khan brought up the recurring issue of racial bias in AI-based health care. “We’ve seen Black patients discriminated against in algorithms at a huge scale,” he said. “That’s because no one ever properly looked at the data—there was a lack of diversity or understanding.”

Khan continued with the example of racial bias in facial recognition technology—another recurring issue in AI, yet one that a diverse team can address by recognizing the blind spot when working with the technology. “When you have diverse teams, you spot things very quickly,” he said.

In the end, leaders must understand that while artificial intelligence and its applications are revolutionary technologies, they cannot magically escape the flaws of their creators—human beings. As the builders, marketers, and users of these applications, we must be able to understand our own flaws and find ways to avoid replicating them in AI, whether through grappling with ideals of true fairness or following through on the values of diversity and inclusion in both data and people.

“AI, in its imperfections, is mirroring our own imperfections,” Egon Zehnder’s Sven Petersen remarked. “As long as we look at AI to tell us what to do, we are also abdicating the very responsibility we need to acquire to work on ourselves.”
