
When an AI algorithm is labeled 'female,' people are more likely to exploit it

(Image: a man working with a robot at a table. Image credit: Feodora Chiosea/Getty Images)

People are more likely to exploit AI partners labeled female than those labeled male, showing that gender-based discrimination reaches beyond human-to-human interactions.

A recent study, published Nov. 2 in the journal iScience, examined how people’s willingness to cooperate changed when human or AI partners were labeled female, male or nonbinary, or given no gender label at all.

Researchers asked participants to play a well-known game-theory scenario called the “Prisoner’s Dilemma,” in which two players each choose either to cooperate with the other or to act alone. If both cooperate, they get the best combined outcome.

But if one player cooperates and the other does not, the one who defected scores higher, creating an incentive for one player to “exploit” the other. If neither cooperates, both players score low.
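To make that incentive structure concrete, here is a minimal Python sketch of a one-shot Prisoner’s Dilemma payoff table. The point values are illustrative assumptions chosen only to reproduce the ordering described above; the article does not give the study’s actual scoring.

```python
# Minimal one-shot Prisoner's Dilemma sketch.
# Payoff values are illustrative assumptions, not the study's scoring.
PAYOFFS = {
    # (my_move, partner_move): (my_score, partner_score)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best combined outcome
    ("cooperate", "defect"):    (0, 5),  # I cooperate, partner exploits me
    ("defect",    "cooperate"): (5, 0),  # I exploit my cooperating partner
    ("defect",    "defect"):    (1, 1),  # mutual defection: both score low
}

def play(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return (my_score, partner_score) for one round."""
    return PAYOFFS[(my_move, partner_move)]

print(play("defect", "cooperate"))  # (5, 0): the temptation to exploit
```

With these assumed values, defecting always pays more individually (5 versus 3 against a cooperator, 1 versus 0 against a defector), which is exactly the temptation the article describes.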

People were about 10% more likely to exploit an AI partner than a human one, the study showed. It also revealed that participants were more likely to cooperate with female, nonbinary, and no-gender partners than with male partners, because they expected those players to cooperate in return.

People were less likely to cooperate with male partners because they didn’t trust them to choose cooperation, the study found. This was especially true of female participants, who were more likely to cooperate with agents labeled female than with those labeled male, a pattern known as "homophily."

"Observed biases in human interactions with AI agents are likely to impact their design, for example, to maximize people’s engagement and build trust in their interactions with automated systems," the researchers said in the study. "Designers of these systems need to be aware of unwelcome biases in human interactions and actively work toward mitigating them in the design of interactive AI agents."

The risks of anthropomorphizing AI agents

When participants didn’t cooperate, it was for one of two reasons. Either they expected the other player to defect and didn’t want to be left with the lowest score, or they expected the other player to cooperate and defected anyway to score higher at that player’s expense. The researchers defined the second behavior as exploitation.
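To illustrate that distinction, the sketch below labels a defection by what the defector expected their partner to do. The function and label names are hypothetical, used here for illustration only; the study’s actual coding scheme may differ.

```python
def classify_defection(expected_partner_move: str) -> str:
    """Label a defection by the defector's expectation of the partner.

    Hypothetical helper for illustration; not taken from the study.
    """
    if expected_partner_move == "defect":
        # Defected defensively, to avoid being the lone cooperator
        # stuck with the lowest score.
        return "fear"
    # Expected cooperation but defected anyway, scoring higher at
    # the partner's expense.
    return "exploitation"

print(classify_defection("cooperate"))  # exploitation
print(classify_defection("defect"))     # fear
```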

Participants were more likely to "exploit" partners labeled female, nonbinary, or no-gender than partners labeled male, and the likelihood of exploitation increased when the partner was an AI. Men were more likely to exploit their partners and more likely to cooperate with human partners than with AI ones. Women were more likely to cooperate than men and did not discriminate between human and AI partners.

The study did not have enough participants identifying as any gender other than female or male to draw conclusions about how other genders interact with gendered human and AI partners.

According to the study, more and more AI tools are being anthropomorphized (given human-like characteristics such as genders and names) to encourage people to trust and engage with them.

Anthropomorphizing AI without considering how gender-based discrimination shapes people’s interactions could, however, reinforce existing biases and make discrimination worse.

While many of today’s AI systems are online chatbots, in the near future, people could be routinely sharing the road with self-driving cars or having AI manage their work schedules. This means we may have to cooperate with AI in the same way that we are currently expected to cooperate with other humans, making awareness of AI gender bias even more critical.

"While displaying discriminatory attitudes toward gendered AI agents may not represent a major ethical challenge in and of itself, it could foster harmful habits and exacerbate existing gender-based discrimination within our societies," the researchers added.

"By understanding the underlying patterns of bias and user perceptions, designers can work toward creating effective, trustworthy AI systems capable of meeting their users’ needs while promoting and preserving positive societal values such as fairness and justice."

Damien Pine
Live Science contributor

Damien Pine (he/him) is a freelance writer, artist, and former NASA engineer. He writes about science, physics, tech, art, and other topics with a focus on making complicated ideas accessible. He has a degree in mechanical engineering from the University of Connecticut, and he gets really excited every time he sees a cat.
