Building fairness into AI from the ground up

By Professor Tim Baldwin, School of Computing and Information Systems

In recent years, it has become widely accepted that many of our technologies are biased. Or, to be more accurate, that the AI models powering them are.

There have been some particularly egregious instances of AI-fuelled bias. Amazon’s sexist recruitment program made headlines around the world, and an algorithm widely used in American hospitals has been found to systemically discriminate against black people.

I had my own ‘road to Damascus’ moment in 2017, when a paper came out showing a widely-used AI library I had developed was systematically biased.

Stanford researchers analysed our language identification model, which is used, for example, by web browsers to determine when to translate a webpage written in a language the user doesn’t speak into their preferred language. They found a strong positive correlation between the accuracy of the model on a given variety of English and the Human Development Index of the country in which that variety is spoken.

That is, the model performed much better for American and Australian English, say, than Nigerian or Indian English.
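
To give a sense of what that kind of audit involves, here is a minimal sketch in Python (not the Stanford team’s actual code) of the core calculation: given an accuracy score for each variety of English and the HDI of the country it is associated with, it measures how strongly the two are correlated. Both dictionaries are supplied by whoever runs the audit; no real figures are hardcoded here.

```python
from scipy.stats import pearsonr

def accuracy_hdi_correlation(accuracy_by_variety, hdi_by_variety):
    """Correlate per-variety model accuracy with the Human Development Index
    of the country each variety of English is associated with.

    Both arguments are dicts keyed by variety (e.g. a country or locale code);
    the keys and values come from the auditor -- nothing is hardcoded.
    """
    varieties = sorted(set(accuracy_by_variety) & set(hdi_by_variety))
    accuracies = [accuracy_by_variety[v] for v in varieties]
    hdis = [hdi_by_variety[v] for v in varieties]
    # A strongly positive r means the model serves high-HDI varieties best.
    r, p_value = pearsonr(hdis, accuracies)
    return r, p_value
```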

It’s not uncommon for AI to be biased. In fact, any non-trivial AI model is almost guaranteed to be biased to some degree. But learning about large-scale bias in my own model was a confronting experience. It prompted me to start thinking more deeply about how biases emerge in AI, particularly in my area of natural language processing, and how we can overcome them.

The promise of AI is that it will make people’s lives better by accelerating workflows and enhancing decision-making. And for people in the ‘have’ camp, AI is increasingly delivering on that promise, in areas ranging from financial services to healthcare.

But the unfortunate truth about AI in its current form is that, too often, it makes existing social disparities worse, and its beneficiaries are almost always the privileged. Most models work best for middle-aged white heterosexual men, and if you deviate on any dimension from that over-represented demographic, they won’t work as well for you.

That’s because white men are almost always overrepresented in the data AI models are trained on, and also because these models often bake in prejudices that already exist in society. So we end up with facial recognition technology working better for white men than for women and minorities, or criminal justice applications that mislabel African American defendants as ‘high risk’ twice as often as they do white defendants.

Some people would argue that we should accept these biased models because, after all, they are simply reflections of the ‘real-world’, and humans would be no less biased. But that argument is unacceptable; not just because discrimination itself is unacceptable, but also because AI models don’t just reflect existing prejudices – they amplify them.

For example, we have been working with a predictive model trained on self-written professional profiles, which has learned that ‘if in doubt’ a medically-related profile written by a man should be assumed to be a doctor, and one written by a woman should be assumed to be a nurse.
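
One way to make that kind of amplification measurable rather than anecdotal is to compare error rates across groups. The sketch below is purely illustrative: it assumes you already have gold occupation labels, model predictions and a gender attribute for each profile, and it reports, for each group, how often a genuine doctor is mislabelled as a nurse.

```python
from collections import Counter

def doctor_to_nurse_error_rate(gold_labels, predictions, genders):
    """For each gender group, the fraction of true 'doctor' profiles that the
    model labels 'nurse'. A large gap between groups is the amplified
    stereotype described above.

    The three arguments are parallel lists supplied by the caller; none of
    this is real data from the project.
    """
    doctors = Counter()   # count of true doctors per gender group
    flipped = Counter()   # true doctors predicted as nurses, per gender group
    for gold, pred, gender in zip(gold_labels, predictions, genders):
        if gold == "doctor":
            doctors[gender] += 1
            if pred == "nurse":
                flipped[gender] += 1
    return {g: flipped[g] / doctors[g] for g in doctors if doctors[g]}
```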

The good news is that we don’t have to roll over and accept these biases. We can, instead, create new, fairer models, often without sacrificing gross accuracy. For example, in a project I am working on with legal services charity Justice Connect, we are developing a model to process online requests for legal help in a way that is as fair and unbiased as possible.

The charity receives tens of thousands of requests for legal assistance every year, but the current manual process can leave at-risk people waiting too long for a response. AI can speed parts of the process up, but it needs to be employed with care to avoid inadvertently making the system biased.

We are developing a natural language processing solution that can understand the language people use to describe their legal problem, and suggest the appropriate areas of law for assistance. This can then be presented to a human (usually a lawyer or an intern) for review.

In doing so, we are training our models on data gathered through the charity’s online intake tool. This presents challenges as the tool collects minimal demographic information (a person’s sexuality, for example, is not usually relevant to their request for legal help). This means we have had to work out other ways to ensure fairness across dimensions where we don’t have the data.

For example, if an enquiry relates to migration law we can reasonably assume the person wasn’t born in Australia and adjust the model accordingly. In other cases, the model may predict that its output is at risk of being biased, and flag that for human assessment.
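
A simplified version of that ‘flag for human assessment’ step is sketched below. The classifier interface, labels and threshold are placeholders rather than the system we are building with Justice Connect; the point is simply that every prediction comes back with a route, either to an automatic suggestion or to a human reviewer when the model is not confident enough to be trusted.

```python
def route_request(classifier, request_text, review_threshold=0.7):
    """Suggest an area of law for a help request, or flag it for human review.

    `classifier` is assumed to be a scikit-learn-style text classification
    pipeline exposing predict_proba and classes_; it, the labels and the
    threshold are placeholders, not the production system.
    """
    probs = classifier.predict_proba([request_text])[0]
    best = probs.argmax()
    suggestion = classifier.classes_[best]
    confidence = float(probs[best])

    # Low-confidence predictions go to a person instead of being acted on.
    route = "human_review" if confidence < review_threshold else "auto_suggest"
    return {"route": route, "suggested_area": suggestion, "confidence": confidence}
```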

We know our Justice Connect model still needs more work for some groups of people, particularly the elderly and Indigenous peoples. But building fairness in from the ground up means we will hopefully avoid the biases we see in other AI models, which were built on the naïve assumption that the model wouldn’t discriminate.

For anyone else building an AI model from scratch, my advice is this. Don’t assume (as the field has in the past) that if you’re not requesting race or gender or any other kind of demographic information, your model won’t discriminate along those lines. The model will learn all sorts of correlations that, more often than not, will lead to biases, most likely producing a situation that benefits the privileged and penalises those who are less well-off.

Fairness in AI doesn’t happen by accident, and it can’t be achieved simply by hiding demographic attributes from the model. Developers need to start from the assumption that biases will arise; this requires them to pay very careful attention to their training datasets, and to audit new models for relative fairness.

Fortunately, they don’t have to do so on their own. More and more toolkits are being released to help with this, such as Fairlearn and AllenNLP, and the literature offers increasingly sophisticated recommendations for building fair models even with partially labelled datasets.
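
As an example of what that kind of audit can look like with Fairlearn, the sketch below uses its MetricFrame to break a model’s accuracy down by demographic group and report the gap between the best- and worst-served groups. The inputs are whatever labels, predictions and sensitive attributes you have for your own model; nothing here is specific to ours.

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

def audit_accuracy_by_group(y_true, y_pred, sensitive_attribute):
    """Report overall accuracy, accuracy per demographic group, and the gap
    between the best- and worst-performing groups."""
    frame = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sensitive_attribute,
    )
    return {
        "overall": frame.overall,
        "by_group": frame.by_group.to_dict(),
        "gap": frame.difference(),   # max minus min accuracy across groups
    }
```

The same MetricFrame pattern works with any per-group metric, not just accuracy, so false negative rates or selection rates can be audited in exactly the same way.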

If you are interested in exploring this and other issues in AI further, join us for our free Harnessing the Power of AI to Transform your Business webinar on Tuesday 17 August at 7pm.
