Algorithmic bias: when AI discriminates against people

In 2018, Amazon discovered that its AI-powered resume screening system, developed to
automate candidate selection, was systematically discriminating against women. The
model had been trained on resumes of employees hired over the previous ten years —
predominantly men. The algorithm learned that male candidates were preferable and began
penalizing applications that mentioned the word “women’s” or came from all-women’s
colleges.
Amazon shut down the system. But the problem it revealed extends far beyond a single
company.

What is algorithmic bias?

Algorithmic bias occurs when an AI system produces systematically unfair or
discriminatory results for certain groups of people. The system isn’t malicious — but
human prejudice and historical inequalities in the training data get automatically
reproduced.
Garbage in, garbage out. If historical data reflects discrimination — and it almost always
does — the model learns and perpetuates that discrimination at scale.
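
To make this concrete, here is a minimal sketch in Python (entirely synthetic, hypothetical data) of that dynamic: a model trained on historically biased hiring decisions scores two identically qualified candidates differently, purely because of group membership.

```python
# Minimal sketch, synthetic data: a model trained on biased historical
# hiring labels reproduces the bias at scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # true qualification, identical distributions
# Historical decisions depended on skill AND on group membership (the bias):
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group:
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# The group-A candidate receives a markedly higher score for the same skill.
```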

Real examples that should concern you

Facial recognition: Studies, most notably the 2018 Gender Shades project, showed that
commercial facial recognition systems from major companies had error rates of up to
roughly 35% for darker-skinned women, versus under 1% for lighter-skinned men. In law
enforcement applications, this can lead to incorrect identifications with serious
consequences.
Credit and insurance: Credit approval algorithms reproduce historical inequalities.
Residents of neighborhoods that historically suffered financial exclusion may have
applications denied not because of their individual history, but because of where
they live.
Healthcare: A widely cited 2019 study in the US showed that an algorithm used to identify
patients needing additional care systematically underestimated the needs of Black
patients because it used historical healthcare cost as a proxy for need: Black patients
had historically received less care and therefore generated lower costs.
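
The mechanism is easy to reproduce. Below is a minimal sketch (purely synthetic numbers, not the study’s data, with an assumed hypothetical 40% cost gap at equal need) showing how ranking patients by a cost proxy instead of true need under-flags the group that historically received less care.

```python
# Minimal sketch, synthetic numbers (not the study's data): using cost as a
# proxy for need under-flags a group that historically received less care.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # 1 = historically underserved group
need = rng.gamma(2.0, 1.0, n)        # true medical need, identical for both groups
# Assumed 40% historical cost gap at equal need (a hypothetical figure):
cost = need * np.where(group == 1, 0.6, 1.0)

# "Flag the sickest 10%" by the cost proxy versus by true need:
by_cost = cost >= np.quantile(cost, 0.9)
by_need = need >= np.quantile(need, 0.9)
for g in (0, 1):
    print(f"group {g}: flagged by cost {by_cost[group == g].mean():.1%}, "
          f"truly in top-10% need {by_need[group == g].mean():.1%}")
# The underserved group is flagged far less often despite equal need.
```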

Why it’s so hard to detect

Because the systems are complex and opaque. Nobody can look at a deep learning
model and see exactly why it made a particular decision. And when the bias is subtle —
not “reject women” but “prefer candidates with a certain golf club as a hobby” — it can
take a very long time to notice.
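
As an illustration, here is a minimal sketch (hypothetical features, synthetic data) of that kind of subtle bias: the protected attribute is never given to the model, yet a correlated hobby feature quietly lets it back in.

```python
# Minimal sketch, synthetic data: the protected attribute is excluded,
# but a correlated hobby feature quietly encodes it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
gender = rng.integers(0, 2, n)                 # never shown to the model
# Hypothetical correlated feature: membership in a certain golf club.
golf_club = (rng.random(n) < np.where(gender == 0, 0.40, 0.05)).astype(int)
skill = rng.normal(0, 1, n)
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, golf_club]), hired)
print(dict(zip(["skill", "golf_club"], model.coef_[0].round(2))))
# The model puts real weight on the hobby: an innocuous-looking proxy for gender.
```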

What can be done?

More representative data: ensuring training data adequately includes all affected groups.
Regular auditing: testing systems for disparities in outcomes between groups (a minimal audit sketch follows this list).
Transparency: requiring companies to explain how their algorithms make decisions.
Regulation: the European Union, with the AI Act, is creating legal obligations for high-risk
systems.
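
As one possible starting point for such an audit, the sketch below (hypothetical model outputs, synthetic groups) computes a disparate impact ratio and checks it against the informal “four-fifths” rule of thumb used in US employment-discrimination practice.

```python
# Minimal sketch: a basic disparity audit using the "four-fifths" rule of thumb.
import numpy as np

def disparate_impact(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical outputs from a model under audit:
rng = np.random.default_rng(3)
group = rng.integers(0, 2, 1000)
selected = rng.random(1000) < np.where(group == 0, 0.30, 0.18)

print(f"disparate impact ratio: {disparate_impact(selected, group):.2f}")
# Values below 0.8 are a conventional signal that the disparity needs investigation.
```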
Algorithmic bias is a mirror of society. Fixing the algorithms without fixing the structures
that generated the biased data is treating symptoms without addressing the cause.
