The Big Issue: Does AI Mitigate or Reinforce Bias?
The most pressing question surrounding AI in recruitment is whether it genuinely promotes fairness or simply reinforces existing inequalities.
AI systems learn from historical data. If that data reflects societal or organisational bias, the algorithm may replicate or even amplify those patterns.
One widely cited example outside recruitment is the COMPAS algorithm, which was found to misclassify Black defendants as high risk far more often than white defendants. While this case relates to criminal justice rather than hiring, it illustrates a critical point: algorithms are not neutral simply because they are automated [ProPublica, 2016].
Recruitment provides its own cautionary example. Amazon famously discontinued an internal AI hiring tool after discovering it systematically disadvantaged female candidates. Because the algorithm was trained on historical hiring data from a predominantly male workforce, it learned to treat male-dominated profiles as indicators of success, reportedly downgrading CVs that contained the word "women's", and so reinforced imbalance rather than corrected it [Reuters, 2018].
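To make this mechanism concrete, here is a minimal sketch in Python, assuming numpy and scikit-learn are available. Everything in it is synthetic and hypothetical: the data, the feature names and the weights are illustrations, not Amazon's actual system or any real screening tool. It shows how a model trained on biased historical decisions learns to reward a proxy for gender even when gender itself is excluded from the inputs.

```python
# Illustrative sketch only: synthetic data, not Amazon's actual tool or any
# real screening system. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a genuine skill score, plus a proxy feature
# (think "member of a male-dominated society") that leaks gender, not skill.
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male (hypothetical)
skill = rng.normal(0.0, 1.0, n)         # the only job-relevant signal
proxy = gender + rng.normal(0.0, 0.5, n)

# Historical hiring decisions were biased: men were favoured regardless of
# skill, so the training labels encode gender as well as merit.
hired = (skill + 1.5 * gender + rng.normal(0.0, 1.0, n)) > 1.0

# The model never sees gender directly, only skill and the proxy, yet it
# learns a large positive weight on the proxy from the biased labels.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
print("learned weights [skill, proxy]:", model.coef_[0])

# The result: systematically higher scores for male candidates.
scores = model.predict_proba(X)[:, 1]
print("mean score, female candidates:", scores[gender == 0].mean())
print("mean score, male candidates:  ", scores[gender == 1].mean())
```

Removing the protected attribute is not enough: as long as the labels carry historical bias, the model will find whatever correlated feature lets it reproduce that bias.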
These cases highlight an uncomfortable truth: AI reflects the data it is trained on. Without careful oversight, transparency and regular auditing, automated systems can entrench bias under the appearance of objectivity. The EU AI Act now tackles this head-on, classifying recruitment tools as "high-risk" and mandating exactly these safeguards from August 2026 [European Commission, 2026; Yarrow, 2025].
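What might such an audit involve in practice? One common check, sketched below, simply compares selection rates between groups. The 0.8 threshold is the US "four-fifths" heuristic, used here purely as an illustration; the EU AI Act does not prescribe a specific metric, and the numbers are hypothetical.

```python
# Minimal sketch of one routine fairness audit: comparing selection rates
# between groups. The 0.8 threshold is the US "four-fifths" heuristic,
# shown purely as an illustration; the EU AI Act does not prescribe a
# specific metric, and all the numbers below are hypothetical.
def selection_rate_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcome: 90 of 300 women and 150 of 300 men
# advanced past an automated CV screen.
ratio = selection_rate_ratio(90, 300, 150, 300)
print(f"selection-rate ratio: {ratio:.2f}")  # 0.60, well below 0.8
if ratio < 0.8:
    print("Potential adverse impact: audit the screening model.")
```

The point is not the specific threshold but the habit: checks like this have to run continuously, because a model that passes an audit at deployment can drift as the data behind it changes.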
Candidates are right to ask: how can fairness be guaranteed in opaque systems?