
The Difference Engine

What does it mean when a recruitment AI shows prejudice? Amazon has scrapped its recruiter robot after it taught itself to make sexist hiring choices.

The ASA upholds a complaint against a misleading job advert. Are your listings compliant?


Amazon ditches prejudiced recruitment AI

In a time of equality, diversity, and workers’ rights, artificial intelligence is often hailed as the answer to disputes over fairness. The recruitment industry can barely go a month without a new think-piece tackling unconscious bias and unfair hiring practices. These articles usually conclude that unconscious biases are impossible to avoid – and that recruitment AI will herald a new era of equality at work.

But what happens when your robot recruiter is an obvious sexist?

This was the awkward position that global retailer Amazon found themselves in when they began developing a recruitment AI back in 2014.

The project – praised as a “holy grail” of recruitment – aimed to sift resumes and produce an in-house shortlist of top candidates. One commentator described the aim: feed in high volumes of resumes and “it will spit out the top five. We’ll hire those.”

Instead, the software taught itself to screen out candidates who shared common traits with previous unsuccessful applicants. The machine then learned from its own results and reinforced its pre-existing rules as time went on. Uncomfortably, these rules often focused on traits that singled out women. Eventually, this led to candidates who listed attendance at female-only universities, or activities like “women’s soccer”, being removed from shortlists.

When Amazon discovered this, they ended the project. A spokesperson said that the recruitment automation software was “never used by Amazon recruiters to evaluate candidates.”

A problem with programming

The Amazon process also failed to weight individual traits and qualifications. This meant that wholly unsuitable candidates were often shortlisted, simply because they shared extracurricular activities with previous success stories.
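
As a simplified illustration (a hypothetical sketch, not Amazon’s actual method – the traits and profiles below are invented), consider a matcher that counts every shared trait equally. A candidate with the right hobbies can score as well as one with the right qualifications:

    # Hypothetical unweighted matcher - every shared trait counts the same,
    # so extracurriculars can offset missing core skills.
    ideal_profile = {"java", "aws", "distributed systems", "rowing", "chess club"}

    def naive_match(candidate_traits):
        """Count traits shared with previous successful hires, all weighted equally."""
        return len(candidate_traits & ideal_profile)

    qualified = {"java", "aws", "distributed systems"}
    hobbyist = {"rowing", "chess club", "photography", "aws"}

    print(naive_match(qualified))  # 3
    print(naive_match(hobbyist))   # 3 - a dead heat, despite the missing skills

Without weighting, the rower and the systems engineer are indistinguishable to the machine.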

Human recruiters might welcome the Amazon revelations, after years of being told we’d all be replaced by machines. But the episode does not expose inherent flaws in the concept of automated recruitment software.

The principle of “garbage in, garbage out” – flawed input yields flawed output – is fundamental to machine learning. The algorithms informing the AI’s decision-making produced prejudiced results: a sort of bias-by-design, if you like. The machine learned its rules from data that humans had supplied. Where that data contained flaws – or went untested – the system generated errors, and it then learned from those errors over time.
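
To make that loop concrete, here is a deliberately tiny Python sketch (the resumes, keywords, and scoring rule are all invented for illustration – this is not Amazon’s system). A scorer trained on historically skewed hiring outcomes ends up assigning a negative weight to gendered terms, even though gender is never an explicit input:

    # Toy illustration of "garbage in, garbage out" in a resume scorer.
    # Historical labels: 1 = hired, 0 = rejected. Because past hires skewed
    # male, terms correlated with women appear mostly among the rejections.
    history = [
        ({"java", "aws", "chess club"}, 1),
        ({"java", "python", "rowing"}, 1),
        ({"python", "aws", "women's chess club"}, 0),
        ({"java", "women's soccer"}, 0),
        ({"python", "aws", "rowing"}, 1),
    ]

    # "Training": weight each term by how far its hire rate sits from the average.
    overall_rate = sum(label for _, label in history) / len(history)
    weights = {}
    for term in {t for traits, _ in history for t in traits}:
        outcomes = [label for traits, label in history if term in traits]
        weights[term] = sum(outcomes) / len(outcomes) - overall_rate

    def score(resume_traits):
        """Sum the learned weights for each term on a new resume."""
        return sum(weights.get(t, 0.0) for t in resume_traits)

    print(round(score({"java", "aws", "rowing"}), 2))          # 0.53
    print(round(score({"java", "aws", "women's soccer"}), 2))  # -0.47

Retrain the same scorer on the shortlists it produces, and the penalty only deepens – the feedback loop Amazon’s team ran into.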

Teaching a machine to match prospects to previous success stories is simple. Teaching it to learn which factors are significant and which are superficial is proving a more arduous task. Nihar Shah, who teaches machine learning at Carnegie Mellon University, says the self-teaching machine is still a pipe dream – for now. “How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable – that’s still quite far off.”

But the Amazon incident provides us with two important market insights. Firstly: human oversight remains indispensable in today’s recruitment processes. Secondly: big business is already investing time and money in finding ways to replace traditional recruiters.


ASA: inaccurate listings could cost you

How accurate are your job listings? Are you ever tempted to polish up a less-than-appealing position? Then you might have to change your approach to job listings to stay on the right side of the rules.

The Advertising Standards Authority (ASA) has upheld a complaint against recruitment agency Ashton James for misleading its candidates.

The case involved a misleading communication sent directly to candidates – not one that was publicly listed. The message, titled “Job Application Request”, was sent out by Ashton James on February 20th, 2018, and presented what appeared to be a job vacancy. In fact, the placement was a position on a training course – not an offer of employment. One recipient challenged the content of the message on learning that there was no paid position on offer. The offending part of the message appears below.

‘It’s been a while since we last connected so I hope you have been keeping well.

The job below has just come in and matches the skills and experience of the latest CV we have on file for you. If it’s something you are interested in please click the job title to apply directly through the website: Trainee Software Developer.’

The ASA ruled that the agency had misled its candidates with the email, and instructed the firm to ensure it did not use misleading practices in future. Ashton James did not comment on the judgement.

Proper scrutiny for recruitment practices

The ASA ruling underlines the degree of openness and accuracy expected of recruiters when communicating with candidates. The rules apply both to publicly listed adverts and to direct mail. Due care and attention must be paid to ensure that titles are appropriate, that content is clear and not open to misinterpretation, and that positions are listed for what they really are.

In a competitive jobs market, the fight for skills can, at times, get heated. But it is helpful to be reminded that cutting corners costs in the long run.
