You’re sitting in a plush reception area, waiting nervously for your job interview. Sophie, you are told, is now ready to see you. You walk into the meeting room and are greeted by a two-foot-tall robot. As she asks you her questions, she compares your answers against data from successful workers. If there’s a match, the job is yours.
Sci-fi?
Nope. Sophie, who looks like a cross between R2-D2 and a penguin, is a real product made by NEC Corp, and has been interviewing trial candidates since 2013. She is part of a growing wave of AI technology that is becoming increasingly commonplace in our day-to-day lives (just ask Siri). Using technology to aid recruitment is already widespread practice. But should robots ever be handed the reins entirely, and be put in charge of hiring decisions?
Robots aren’t biased
Human beings like to think that they are rational creatures. They are not. The biases shaping the job market are numerous and profound, ranging from the fact that physically attractive candidates are more likely to be hired (according to Harvard research) to evidence that candidates with foreign-sounding names are less likely to be invited to interview (according to DataColada).
Some of this prejudice is deliberate, and can be tackled through anti-discrimination regulation and changing social mores. Worryingly, however, quite a lot of bias is unconscious, the result of psychological tics which we could not control even if we wanted to. Unlike us, robots are not susceptible to the Halo Effect, where our first impression of someone determines our subsequent assessment of their abilities, or the Affect Heuristic, where hiring managers can be put off a candidate because they share a name with someone they dislike. Clearly, making the hiring process fairer and more meritocratic is an incredibly worthwhile aim. Score one for the robots.
Robots have no social intelligence
In the film I, Robot, Will Smith’s character explains his hatred of machines by recounting an accident in which a robot, calculating that his chance of survival was greater, rescued him while leaving a child to die. Referring to the girl’s statistical chance of survival, he says: “11% is more than enough. A human would have known that.”
Job interviews aren’t quite life and death, but the ethical problem of relying on data alone to make decisions remains. For example, many autistic people (who make up about 1.5% of the population, according to this CDC research) have fantastic skills which could benefit a workplace, but struggle with the sort of social conventions, such as maintaining eye contact, which are valued in an interview. Robots designed to measure such social cues and reject those who do not conform to the norm would not take mitigating circumstances into account the way a human would.
Robots can crunch big data
Did you know that, according to this article from The Economist, job applicants who use Google Chrome are better workers than those who use Internet Explorer? It’s true, but we only know about it because computers were able to crunch massive reams of data quickly and infer patterns that human beings couldn’t spot.
Not only could big data allow robots to shortlist candidates quickly and efficiently; it could also help them pick up on promising traits in candidates who initially appear a bad prospect. A talented person whose background meant they left education with poor results, for example, might be highlighted by a robot where a human recruiter would have dismissed them.
But there are problems with this data-driven approach. For one, correlation and causation are notoriously difficult to disentangle. The conclusion drawn from the internet browser finding is that people who can be bothered to deliberately install extra software on their computers are also the sort of people who can be bothered to go that extra mile in the workplace. Even if that’s true, it’s highly likely that there are some very dedicated workers out there who just happen to prefer Internet Explorer, and it seems incredibly unfair to dismiss them on a metric they didn’t realise they were being tested on.
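To see how easily a proxy metric like this can mislead, here is a minimal sketch in Python (with entirely hypothetical numbers, not taken from the Economist study). It simulates a hidden trait, conscientiousness, that drives both browser choice and job performance, so the two correlate without one causing the other:

```python
import random

random.seed(42)

# Hypothetical model: a hidden trait ("conscientiousness") drives both
# browser choice and job performance; neither causes the other.
candidates = []
for _ in range(10_000):
    conscientiousness = random.random()  # hidden trait, 0 to 1
    # More conscientious people are more likely to have installed Chrome...
    browser = "Chrome" if random.random() < conscientiousness else "Internet Explorer"
    # ...and are also more likely to perform well, with plenty of noise.
    performs_well = random.random() < 0.3 + 0.4 * conscientiousness
    candidates.append((browser, performs_well))

for browser in ("Chrome", "Internet Explorer"):
    group = [ok for b, ok in candidates if b == browser]
    print(f"{browser}: {sum(group) / len(group):.0%} of {len(group)} perform well")
```

In this toy simulation the Chrome group does perform better on average, yet screening on browser choice alone would still reject plenty of dedicated workers who simply prefer Internet Explorer.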
Robots can go wrong
One word: Skynet.
In all seriousness, the worry that clever robots pose a danger to humanity is not restricted to Hollywood – it has been articulated by scientific luminaries such as Stephen Hawking. Even putting apocalyptic destruction aside, software is notoriously liable to malfunctions, as anyone whose laptop has suddenly shut down in the middle of an important project can testify.
Errors aren’t the only concern regarding robots given sole hiring discretion – software is susceptible to hacking. The job market is competitive and good positions are both highly desirable and highly lucrative. Consequently, there is a clear criminal incentive to manipulate hiring software. Pessimistic, maybe, but certainly plausible.
Robots are creepy
The fact is that most people do not want to be hired by a robot. The process is impersonal, the machine is difficult to interact with, and we might justifiably question whether a company uninterested in meeting us before it hires us is actually invested in us at all.
Instead, robots are likely to continue their role as an aid to human recruiters, helping them shortlist candidates rather than picking hires. Yet as people become familiar with robots in the workplace, their use is likely to rise. As robot designer David Hanson points out, “People become used to the robots. The less startling they become, the more commonplace they get.”
So recruiters shouldn’t get too complacent. Their robotic counterparts may have been outsmarted for now, but they’ll be back.
About the author: Beth Leslie is a careers advice writer for InspiringInterns, a graduate recruitment agency which specialises in finding candidates their dream internship.