Dealing with algorithms - AI Interviews

05 May 2020

Code

image: Photo by Markus Spiske https://unsplash.com/photos/466ENaLuhLY

How do you successfully interact with an algorithm?

How do you make sure your point gets across while interacting with it?

I don’t know.

Algorithms are part of our daily lives but few of us have honed the skills to “talk” with them.

The advice to tailor a presentation to the audience is ubiquitous, and it is something even below-average speakers do naturally, without a second thought. Now, how do you adapt your speech while presenting to an algorithm? You get no cues as to whether your message is getting through. You do not have enough information about what the algorithm “cares” about. You do not know which data was used to train it.

Clearly you can learn, but it takes time and multiple interactions. And the algorithm can (and should) learn in the process as well, so the target is moving.

A good read on the topic, mainly about the use of algorithms in HR settings, is the article “My boss the algorithm: an ethical look at algorithms in the workplace”.

“…that algorithms can be used to improve recruitment quality and reduce bias, or they can be used to reduce recruitment time and cost and improve efficiency. While not entirely mutually exclusive, these 2 aims are often in tension and firms need to think carefully about why they are using such algorithms and what they hope to achieve before adopting them”.

Personally, I am quite happy with using algorithms for a first screening of candidates, provided the assessment criteria are adequately defined - it is easy to capture keywords and nearby terms algorithmically, but it is far more challenging to define, program and spot learnability, breadth, potential for growth, etc.
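To make the contrast concrete: keyword-and-proximity matching really is a few lines of code, while assessing learnability or growth potential is not. A minimal sketch of the easy part (the function name, scoring rule and keyword list are mine, purely illustrative - real ATS scoring is far more elaborate):

```python
import re

def keyword_score(resume_text, keywords, window=5):
    """Toy résumé screen: count keyword hits, plus a bonus when two
    different keywords appear within `window` words of each other."""
    words = re.findall(r"[a-z+#]+", resume_text.lower())
    hits = sum(1 for w in words if w in keywords)
    positions = [i for i, w in enumerate(words) if w in keywords]
    nearby = sum(
        1
        for a, b in zip(positions, positions[1:])
        if b - a <= window and words[a] != words[b]
    )
    return hits + nearby

keywords = {"python", "kubernetes", "ci"}
print(keyword_score("Built Python services deployed on Kubernetes with CI pipelines", keywords))
```

Spotting “potential for growth” in the same text admits no such ten-line solution, which is exactly the asymmetry the paragraph above describes.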

I am far more concerned and pessimistic about the use of AI in other areas, such as virtual interviews.

How fair is it to use algorithms to conduct remote video interviews with candidates? Is it fair to train a model on one portion of the population and then match it against another?

How fair is it to let AI analyze tone of voice and facial expressions? In many cases intonation is culturally driven. In many cases facial expressions are culturally driven AND peculiar to each of us, depending on a ton of factors (yes, I’m crying, my eyes are red and popping out of their orbits because of an allergy, not because I’m emotionally charged!) that IMHO are not accounted for by the algorithm. Furthermore, for many of us it is still quite unnatural to talk to a camera without a human counterpart, so being stiff, seeming lost in thought, or searching for the right word a bit longer than usual is normal and not a sign of weakness.

Contrary to what many vendors promote, I feel the process is unfair to candidates, asking a lot of them without the company providing much in return. Referring to the article quoted above, I feel the focus is more on reducing recruitment time and cost than on improving recruitment quality and the candidate experience.