An artificial intelligence will let them know

The selection and hiring of personnel by large companies is one of the many areas in which artificial intelligence has found growing space and new methods of application. The measures introduced by many countries to limit the COVID-19 epidemic, which increased the volume of remote work, have accelerated this trend, providing companies with tools to select and evaluate candidates more quickly and efficiently, often in numbers higher than previous averages. Alongside the spread of artificial intelligence in personnel selection, an extensive debate has developed on the effects, benefits and risks of these increasingly widespread procedures, which in turn is part of a broader debate on the limits of technology, understood both as the current perfectibility of those tools and as ethical boundaries to be identified and, where necessary, drawn in everyday practice.

In an article for the BBC, the journalist Andrea Murad recounted her direct and recent experience of submitting a job application. “Frankly, it was a bit stressful knowing that my application was being evaluated by a computer and not a human being,” she explained. As part of the online selection process, Murad had to complete a series of games: counting all the dots in two boxes, for example, or matching the expressions of a series of human faces to their emotions. It was a 25-minute test with 12 games designed to measure emotional and cognitive skills. The program then assessed the candidate's personality and, Murad points out, did so without any direct human intervention in that first phase. Since then Murad has not been contacted by the company to continue the interview process.

Pymetrics and HireVue
The software used to evaluate Murad's job application is called Pymetrics and is developed by a New York company. As the software's presentation page, available in 20 languages, puts it, it is a tool made to “collect unbiased data” and to “measure potential, not pedigree.” In the stages leading up to the interview with a human resources manager, it is used, among others, by large multinational companies such as the bank JP Morgan, McDonald's, the consulting firm PricewaterhouseCoopers and the food group Kraft Heinz. The goal of Pymetrics is to base initial assessments not on candidates' curricula vitae but on “objective behavioral data,” following a prior customized calibration of the algorithm by the employer, based on the needs and priorities of the company.

According to founder Frida Polli, Pymetrics was developed to help companies expand their pool of candidates and collect “signals” indicating whether a certain person could be successful in a given job. The use of artificial intelligence systems in this field should, from her point of view, improve the interaction between companies and candidates, to the benefit of both parties. “Everyone wants the right job and all companies want to hire the right people: it doesn't pay off for anyone when the match fails,” Polli said.

The decision whether to continue or halt the selection process at the end of a first evaluation phase entirely managed by software clearly rests with the company. This also applies to another widely used recruiting software, HireVue, developed by a company based in South Jordan, near Salt Lake City, Utah. It is used, among others, by the Dutch-British multinational Unilever.

The main function of HireVue, available since 2016, is to collect video recordings of initial interviews with candidates, who are asked to answer a series of standard questions by appointment via laptop, tablet or smartphone. The audio of the interview is then automatically transcribed and analyzed by the system according to criteria previously approved by the employer: it searches for keywords such as “I” instead of “we” in answers to questions about teamwork, for example. The assessment is also based on other criteria, such as the frequency of eye contact with the camera and facial expressions during the interview.
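HireVue's actual scoring criteria are not public; as a purely hypothetical illustration, the kind of keyword tally described above (counting “we” versus “I” in a teamwork answer) could be sketched like this. The function name and word lists are assumptions for the example, not details from the article.

```python
import re

def teamwork_pronoun_score(transcript: str) -> float:
    """Share of first-person-plural pronouns ("we", "us", "our") among
    all first-person pronouns in an interview answer. A crude stand-in
    for the keyword checks the article describes, not a real criterion."""
    words = re.findall(r"[a-z']+", transcript.lower())
    singular = sum(w in {"i", "me", "my", "mine"} for w in words)
    plural = sum(w in {"we", "us", "our", "ours"} for w in words)
    total = singular + plural
    return plural / total if total else 0.0

answer = "We split the work and I reviewed what we shipped together."
score = teamwork_pronoun_score(answer)
```

Even this toy version shows why such signals are debated: the ratio depends on phrasing habits and transcription quality as much as on any real disposition toward teamwork.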

HireVue estimates that of a total of 19 million video interviews collected among candidates, about 20 percent are conducted without the need for a human operator on the other side. “It allows us to shorten the entire hiring process,” said the head of selections for the soft drink company Dr Pepper in 2017, explaining that their procedure, which could take three weeks, ended on average in two days, often even less, thanks to HireVue.

Regarding the widely discussed risks of integrating artificial intelligence into recruiting, HireVue CEO Kevin Parker argues that analyzing an interview with software is more “impartial” than analysis by a human operator. Polli, the founder of Pymetrics, takes the same view when she argues that “each algorithm is rigorously tested to track down any biases” and that basing assessments on this type of software analysis is better than relying only on a candidate's previous experience. Curricula are useful for providing information on “hard skills” (basic technical skills), according to Polli, but they say nothing about “soft skills” (interpersonal skills), which are “fundamental for success at work.”

Other software based on artificial intelligence systems is also used by companies for the active recruitment of personnel. The platform developed by the Seattle-based company Textio, used by well-known companies such as Spotify, Tesco and Dropbox, helps compile job advertisements in language that is inclusive and easy to understand for as wide a range of candidates as possible. Another piece of software, developed by the Californian company Korn Ferry, allows companies to comb through various types of content on the Internet to identify potentially useful professional profiles and contact them before they even approach the company.

– Read also: This artificial intelligence solves a big problem

The discriminatory criteria
In 2018 a widely circulated Reuters investigation helped disclose some flaws in an experimental artificial intelligence system previously developed by Amazon for personnel selection and based on machine learning techniques. It gave candidates a score from one to five and initially seemed a very powerful and convenient tool. However, it turned out that candidates were not evaluated fairly with respect to their gender. In essence, the system had been “trained” on the résumés Amazon had received over the previous ten years, most of which came from male candidates, a reflection of the male dominance in the company's hiring of technical staff.

For the selection of software developers, for example, the artificial intelligence system had therefore followed the patterns it identified in the information submitted by programmers. And machine learning had led it to penalize résumés that included the word “women's” (as in “women's chess club captain,” for example). Sources within the company consulted by Reuters at the time cited the case of two groups of qualified candidates being downgraded solely because they had attended women's colleges.
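The mechanism behind this kind of failure can be shown with a deliberately tiny sketch. The data below is invented (Amazon's system was never published); it only mimics a historically male-skewed hiring record to show how a purely statistical score absorbs the skew in the labels.

```python
from collections import defaultdict

# Hypothetical toy data: (résumé text, past hiring outcome).
# The labels are skewed the way a male-dominated history would skew them.
resumes = [
    ("executed engineered chess club captain", 1),
    ("executed captured systems lead", 1),
    ("women's chess club captain engineered", 0),
    ("women's college systems lead", 0),
]

def token_scores(data):
    """Average hiring outcome per token - a crude stand-in for the kind of
    association a learned model can absorb from biased training labels."""
    sums, counts = defaultdict(float), defaultdict(int)
    for text, label in data:
        for tok in set(text.split()):
            sums[tok] += label
            counts[tok] += 1
    return {tok: sums[tok] / counts[tok] for tok in sums}

scores = token_scores(resumes)
# "women's" ends up with the lowest possible score even though it says
# nothing about skill: the skew was in the labels, not the candidates.
```

This is the sense in which the system “learned” discrimination: no rule about gender was ever written down, but any token correlated with the historical outcome inherits that history.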

Human interventions following the discovery of the error – necessary in any case in these machine learning processes – could not guarantee that the system would not identify and apply other discriminatory classification criteria based on the acquired data, and Amazon abandoned the project at the end of 2017. The company specified, however, that it had never based personnel selections on the rankings obtained through that tool.

Gender bias was not the only problem that emerged in the development team's tests of the system. Among the approximately 50,000 terms identified in the “training” résumé databases, machine learning had led the system to assign less weight to terms denoting skills that were fairly common in each specific personnel search (knowledge of certain programming languages, for example). Instead, it had favored the résumés of candidates whose self-descriptions used verbs that recurred more frequently in the résumés of male engineers. The result was a disproportion between the importance given to technical skills, however basic, and the choice of some terms over others in the résumé.

The limits of artificial intelligence
According to James Meachin, an occupational psychologist at the British consultancy Pearn Kandola, the current limits of artificial intelligence systems applied to personnel selection are essentially of two kinds. First, there is a basic limit common to audiovisual recording and analysis software in general, Meachin told the BBC. Even well-known and much-improved voice assistants such as those from Google, Amazon and Apple still have difficulty, for example, understanding all the words spoken by people with a strong regional accent (a well-known example in the UK is the Scottish one). Even given a correct transcription of the text, the second difficulty is semantic and concerns the meaning to be assigned to words in the absence of context, where a human evaluator would be able to grasp the nuances quite easily.

Sandra Wachter, associate professor of law at the University of Oxford, spoke to the author of the BBC article about perhaps the best-known and most discussed problem in the debate on artificial intelligence. According to Wachter, who specializes in data protection and anti-discrimination law, the current risk of basing recruitment on advanced selection algorithms – without adequate testing phases and error-correction procedures – is that of neglecting categories of people who are exceptions or minorities in the starting data.

Wachter believes that a selection system based on machine learning but not adequately tested would end up reinforcing rather than eliminating the effects of biases already present upstream, in the data used for training. “Who were the directors in the past? Who were the professors at Oxford in the past? The algorithms will then select more men,” warns Wachter, co-author of a scientific paper proposing notions of “algorithmic fairness” that have also been used for some time by Amazon. The tests proposed in the paper, useful for identifying unintended biases present in data sets, are in fact part of a suite available to customers of Amazon Web Services, the division of the company that provides cloud systems.
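Bias tests of the kind such suites offer can be illustrated with one common metric. The sketch below is a hypothetical example, not code from Wachter's paper or from AWS: it computes a disparate-impact ratio between two groups' selection rates, using the 0.8 threshold of the US “four-fifths” rule of thumb as the flagging criterion.

```python
def selection_rate(outcomes):
    """Fraction of candidates in a group who advanced (1) vs. not (0)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's.
    Values below 0.8 are conventionally flagged for human review."""
    lo, hi = sorted((selection_rate(group_a), selection_rate(group_b)))
    return lo / hi if hi else 1.0

# Invented outcomes for illustration: 1 = advanced to interview.
men = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
ratio = disparate_impact_ratio(men, women)
flagged = ratio < 0.8
```

A check like this only measures outcomes after the fact; as the critics quoted later in the article note, passing such a test does not by itself prove a selection process is fair.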

In New York, the city council recently discussed a law proposed by Democratic councilor Laurie Cumbo that would at least partially regulate the use of artificial intelligence systems in personnel selection. If approved, it would oblige companies to openly declare the possible use of such software in evaluating candidate profiles. Companies should also undergo annual reviews of technology tools, to ensure that selection processes are not conditioned by bias, and to make the results available to customers.

– Read also: The ethics of artificial intelligence

The bill – viewed favorably by several companies, including Pymetrics – has not found unanimous support and has indeed raised some concerns about the issues it would leave unresolved or aggravate. Some artificial intelligence experts – backed by authoritative groups such as the National Association for the Advancement of Colored People (NAACP), one of the first civil rights associations in the United States – find the measure suggested by Cumbo impractical. Even granting that a shared auditing standard could guarantee the absence of bias in automated selection processes, any flaws or imperfections in that auditing model would only strengthen existing discriminatory structures by giving them a sort of illegitimate quality certificate.

Opposing Cumbo's proposal, instead, are those who believe that software using artificial intelligence can identify and correct biases present in personnel selection procedures, rather than accentuating them. They fear that introducing overly strict regulations could discourage its use, thus paradoxically making selections more unfair.
