
Should facial recognition worry us?

A group of US activists used facial recognition software to photograph and identify some 14,000 people in Washington, DC, to demonstrate the risks and limitations of facial recognition as a security and surveillance tool, and the dangers the technology poses in the absence of specific legislation.

On Thursday, November 14, three people in white overalls, each with a smartphone strapped to their forehead, walked the streets and halls of Capitol Hill, the seat of the United States Congress, filming the faces of the people they passed. Their phones were connected to Amazon's facial recognition software, Rekognition, a highly refined system sold, among others, to numerous police forces in the United States. The program employs algorithms that improve as more and more faces are cataloged and recognized, and Amazon recently announced that it has further improved the system by adding a feature to detect people's mood. In a few hours, the three activists collected thousands of faces and compared them against a database that allows them to be identified; in some cases the system identified the people in the picture immediately. The whole process was live-streamed.
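To give a sense of how little code such a pipeline requires, here is a minimal sketch in Python using the boto3 library against Amazon's public Rekognition API. It is an illustration, not the activists' actual code; the collection name, image file and similarity threshold are assumptions made for the example.

```python
# Minimal sketch of a Rekognition face search (illustrative, not the
# activists' code). Assumes AWS credentials are configured and that a
# face collection named "capitol-faces" (hypothetical) has already been
# populated via index_faces().
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("passerby.jpg", "rb") as f:  # a frame captured by the phone
    image_bytes = f.read()

# Search the collection for faces matching the largest face in the frame.
response = client.search_faces_by_image(
    CollectionId="capitol-faces",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=80,  # minimum similarity score, in percent
    MaxFaces=5,
)
for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])

# The "mood" feature mentioned above corresponds to the emotion
# attributes returned by detect_faces().
details = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
for face in details["FaceDetails"]:
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top["Type"], top["Confidence"])
```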

During their action, the activists, who are part of the online rights organization Fight for the Future, said they identified a congressman, seven journalists, 25 Amazon lobbyists and even a celebrity: singer Roy Orbison, who died in 1988. That last match highlights one of the main problems of facial recognition: in many cases, the technology is simply wrong. Washington residents will also be able to upload their photos to the site linked by the organization to see whether they were recognized during the action.

Although the activists' suits were marked with a “Facial recognition in progress” notice, they did not ask anyone for consent, and their phones automatically recognized everyone who passed. The legality of their operation is precisely the problem: at the moment, no law prevents anyone from storing facial recognition data without the consent of the person being photographed. The organization said that all the collected data will be deleted after two weeks, “but there is no law on this. At the moment, sensitive facial recognition data can be archived forever.”

In a press release, the group said its message to Congress was simple: “Make what we have done today illegal.” “It's terrifying how easy it is for anyone – a government, a company or just a stalker – to set up large-scale biometric monitoring,” said Evan Greer, deputy director of Fight for the Future. “We need an immediate ban on the use of face surveillance by law enforcement and government, and we should urgently and severely restrict its use for private and commercial purposes as well.” The action was part of the BanFacialRecognition.com campaign, which has been endorsed by over thirty major civil rights organizations, including Greenpeace, MoveOn and Free Press.

Several cities in the United States have already banned facial recognition technology outright, including San Francisco, Somerville (Massachusetts), Berkeley and Oakland (California), and the issue has also entered the presidential election campaign: last August, Bernie Sanders called for a total ban on the use of facial recognition software in police work. Elizabeth Warren, Kamala Harris and Julián Castro – other candidates in the Democratic Party primaries – have said they want to regulate it.

Rekognition and other similar systems have already generated a number of controversies in the United States. Amazon employees have protested the sale of the technology to the authorities, and frequent identification errors have been reported, particularly in recognizing the faces of people of certain ethnicities. Studies have found that leading recognition software has an error rate of about 1 percent when analyzing light-skinned men, rising to as much as 35 percent for dark-skinned women. This is because, worldwide, the artificial intelligence behind facial recognition systems has been trained mainly on data from white or Asian men. According to the New York Times, one of the major databases used by facial recognition systems is more than 75 percent male and more than 80 percent white.
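To make the scale of that disparity concrete, the back-of-the-envelope calculation below applies the cited error rates to a crowd the size of the one the activists scanned. The even split between the two groups is purely an assumption for illustration.

```python
# Illustrative arithmetic only: applying the error rates cited above to a
# hypothetical crowd of 14,000 people (the figure from the activists' action).
# The even 50/50 split between groups is an assumption for the example.
scans = 14_000
rate_light_skinned_men = 0.01   # ~1 percent, per the studies cited
rate_dark_skinned_women = 0.35  # up to 35 percent

group = scans // 2  # 7,000 people per group, for illustration
print(group * rate_light_skinned_men)   # ~70 misidentifications
print(group * rate_dark_skinned_women)  # ~2,450 misidentifications
```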

This asymmetry has already caused notable embarrassments for companies in the sector: in 2015, Google had to apologize publicly after its Google Photos app labeled photos of some black people as gorillas. The main concern, of course, has to do with cases of mistaken identity and the fact that the technology could increase discrimination against some groups.

Supporters of police use of these systems say they could offer advantages in investigations and in searches for missing persons, but the arguments against them are more numerous and more solid. Woodrow Hartzog and Evan Selinger, respectively a professor of law and a professor of philosophy, argued in a 2018 article that facial recognition technology is inherently harmful to the social fabric: “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they are being monitored.”

Luke Stark, a digital media scholar who works for Microsoft Research Montreal, has advanced another argument in favor of a ban. He compared facial recognition software to plutonium, the radioactive element used in nuclear reactors: for Stark, just as plutonium's radioactivity derives from its very structure, the danger of facial recognition is intrinsically and structurally embedded in the technology itself, because it attributes numerical values to the human face. “Facial recognition technologies and other systems for visually classifying human bodies through data are inevitably and always means by which ‘race’, as a constructed category, is defined and made visible. Reducing humans into sets of legible and manipulable signs has been one of the hallmarks of racial science and administrative technologies dating back several hundred years.” The simple fact of classifying and numerically reducing the characteristics of the human face, according to Stark, is therefore dangerous, because it allows governments and companies to divide people into different races, to build subordinate groups, and then to reify that subordination by appealing to something “objective”, finally claiming that subordination is “a natural fact”.

China is already using facial recognition for this purpose in Xinjiang, an autonomous region in the north-west of the country inhabited mainly by Uyghurs, a Muslim ethnic minority accused by the Chinese government of separatism and terrorism, whose members are systematically persecuted and locked up in “re-education” camps. The New York Times has called the system “automated racism.” Instead of assuming that facial recognition is permissible, a Vox article on the subject argues, “we would do better to assume that it is prohibited, and then carve out rare exceptions for specific cases where it might be justified.”
