Facebook AI labels black people as “primates”

Once again, artificial intelligence used on social networks is at the center of debate. This time it was Facebook's turn: the company found itself in trouble after becoming aware of a new bias in its algorithms with discriminatory results. The controversy arose because its AI tagged a video featuring Black people as content related to “primates”.

According to The New York Times (via PC Mag), Facebook has apologized for the situation and has promised to find the root of the problem and fix it. What is striking is that the clip that sparked the controversy had been published for more than a year, yet the crude labeling error made by the artificial intelligence was only detected recently.

According to the report, the video in question belongs to the British newspaper Daily Mail. It was uploaded to Facebook on June 27, 2020 and, as the NYT describes it, shows “black men in altercations with white civilians and the police.” Below the player, the social network's recommendation algorithm asked users whether they wanted to “continue watching videos about primates”.

Apparently, Facebook didn't learn of the issue until a few days ago, when Darci Groves, a former content design manager at the company, shared a screenshot (provided by a third party) in a forum for current and former employees. It is not known exactly how long the error was visible, but the firm disabled the AI-based recommendations as soon as it became aware of the matter.

Although we have made improvements to our AI, we know that it is not perfect and we have more progress to make. We apologize to anyone who has seen these offensive recommendations.

Dani Lever, Facebook spokesperson, to The New York Times

Facebook joins the controversy over algorithmic biases that lead to discrimination

The problem now facing Facebook is not new. Several companies have come under scrutiny in recent years over algorithmic biases that produce discriminatory results. This has been seen mainly in facial recognition technologies, which are especially error-prone when processing dark skin tones.

Twitter was one of the first companies to openly acknowledge that its AI-based algorithms did not behave fairly. In fact, the social network rewarded users who publicly exposed problems in its algorithms, so it could address them with greater awareness.
