The controversy surrounding Facebook continues. A new article published by The Wall Street Journal reveals that Facebook knows its AI is incapable of removing more than a small fraction of hateful or violent content. Specifically, its algorithms cannot reliably detect or distinguish certain kinds of content, such as videos recorded in first person, racist speech, car crashes, or even cockfights.
Aiming to minimize the damage, Facebook has responded that the social network has reduced hate speech by almost 50% over the last three years. According to Guy Rosen, Facebook's VP of Integrity, much of this drop is due to "improved and expanded AI systems."
The documents leaked to the US newspaper, however, reveal that Facebook cut back part of the human team in charge of detecting fake news, violent content, and hate speech. Instead, it took a series of measures that reduced the volume of flagged content and attributed the drop to artificial intelligence. Yet some Facebook employees estimate that these technologies remove barely 3-5% of the content that incites hate.
So what happens to the rest of the content that clearly violates the social network's rules but that the AI cannot detect correctly? It is simply shown less frequently, but not removed. Facebook confirms it.
Facebook's AI shows suspicious content less frequently
Rosen, responding to the WSJ's claims, asserts that "focusing only on content removal is the wrong way to see how Facebook fights hate speech." He also highlights that the company's technology can reduce the distribution of suspicious content. But is it enough?
We have a high threshold for automatically deleting content. If we didn't, we would risk making more mistakes on content that looks like hate speech but isn't, harming the very people we're trying to protect, such as those who describe experiences with or condemn hate speech.
Guy Rosen, Vice President of Integrity at Facebook.

The WSJ alleges that there is content that even Facebook's AI cannot differentiate and that it labels incorrectly. For example, the Christchurch shooting in New Zealand was live-streamed by the perpetrator in first person. The AI classified some copies of the video posted by different users as "paintball games" or a "car wash," causing them to keep appearing in users' feeds.