
Platforms grappling with disinformation about Ukraine

A significant part of the reporting on Russia's invasion of Ukraine comes from the large number of photographs, videos, testimonies and updates published on social networks, where, as often happens in these situations, a bit of everything circulates and it is not always easy to distinguish reliable sources from accounts that use social media to spread propaganda and disinformation.

Twitter, Facebook, YouTube and other large platforms are working to remove misleading or blatantly false content, but their moderation efforts are not always effective, with all the risks that this entails in the midst of a large-scale military operation. Added to this is pressure from the Russian government, which has broad legislative means to control the media: according to the news agency RIA Novosti, the government could restrict access to Facebook in Russia after the social network refused to stop its checks on the pages of Russian state media.

Meta, the company that controls Facebook and has repeatedly been accused of not doing enough to contain the spread of "fake news", recently announced that it has set up a "Special Operations Center" tasked with moderating content in real time. One of its executives explained that the initiative relies on "experts (including native speakers) who allow us to keep the situation under control and to intervene as quickly as possible".

The moderation work is mainly aimed at removing misleading content spread by the Russian side to cast the Ukrainian government in a bad light. Even before the invasion, posts were circulating on Facebook claiming that the Ukrainian government was carrying out a "genocide" in the so-called Donbass, the region of eastern Ukraine that contains the self-proclaimed republics of Donetsk and Luhansk, formally Ukrainian territories run by separatists and recognized by Russia.

The posts derived from an article on the English-language site of RT, a Russian news outlet funded by the Russian government, and had circulated widely, garnering millions of views. Videos then circulated on Facebook claiming to show phases of the invasion, but which had actually been shot elsewhere, sometimes years earlier, along with photographs of Russian soldiers dressed in Ukrainian uniforms, engaged in various activities and passed off as authentic.

Visuals purportedly from #Ukraine are already circulating online an hour after Putin's announcement, but not all of them are relevant so please try to verify before you share.

This one, viewed nearly 200,000 times on Twitter alone, shows an air show in 2020 pic.twitter.com/BNQc7ddEY2

– Esther Chan (@estherswchan) February 24, 2022

Similar images of "false flag" activities, the term for tactics designed to make particular military operations appear to have been organized by adversaries, have circulated widely on social networks since Thursday, not only on Facebook but also on Twitter, whose simpler and more immediate publishing system encourages the circulation of a larger amount of content.

In the hours around the invasion, Twitter made some mistakes in moderating content, suspending various accounts. Several analysts and researchers working in open source intelligence (OSINT, the collection and analysis of information freely available on social media and elsewhere) reported that their accounts had been blocked after they shared images and videos of what was happening in Ukraine, with comments and assessments of the content shown. Analyst Nick Waters of Bellingcat, the investigative journalism site that relies heavily on OSINT, was among the first to report errors in Twitter's moderation.

Initially it appeared that the suspensions of the OSINT experts stemmed from a coordinated campaign conducted by Russia through a network of fake accounts whose only function was to report various profiles to Twitter in order to have them removed. However, a Twitter spokesperson explained that the suspensions were due to moderation errors, not a coordinated attack.

As on other social networks, content moderation on Twitter combines automated systems with manual reviews by people who evaluate the reports. At times when large numbers of tweets are being reported, errors of judgment can occur, especially if the moderators are not fluent in the language in which the content is written, leading to the suspension of accounts that have not violated the rules.

Over the years, Twitter has developed fairly precise rules to combat disinformation, as described in its policy on synthetic and manipulated media:

Sharing synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm is not allowed. In addition, tweets containing misleading media may be labeled to help people understand their authenticity and to provide additional context.

Distinguishing OSINT activity from propaganda is not always easy, partly because profiles of fake analysts can be created that then spread false news or propaganda. On the Russian side, disinformation activities of this kind were already carried out during the annexation of Crimea in 2014, and social networks had moderation problems similar to those encountered in recent days.

YouTube is not exempt from these problems, but in its case moderation clashes more markedly with commercial considerations and with freedom of expression and information. The service is very popular in Russia, where it is used both by individuals to upload their own videos and by broadcasters to distribute content, often live.

RT, the satellite channel with close ties to the Russian government, has channels in several languages on YouTube and is often accused of spreading propaganda for Russian President Vladimir Putin and his causes, including the most recent ones that led to the invasion of Ukraine. The same goes for Sputnik, another news service that receives funding from the Russian government.

Thanks to their use of other social networks and the spread of various kinds of news, including strange-but-true stories, RT and Sputnik have gathered a huge following, which is also used for propaganda purposes, as various investigations have reported in the past. As it does with other state-funded outlets, YouTube labels their channels hosted on its platform as funded by the Russian government, but according to various observers there are still numerous ambiguities about the relationship between Google and these organizations.

Criticism often focuses on the advertisements shown before videos and live broadcasts on those channels, revenues from which YouTube takes a percentage. Partly for this reason, and in light of the new economic sanctions against Russia, many have asked the platform to review its commercial relationships in the country.

YouTube also hosts numerous Russian channels which, in addition to supporting Putin, declare themselves in favor of the occupation of Ukraine, spreading false or misleading information about the military operations under way or the reasons behind the invasion. YouTube moderators can intervene by adding warnings that appear before videos play, or by blocking advertisements so that the channels' publishers earn nothing.

In the past, YouTube has responded to such accusations by pointing out that it hosts not only channels supportive of the Russian government and Putin, but also numerous channels that harshly criticize them. The platform says it wants to protect freedom of expression and consequently removes content only if it blatantly violates its terms of service.

However, the main broadcasters financed by Russia have considerable resources and the ability to reach hundreds of millions of users outside the country, thanks to their channels in different languages and their accounts on other social networks such as Twitter and Facebook. In recent weeks they have published news, later denied, of alleged attacks by the Ukrainian army against pro-Russian militias in the Donbass, and have spread false reports that NATO was preparing chemical weapons attacks that would then be blamed on Russia.

Other false information circulates on Telegram and TikTok, platforms where moderation is more difficult. Telegram rarely suspends or removes channels created by its users, although systems are available to report them and to flag violations of the terms of service.

The phenomenon of propaganda and fake news is relatively new for TikTok, a social network much younger than Twitter and Facebook, and the policies it follows for removing content are not always clear. The company says it has invested heavily in algorithms and other moderation systems, with the aim of removing videos that promote violent behavior or spread misinformation.
