How pro-Russian propaganda disrupted Meta's content moderation during Russia's invasion of Ukraine
A few days after the March 9 bombing of a maternity and children's hospital in the Ukrainian city of Mariupol, comments claiming the attack never happened began flooding the queues of workers moderating Facebook and Instagram content on behalf of the apps' owner, Meta Platforms.
Ukrainian President Volodymyr Zelensky publicly said that at least three people, including a child, were killed in the bombing. Images of bloodied, heavily pregnant women fleeing through the rubble, their hands cradling their bellies, sparked immediate outrage around the world.
Among the most recognized women was Mariana Vyshegirskaya, a Ukrainian fashion and beauty influencer. Photos of her navigating down a hospital stairwell in polka-dot pajamas, captured by an Associated Press photographer, circulated widely after the attack.
According to two contractors directly moderating content from the conflict on Facebook and Instagram, expressions of online support for the mother-to-be quickly turned into attacks on her Instagram account. They spoke to Reuters on condition of anonymity, citing nondisclosure agreements that bar them from discussing their work publicly.
The case involving the beauty influencer is just one example of how Meta's content policies and enforcement mechanisms enabled pro-Russian propaganda during the Ukraine invasion, the moderators told Reuters.
Russian officials seized upon the images, setting them side by side against her glossy Instagram photos in an effort to persuade viewers that the attack had been faked. On state television and social media, and in the chamber of the UN Security Council, Moscow alleged, falsely, that Ms. Vyshegirskaya had worn makeup and multiple outfits in an elaborately staged hoax orchestrated by Ukrainian forces.
Swarms of comments accusing the influencer of duplicity and of being an actress appeared beneath her old Instagram posts, including one in which she posed with tubes of makeup, the moderators said.
At the height of the onslaught, comments containing false allegations about the woman accounted for most of the material in one moderator's content queue, which would normally contain a mix of posts suspected of violating Meta's myriad policies, the person recalled.
"The posts were vile," and appeared to be orchestrated, the moderator told Reuters. But many fell within the company's rules, the person said, because they did not directly reference the attack. "I couldn't do anything about them," the moderator said.
Reuters was unable to reach Ms. Vyshegirskaya for comment.
Meta declined to comment on its handling of the activity involving Ms. Vyshegirskaya, but told Reuters in a statement that multiple teams were addressing the issue.
"We have separate, expert teams and outside partners that review misinformation and inauthentic behavior, and we have been applying our policies to counter that activity forcefully throughout the war," the statement said.
Meta policy chief Nick Clegg separately told reporters on Wednesday that the company was considering new steps to address misinformation and hoaxes from Russian government pages, without elaborating.
Russia’s Ministry of Digital Development, Communications and Media and the Kremlin did not respond to requests for comment.
Ukraine's diplomatic delegation did not respond to a request for comment.
‘Spirit of the policy’
Based at a moderation hub of several hundred people reviewing content for Eastern Europe, the two contractors are foot soldiers in Meta's battle to police content from the conflict. They are among thousands of low-paid workers at outsourcing firms around the world that Meta contracts to enforce its rules.
The technology giant has sought to position itself as a steward of online speech during the invasion, which Russia calls a "special operation" to disarm and "denazify" its neighbor.
Just days into the war, Meta imposed restrictions on Russian state media and took down a small network of coordinated fake accounts that it said was trying to undermine trust in the Ukrainian government.
It later said it had pulled down another Russia-based network that was falsely reporting people for violations such as hate speech or bullying, while beating back attempts by previously disabled networks to return to the platform.
Meanwhile, the company sought to carve out space for users in the region to express their anger over Russia's invasion and to issue calls to arms in ways Meta normally would not permit.
In Ukraine and 11 other countries across Eastern Europe and the Caucasus, it created a series of temporary "spirit of the policy" exemptions to its rules barring hate speech, violent threats and more. The changes were intended to honor the general principles of those policies rather than their literal wording, according to Meta's instructions to moderators seen by Reuters.
It permitted, for example, "dehumanizing speech against Russian soldiers" and calls for the death of Russian President Vladimir Putin and his ally, Belarusian President Alexander Lukashenko, unless those calls were considered credible or contained additional targets, according to the documents seen by Reuters.
The changes became a flashpoint for Meta as it navigated pressures both inside the company and from Moscow, which opened a criminal case against the firm after a March 10 Reuters report made the carve-outs public. Russia also banned Facebook and Instagram inside its borders, with a court accusing Meta of "extremist activity."
Meta walked back elements of the exceptions after the Reuters report. It first limited them to Ukraine alone and then canceled one altogether, according to documents reviewed by Reuters, Meta's public statements and interviews with two Meta staffers, the two moderators in Europe and a third moderator who handles English-language content in other regions and who saw the advisories.
The documents offer a rare lens into how Meta interprets its policies, called community standards. The company says its system is neutral and rule-based.
Critics say the system is often reactive, driven as much by business considerations and news cycles as by principle. It is a complaint that has dogged Meta in other global conflicts, including those in Myanmar, Syria and Ethiopia. Social media researchers say the approach allows the company to escape accountability for how its policies affect the 3.6 billion users of its services.
The shifting guidance over Ukraine has created confusion and frustration for moderators, who say they have on average 90 seconds to decide whether a given post violates policy, as first reported by the New York Times. Reuters independently confirmed such frustrations with the three moderators.
After Reuters published its March 10 report, Meta policy chief Nick Clegg said in a statement the next day that Meta would allow such speech only in Ukraine itself.
Two days later, Mr. Clegg told employees the company was reversing altogether the exemption that had allowed users to call for the deaths of Putin and Lukashenko, according to a March 13 internal company post seen by Reuters.
At the end of March, the documents show, the company extended the remaining Ukraine-only exemptions through April 30. Reuters was first to report the extension, which allows Ukrainians to engage in certain types of violent and dehumanizing speech that would normally be off-limits.
Inside the company, writing on an internal social platform, some Meta employees expressed frustration that Facebook was allowing Ukrainians to make statements that would have been deemed out of bounds for users posting about previous conflicts in the Middle East and other parts of the world, according to copies of the messages seen by Reuters.
“The policy seems to say hate speech and violence are okay if it targets the ‘right’ people,” wrote one employee, one of 900 comments in a post about the changes.
Meta's moderators, meanwhile, have received no new guidance expanding their ability to disable posts promoting false narratives about Russia's invasion, such as denials that civilian deaths have occurred, the people told Reuters.
The company declined to comment on its guidelines to moderators.
Denying violent tragedies
In theory, Meta did have a rule that should have enabled moderators to address the mass of vitriolic comments directed at Ms. Vyshegirskaya, the pregnant beauty influencer. She survived the Mariupol hospital bombing and delivered her baby, the Associated Press reported.
Meta's harassment policy prohibits users from "posting content about a violent tragedy, or victims of violent tragedies that include claims that a violent tragedy did not occur," according to the Community Standards published on its website. It cited that rule when it removed posts by the Russian embassy in London pushing false claims about the Mariupol bombing following the March 9 attack.
But because the rule is narrowly defined, the two moderators said, it could be used only sparingly to battle the online hate campaign against the beauty influencer that followed.
Posts that explicitly alleged the bombing was staged were eligible for removal, but comments such as "you're such a good actress" were considered too vague and had to stay up, even when the subtext was clear, they said.
Guidance from Meta enabling moderators to weigh context and enforce the spirit of that policy could have helped, they added.
Meta declined to comment on whether the rule applied to the comments on Ms. Vyshegirskaya's account.
At the same time, even explicit posts proved elusive for Meta's enforcement systems.
A week after the bombing, versions of the Russian embassy post were still circulating on at least eight official Russian accounts on Facebook, including those of its embassies in Denmark, Mexico and Japan, according to FakeReporter, an Israeli watchdog organization.
One stamped the Associated Press photos from Mariupol with a red "fake" label, with text claiming the attack on Ms. Vyshegirskaya was a hoax and directing readers to "more than 500 comments from real users" on her Instagram account condemning her for taking part in the alleged manipulation.
Meta removed those posts on March 16, hours after Reuters asked the company about them, a spokesperson confirmed. Meta declined to comment on why the posts had evaded its own detection systems.
The next day, March 17, Meta designated Ms. Vyshegirskaya an "involuntary public figure," meaning moderators could finally begin deleting the comments under the company's bullying and harassment policy, they told Reuters.
But the change, they said, came too late. The flow of posts targeting the woman had already slowed to a trickle. – Katie Paul and Munsif Vengattil / Reuters