Russia’s invasion of Ukraine dredged up concerns that the country’s aggression would extend to social media, and that the Kremlin’s long-running campaign to use the internet to stir doubt and division in democracies would confuse public opinion on the war.
Instead, social media has become an unexpectedly effective vehicle for galvanizing public opinion across many countries against Russian President Vladimir Putin’s actions, while at the same time silencing much of his propaganda.
In the five years since Russia meddled in the 2016 U.S. presidential election, companies including Facebook parent Meta Platforms Inc. and Twitter Inc. have built systems to ensure they wouldn’t be blindsided the next time. Their jobs were made easier by warnings from U.S. intelligence about the planned attack — making it more difficult for Putin to disseminate false pretexts for aggression — as well as directives from the European Union about banning Russian state media.
“There has probably been a miscalculation on Putin’s part about what the response of the West would be,” said Joshua Tucker, a professor at New York University who runs the school’s Center for the Advanced Study of Russia. In the U.S., opposition to Putin is a rare point of bipartisan agreement, making it an easy decision for Facebook and Twitter to act, he added. “It might be more dangerous for (social media companies) not to take decisive action.”
For years, Russian operatives have embedded themselves into democracies, including the U.S., through social media, running fake accounts that masqueraded as real citizens with polarizing opinions. Most famously, ahead of the U.S. election that made Donald Trump president, Russia created accounts that exchanged memes online about hot-button issues such as gun control and immigration, while encouraging Black people not to vote.
Social media companies were scolded by regulators worldwide for not catching such campaigns sooner, prompting them to invest in more sophisticated content-moderation systems. The efforts also raised questions about Putin’s goals, and whether he was aiming to cause chaos or was working toward something more specific, like weakening the global response to an invasion.
“A war in Europe, preceded by a years-long propaganda and influence campaign that destabilized, captured and divided European and U.S. populations,” Emily Bell, the director of the Tow Center for Digital Journalism, said in a Feb. 23 Twitter post. “This does not seem a random path.”
But this time, social media companies were better prepared to crack down on Russian propaganda, moving more quickly at the request of multiple governments and the EU. Facebook and Instagram have banned ads from Russian state-backed media, and Twitter isn’t showing ads in Russia at all. Snap is blocking ads from all Russian advertisers.
Facebook and Twitter have also started labeling posts that include links to Russian state-backed media outlets so people know what they’re reading. Facebook also removed a pro-Russia disinformation network that was targeting users in Ukraine.
YouTube, Google’s video site, which is hugely popular in Russia, hosts a number of news outlets and online personalities close to the Kremlin. RT, the state-backed network formerly called Russia Today, bills itself as the “most watched news network on YouTube.” This past week, YouTube removed ads from channels run by RT and other state-backed networks, blocked those channels in Europe and limited how often it recommended them to viewers. TikTok did the same.
Laura Edelson, a researcher and misinformation expert at New York University, noted her surprise in a tweet thread. “TECH PLATFORMS: They’re not totally beefing it!” she wrote. Companies were assisted by the U.S. government’s unusual approach of sharing intelligence information in order to combat false narratives coming out of Russia. “Not leaving an information vacuum for your opponent to fill makes their job much, much, harder,” she added.
Also flooding the sites: Ukraine’s messaging. President Volodymyr Zelenskyy has become a popular hero for his on-the-ground selfie videos during the war, providing a contrast with the imagery of Putin’s speeches from vast opulent ballrooms. Ukrainian government accounts have been posting videos, photos and even memes to build support for the country’s fight.
Experience may be the most important factor guiding the social media companies’ response this time. When Meta removed a disinformation network targeting Ukrainians, it was the same type of campaign the company has been taking down regularly since it first uncovered such operations in 2017. The technology and process needed to label user posts were also well established; Facebook, Instagram, Twitter and YouTube have been labeling posts for years for various reasons, including misinformation.
Not everyone is ready to applaud the social networking companies, though, especially given their track record of failing to act on problems until they become full-blown crises. Despite blocking Russian state media in the EU, YouTube has kept most of those channels up elsewhere, and some have published videos with millions of views.
The battle is not just against Russia’s content. On TikTok, for example, people have used old audio clips on top of new video to share fake “war footage,” according to a report from Media Matters. Such videos can help accounts gain followers, or solicit donations from sympathetic viewers. Even when such videos are removed within hours, they can garner millions of views.
“We continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation,” TikTok said in a statement.
Others have said the social media companies’ blocking of state media is too little, too late.
“The platforms should get no credit for taking temporary steps against some of Vladimir Putin’s disinformation websites and popular YouTube channels,” said Gordon Crovitz, co-CEO of NewsGuard Technologies, a startup that tracks news credibility. “They have known for years that their users were seeing Putin’s disinformation without warning them.”
In 2014, Russia Today was one of several state-controlled news outlets that amplified government claims that Ukraine shot down Malaysia Airlines Flight 17. Overwhelming evidence showed that the plane was shot down by Russian-backed separatists, likely by accident.
Articles from Russia Today promoting the fake story were liked and shared thousands of times on Facebook, with one post reaching nearly 6,000 likes and 4,800 shares. On Twitter, conspiracy theorists continued to link to RT and Sputnik stories to justify their wild claims.
It wasn’t an isolated incident. RT covered Crimea’s disputed 2014 referendum to join Russia as an exercise in democracy. Three years later, Sputnik claimed a Ukrainian language law was “linguistic genocide.” And in 2018, RT aired an interview with the Skripal poisoning suspects, who claimed they were tourists just visiting Salisbury Cathedral in the U.K.
Edelson noted that while it’s positive for social networks to block Russia-backed media in Europe, they should also be blocking those outlets worldwide. “Not blocking long-time spreaders of misinformation globally when the government that controls them is actively trying to lie about their war atrocities is… NUTS,” she tweeted.
Since Russia annexed Crimea in 2014, English-language outlets owned by the Russian state, such as RT, have produced large amounts of content about Ukraine. Over the past year, such articles have received more than 500,000 likes, comments and shares from Facebook users, according to research by the Center for Countering Digital Hate.
Now, the companies have political cover to be aggressive against such content. Russia’s invasion of Ukraine has been almost universally condemned, and a lot of the restrictions on Russian state-backed media have been implemented at the request of governments, including the European Union, which voted over the weekend to block Russian state media across the bloc. Opposing an invasion by an authoritarian leader isn’t the kind of thorny policy or political decision that Meta and Twitter typically face.
Social media platforms are rolling out many of the strategies developed during other challenges, such as the spread of COVID-19 conspiracy theories or the violent rhetoric that bubbled up ahead of the 2021 insurrection at the U.S. Capitol.
Meta recently bolstered its cybersecurity team with a special operations center and stopped Ghostwriter, a known threat actor with a history of spreading Kremlin-friendly propaganda, from posting content that included a YouTube video purporting to show Ukrainian soldiers waving a white flag and surrendering to Russian troops.
“Each passing incident probably updates their thinking,” Tucker said.
Government officials are still calling for platforms to do more to tackle Russian disinformation. The prime ministers of Poland and the Baltic countries also urged Google, YouTube, Meta and Twitter to take a stand against Russia by taking down the accounts of the Russian and Belarusian governments, their leaders and associates.
There is also a risk that Russia will retaliate against Western tech companies for banning its state media. Companies and experts worry that Russia could cut off its citizens’ access to credible information.
After Berlin banned RT’s German operation last month, Russia responded by revoking Deutsche Welle’s accreditation, forcing its Moscow bureau to close. Russia has also been throttling the performance of social media sites in the country, where citizens organized protests in the early days of the war.
“If the Russians now kick out all the Western media, I mean, then what’s the net loss or benefit?” said Stephan Lewandowsky, a cognitive scientist and misinformation expert at the University of Bristol. “There are always flow-on consequences of this.”