Meet Antibot4Navalny: the mysterious researchers exposing Russia’s war on truth

JUL 10, 2024

Days ago, a story started making the rounds on social media. It claimed that Olena Zelenska, the first lady of Ukraine, had recently purchased a $4.8 million Bugatti Tourbillon while she was visiting Paris for D-Day celebrations in June. 

An unnamed source in the story said she used American military aid money to pay for the car, and the story included what it said was an invoice for the vehicle. The Bugatti dealership in Paris said it was a lie, but by the time it released a statement, it was too late. The story had already gone viral.

These are the kinds of disinformation campaigns that Antibot4Navalny, an anonymous group of disinformation researchers, has been flagging since last fall in a bid to blunt Moscow’s efforts to confuse and misinform.

The “Click Here” podcast from Recorded Future News spoke recently by encrypted app with one of the leaders of the group about efforts to unmask Russian bots, their work with global researchers on disinformation and why some people are saying Antibot4Navalny is punching way above its weight as it takes on the Kremlin.

“Click Here”: What’s the best way to describe Antibot4Navalny?
Antibot: Most people describe us as an anonymous group of analysts tracking Russia-related influence operations on X, formerly Twitter. We’ve been in operation since November 2023, but I personally have been researching Russian disinformation since March 2018.
What makes you different from other anti-disinformation groups?
In a nutshell, we don’t focus on exposing or debunking fake narratives individually — in order to avoid getting on the wrong side of Brandolini’s law. You can’t take aim at individual stories and be effective. That’s why we chose to expose the channels that are pushing these stories … and dig deeper to explain what the disinformation is trying to do — its underlying agenda — on a regular, systematic basis.
How many of you are doing this?
We’re a small group. I’m the only one working full-time on this. We also count on what I would call enthusiasts, who contribute their research on a regular basis. And then in addition to that, we have dozens of loyal followers who give us specialized help when we need it.
And what made you go from disinformation researcher to leading the organization?
Before October 2023, when we really began in earnest as a group, there hadn’t been an occasion to research how Russian influence campaigns were targeting other countries. Our key focus at the time was looking at disinformation targeting Russia and Ukraine. And those were campaigns driven by troll farms, paid humans. 

Then in late October of last year, we uncovered a massive bot campaign. Bots [computer software] were posting and reposting a highly produced Russian-language video that was clearly aimed at changing the narrative of the war in Ukraine. 

It was saying two things at the same time: One, that Russia and Ukraine were brothers, and two, that the fighting was essentially breaking up a family. We assumed that it was targeting Russian and Ukrainian audiences.

But a short time later, we could see that the very same bots had widened the aperture and had started to target France, Germany, the US, Israel and Ukraine all at the very same time. They started promoting fake articles that were meant to convince people to stop sending Western aid to Ukraine.

This seemed to present an opportunity to use all our experience tracking internal Russian information campaigns and help Western audiences know what to expect. 
Antibot4Navalny has been tracking Doppelgänger, one of these Russian disinformation groups. Can you talk about them a little bit?
Doppelgänger started operating in mid-2022. Back in October, when we saw these viral posts on X claiming Ukraine’s defeat was imminent, we began to investigate. The articles were being shared on fake websites that looked like well-known news outlets in the West.

We identified the bots behind the campaign, found some unique photos that had not previously been published, and made everything public. That helped us connect with media outlets like Le Monde and Libération, and other researchers working on the Doppelgänger problem began contacting us.

We discovered all kinds of funny details about the campaign like the way they developed these accounts. They were alphabetic. All the US-associated bots started with D names; French ones used names that began with J, and German ones started with R.
What does a typical day look like for you? 
Eighty percent of my time goes to promoting the work we do. I compile new findings, pitch stories to media outlets, and post detailed X threads for our followers. The other 20% of my time is spent on what I think I do best: finding patterns, analyzing content, and automating our day-to-day routine.

However, for the past several months, 0% of my time has gone to what I think I do best: exposing new bot and troll crowds and building automated detectors.

The team spends most of its time collecting data on bots’ nightly runs. That work would benefit most from automation, but we cannot afford it yet.
How do you expose bots and trolls? Is technology changing the way you do it?
Overall, there are two streams of work: exposing a new “crowd” of bots and following the new accounts joining it to analyze trends, narratives and priorities. We focus on finding a few “species” that we suspect are inauthentic in some way and then we find what’s common between them. Then we gather sufficient evidence to prove that the accounts are inauthentic and let the world know. 

Because we track and record the content they promote and/or the topics they comment on, we get a lot of coverage. 

Machine learning used to help dramatically in making this work at scale, until Twitter discontinued free access to its Application Programming Interface (API). We are still struggling to recover.

What’s important to understand is that the point isn’t really just to look at what bots are writing about or what their specific talking points are. What they are trying to accomplish is more subtle than that. Bots are about introducing uncertainty and confusion — to undermine, not a particular story, but news more generally, to disrupt the conversation itself. That’s why they bring in as many talking points and perspectives as possible, even if they are contradicting each other. It adds to the confusion.
How have disinformation groups, like Doppelgänger, transformed over the past few years?
Doppelgänger and other influence operators are constantly experimenting in order to work around social media abuse protection measures (and X is struggling to catch up with those changes); X is becoming increasingly less transparent and accessible for researchers; and Doppelgänger seems to be learning from its own mistakes.

For example, the recurring pattern is: A few citizens of a third country are hired to do something on the ground that favors the Kremlin’s interests or agenda; a few days later, Doppelgänger bots focus on massively promoting it. It might be taking aim at an official or chipping away at support for Ukraine or some other targeted country.

Now, it seems like Doppelgänger is learning from its own experience when covering on-the-ground influence operations.

Last fall, Doppelgänger bots promoted unique photos of Stars of David in Paris that had never been published before. That was very strong evidence of a connection between Doppelgänger operators and the people behind the offline operation. Their bots promoted a publication by Doppelgänger’s original site (artichoc[.]io), which used a broadly circulated AFP photo of red handprints at a memorial — which helped with “plausible deniability.” Bots also promoted a publication by Le Figaro, a legitimate, reputable media outlet — which made the tweets posted by the bots look more authentic.
What have people gotten wrong about bots and their operations?
The most common misconception is that bots’ key goal is to promote a specific set of talking points to make an audience believe something specific.

In reality, the biggest achievement of influence operations based on trolls-for-hire is, in our opinion, that regular users suspect each other of being pro-Russian, pro-China, pro-Iran, what have you. Once they encounter someone with an opposing point of view, they prefer to stop the conversation altogether. In a sense, Godwin’s law is no longer there. It was replaced with “you’re a troll-for-hire.”

The biggest achievement of FIMI (Foreign Information Manipulation and Interference), as well as of domestic troll farms in Russia, is that it ruined the benefit of the doubt. Regular users stopped trusting each other, especially those holding views different from their own. Polarization and atomization deepened; it became increasingly difficult to find tactical allies for common goals among people with differing views. It’s “divide and conquer” at its best.
How do we fix it?
There are some options to explore: Make social media companies’ user-generated data as widely and freely available to researchers as possible; stimulate third-party developers to build an ecosystem of third-party analysis tools and libraries; and have social networks provide users with tools to help analyze accounts they have never encountered before.
What are your proudest achievements?
There are several. Among them: we exposed Matryoshka, a completely new influence operation that had never been researched before. Following our initial exposure, other organizations researched it further.

We also collected what we believe is one of the three largest datasets on Doppelgänger bot activity, which can be made available to journalists for analysis and reporting. We collected over 3,500 articles that were promoted by social media bots on X, along with all the relevant evidence out there.
What do you make of all the media interest in the work you’ve done?
We were surprised to see how incredibly interested the media is in Russian disinformation influence campaigns. In just over six months, we were quoted in about 60 stories by non-Russian media in relation to the Russian state’s FIMI alone.

At the same time, it turned out that most media outlets are not used to paying researchers as they would photo agencies, stringers or paparazzi; instead, they typically trade exposure for researchers’ viral stories.

The interview has been edited for clarity and length.

An earlier version of this story appeared on the “CLICK HERE” podcast from Recorded Future News. Additional reporting by Sean Powers and Jade Abdul-Malik.
