Mar 03, 2026 · 6 min read
The Kremlin Used ChatGPT to Plan a $600,000 Election Interference Campaign in Africa—OpenAI Just Caught Them
A pro-Kremlin media outlet used ChatGPT to generate propaganda, plan covert influence campaigns, and plot election interference across three African nations. OpenAI identified and banned the accounts on March 2, 2026.
The Network Behind the Campaign
The operation was run by Rybar, a pro-war Russian military blog with hundreds of thousands of followers on Telegram. Rybar was founded by Mikhail Zvinchuk, a former Russian Defense Ministry press officer, and his associate Denis Shchukin. Russian investigative media have reported that Yevgeny Prigozhin, the Wagner Group founder who died in a plane crash in 2023, was involved in financing Rybar's operations.
The group became known as a reliable mouthpiece for Russian military narratives during the invasion of Ukraine. What OpenAI's threat intelligence team discovered is that Rybar was quietly expanding that operation—using ChatGPT as its production infrastructure.
How ChatGPT Became a Content Farm
The network used ChatGPT accounts to generate batches of social media content in multiple languages: Russian, English, and Spanish. The content praised Russia and Belarus while criticizing Ukraine and Western governments. The posts were designed to appear authentic—crafted to look like they came from real users in different parts of the world, not from a Moscow-linked media operation.
OpenAI's researchers described the ChatGPT activity as serving as "a content farm" for the network. The accounts didn't attempt to exploit ChatGPT technically or jailbreak it—they simply used it as a professional tool, submitting prompts to generate propaganda at scale, much the way a legitimate PR firm would use AI to accelerate content production.
Beyond social posts, the accounts used ChatGPT to draft commercial proposals Rybar could offer unnamed clients: managing X and Telegram accounts, creating a bilingual investigative-journalism website focused on African affairs, securing paid placements in French-language media, and managing amplification networks. They also used Sora to generate videos. In other words, they weren't just automating content; they were building full-service influence-operation infrastructure, with ChatGPT as the operational planning tool.
Africa as the Target
The specific geographic focus of the operation is striking. The proposals sought to plan election interference in Burundi, Cameroon, and Madagascar. ChatGPT was asked to explain electoral processes in those countries and to sketch out campaign options—including, in Madagascar's case, tactics aimed at inflaming protests.
The budget for the Africa operations reached $600,000 annually. This is not a hobbyist effort. It is a professional information operation with an organizational budget and a client-facing services model.
Africa has become an increasingly important theater for Russian influence operations. The Wagner Group's military and political presence across the continent—most visibly in Mali, Burkina Faso, Niger, and the Central African Republic—created a foundation for information operations. Russia has strategic interests in weakening Western influence across African governments and in securing political support in multilateral forums. Covert influence campaigns targeting elections in Burundi, Cameroon, and Madagascar fit that pattern directly.
What OpenAI Did—and What It Cannot Fix
OpenAI identified the accounts, banned them, and published a threat intelligence report detailing the operation. The company noted it "could not verify how material was ultimately distributed" once the accounts were banned—meaning the full downstream impact of the content farm remains unknown.
That caveat matters. The accounts were caught during the planning and content generation phase. Some volume of content had already been produced. Whether it was distributed before the ban, and whether it influenced specific audiences in the targeted countries, OpenAI cannot say with certainty.
This is the structural problem with AI-assisted influence operations: the detection challenge is fundamentally asymmetric. Creating multilingual propaganda with AI is cheap, fast, and scalable. Detecting it requires active monitoring of how a platform is being used and correlating activity across accounts, work that is expensive, labor-intensive, and inevitably incomplete.
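To make that asymmetry concrete, here is a minimal sketch of one defensive technique: clustering accounts whose prompts look suspiciously alike. Everything here is hypothetical, and this is not OpenAI's actual pipeline, which has not been published; real systems correlate far more signals (timing, infrastructure, payment details) than prompt text alone.

```python
# Illustrative sketch: flag accounts submitting near-duplicate prompts.
# All data here is hypothetical; real detection pipelines (including
# OpenAI's) are far more sophisticated and are not publicly documented.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical: one representative prompt per monitored account.
prompts_by_account = {
    "acct_001": "Write 20 casual tweets praising policy X in Spanish",
    "acct_002": "Write 20 casual tweets praising policy X in English",
    "acct_003": "Summarize this article about local farming cooperatives",
}

accounts = list(prompts_by_account)
vectors = TfidfVectorizer().fit_transform(prompts_by_account.values())
similarity = cosine_similarity(vectors)

# Account pairs with highly similar prompts become candidates for
# coordinated-behavior review by a human analyst.
THRESHOLD = 0.7
for i, j in combinations(range(len(accounts)), 2):
    if similarity[i, j] >= THRESHOLD:
        print(f"review pair: {accounts[i]} / {accounts[j]} "
              f"(similarity {similarity[i, j]:.2f})")
```

Even this toy version shows the cost imbalance: the attacker's side is a single loop of prompts, while the defender's side requires cross-account visibility and a human to review every flagged pair.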
AI Lowers the Floor for State Actors
The Rybar operation is notable for what it reveals about how sophisticated state actors use AI: not as a replacement for human strategists, but as an accelerant. The humans behind Rybar designed the strategy, identified the targets, and determined the objectives. ChatGPT made the execution cheaper and faster.
Russian information operations have previously been linked to troll farms with large human workforces—the St. Petersburg-based Internet Research Agency employed hundreds of people during the 2016 US election cycle. AI dramatically lowers that threshold. The same scope of operation that required a building full of people can now be planned and partially executed by a small team with access to a commercial AI subscription.
The Rybar case is not an isolated incident. In the same threat report, OpenAI documented disruption of accounts linked to Chinese state actors using ChatGPT to research surveillance tools and translate documents. AI platforms are now a standard part of the state-level influence and espionage toolkit.
What This Means for Journalists and Activists
For journalists covering African politics, the implications are direct. The media ecosystems of Burundi, Cameroon, and Madagascar are not immune to information operations, and with AI-assisted campaigns, distinguishing authentic sources from manufactured content becomes harder. A fake investigative-journalism website with AI-generated content and paid media placements in French-language outlets is designed to look like journalism. Vetting sources in that environment requires more caution than ever, starting with cheap checks like the one sketched below.
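One such cheap first-pass signal is domain age: a purported news outlet registered a few months before an election deserves closer scrutiny. This sketch assumes the third-party python-whois package and is only one heuristic of many; it is not a method described in OpenAI's report, and WHOIS records are frequently incomplete or privacy-shielded.

```python
# Illustrative sketch: one cheap vetting signal, domain age. A "news"
# site registered weeks before an election deserves extra scrutiny.
# Assumes the third-party python-whois package (pip install python-whois).
# WHOIS data is patchy and often privacy-shielded, so treat this as one
# signal among many, never a verdict on its own.
from datetime import datetime, timezone

import whois  # provided by the python-whois package

def domain_age_days(domain: str) -> int | None:
    """Return the domain's age in days, or None if WHOIS data is missing."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days

age = domain_age_days("example.com")  # substitute the outlet's domain
if age is not None and age < 180:
    print(f"domain is only {age} days old: vet with extra care")
```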
For privacy advocates and digital rights researchers, the Rybar case is a benchmark for what AI-enabled influence operations look like at scale. OpenAI's intervention demonstrates that AI companies can play a role in disrupting misuse at the platform level. But the Rybar operation will not be the last. As long as AI tools offer cheap leverage to anyone manipulating information at scale, and as long as authoritarian governments have political objectives that benefit from covert influence, the intersection of AI and election interference will remain one of the defining challenges for anyone who relies on accurate information to do their work.