Mar 18, 2026 · 5 min read
Iran Built 300 AI-Powered Instagram Personas to Manipulate American Voters—Here's How They Did It
Meta dismantled an Iranian influence network that spent over a year building fake identities on Instagram to shape American public opinion on Middle East policy.
The Long Game of Fake Trust
The operation did not start with propaganda. It started with patience. According to Meta's threat disruption team, an Iranian influence network spent over a year building what appeared to be authentic American voices on Instagram before introducing any political messaging. The accounts posed as journalists, political commentators, and ordinary citizens. Each was given a detailed backstory and occupation. One pretended to be a political scientist. Another claimed to be a women's rights activist. A third operated as a satirical cartoonist.
All of them used AI-generated profile photos. All of them were fake. And all of them were designed with a single objective: to build enough trust with American users that when the political messaging eventually arrived, it would feel like organic opinion rather than a foreign government's talking point.
How the Network Operated
The operation began on X in 2024 before expanding to Facebook and Instagram in the summer of 2025. By the time Meta detected and dismantled it, approximately 300 accounts and pages were operating across Facebook and Instagram. The Instagram personas had collectively attracted about 41,000 followers, though Meta noted that engagement from authentic users remained minimal.
The network's content strategy was deliberately gradual. Accounts spent months posting general news, lifestyle content, and nonpolitical commentary to build the appearance of grassroots media brands. Only after establishing a baseline of credibility did they begin introducing messaging critical of Israel and U.S. Middle East policy, themes consistent with Iran's longstanding information warfare objectives.
David Agranovich, Meta's director of global threat disruption, described the approach directly. "These types of operations often try to build credibility first. They engage with people, build relationships and then introduce messaging designed to influence public conversations." The strategy treats social media manipulation like a long-term intelligence operation rather than a blunt propaganda campaign.
AI as an Influence Weapon
The use of AI-generated profile photos is not new, but the sophistication of their deployment in this campaign marks an escalation. Each persona had multiple AI-generated photos to create a consistent visual identity across posts. The images were good enough to pass casual inspection, which is all most social media users give a profile before deciding to follow it.
The broader implication is that the barrier to creating convincing fake identities at scale has effectively collapsed. Generating a photorealistic face costs nothing. Building a coherent backstory with the help of large language models requires minimal effort. The operational cost of running 300 convincing personas is now a fraction of what it would have been even two years ago, which means the volume and variety of influence operations will continue to increase.
For platform defenders, this creates an asymmetry that is difficult to resolve. Detecting AI-generated images requires specialized analysis that cannot be performed at the speed users scroll. And even when the photos are identified as synthetic, the accounts have often already served their purpose: building relationships with real users who may continue to share the messaging even after the originating account is removed.
Why Instagram Is the New Battlefield
The choice of Instagram as the primary platform is strategic. Instagram's visual format makes it easier to establish perceived authenticity through curated photos and lifestyle content. Its algorithmic feed can amplify content to users who engage with related topics, providing organic distribution that paid advertising cannot match. And its younger user base may be less practiced at identifying coordinated inauthentic behavior than audiences on more text-heavy platforms.
This campaign joins a growing pattern of state-sponsored influence operations targeting visual social media platforms. Meta's quarterly threat reports have documented similar networks linked to China, Russia, and Romania, all using Instagram as a primary or secondary distribution channel. The platform's combination of reach, algorithmic amplification, and visual trust signals makes it an ideal environment for influence operations that depend on perceived authenticity.
How to Spot State-Sponsored Fakes
Identifying coordinated influence operations as an individual user is difficult by design, but several patterns can raise suspicion. Accounts that post with unusual consistency, maintain a narrow thematic focus despite claiming to be personal accounts, or pivot abruptly from lifestyle content to political messaging deserve scrutiny. Profile photos that appear overly polished or have subtle inconsistencies around hairlines, earlobes, or background objects may be AI-generated.
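Two of these signals lend themselves to a simple illustration. The following sketch is not Meta's detection method; it is a toy heuristic over hypothetical post data (a list of posting times in hours and a list of topic labels) showing how "unusually regular posting cadence" and "abrupt pivot to political content" could be scored in principle:

```python
from statistics import pstdev

# Hypothetical topic labels; real systems would classify post text instead.
POLITICAL = {"politics", "foreign_policy"}

def cadence_score(post_hours):
    """Near-identical gaps between posts suggest scheduled, automated posting."""
    gaps = [b - a for a, b in zip(post_hours, post_hours[1:])]
    if len(gaps) < 2:
        return 0.0
    # Organic posting has irregular gaps; flag a very low spread.
    return 1.0 if pstdev(gaps) < 1.0 else 0.0

def pivot_score(topics):
    """Score an abrupt, sustained switch from lifestyle to political content."""
    first = next((i for i, t in enumerate(topics) if t in POLITICAL), None)
    if first is None:
        return 0.0
    tail = topics[first:]
    # A hard pivot keeps posting political content after the switch.
    return sum(t in POLITICAL for t in tail) / len(tail)

def suspicion(post_hours, topics):
    """Combine both heuristics into a rough 0-2 score."""
    return cadence_score(post_hours) + pivot_score(topics)
```

A persona posting exactly every 24 hours that switches permanently to political themes scores near 2.0, while an account with irregular timing and no political pivot scores 0.0. Real detection works on coordination across many accounts, not single-account scores, but the per-account signals are of this flavor.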
More broadly, the campaign is a reminder that the person behind a social media account may not be a person at all. Trust on social platforms should be earned through verifiable identity, not accumulated through engagement. When an account you do not know personally shares political content that aligns precisely with a foreign government's messaging objectives, the alignment may not be coincidental.
Meta removed the network before it achieved significant reach, but the playbook is now public. The next operation, from Iran or any other state actor, will iterate on what worked and adjust what did not. The infrastructure for AI-powered influence operations is cheap, scalable, and improving faster than the defenses designed to catch it.