Trump Posts AI Slop of Himself Dumping Feces on Protesters as "Slopaganda" Floods Social Media
The Trump administration and foreign adversaries are weaponizing AI-generated garbage to manipulate public opinion and erode trust in reality itself. From White House videos mixing real footage with video game clips to Trump posting himself as a crowned fighter pilot defecating on Americans, this "slopaganda" aims to flood the zone with so much bullshit that shared truth becomes impossible.
The White House Is Now in the Business of Posting Fake War Footage
In early March, the White House posted a video promoting US-Israeli strikes on Iran. The problem? It wasn't real. The administration mixed actual military footage with clips lifted from movies, TV shows, video games, and anime.
This wasn't a mistake. It was strategy.
Iran responded in kind, flooding social media with outdated war footage passed off as current, AI-generated attacks on Tel Aviv that never happened, and fabricated strikes on US bases in the Persian Gulf. More recently, Iranian creators produced viral Lego figurine videos depicting Trump, Jeffrey Epstein, Satan, Benjamin Netanyahu, and Ayatollah Khamenei in various scenarios.
Welcome to the slopaganda wars -- where AI-generated garbage serves as the new frontier of propaganda.
Trump Can't Stop Posting AI Slop About Himself
In October 2025, Donald Trump posted an AI-generated video of himself piloting an F-16 fighter jet while wearing a crown and literally dumping feces on American protesters below. He later posted another AI fantasy depicting his presidential library as an enormous gaudy skyscraper with a golden elevator.
These aren't deepfakes meant to deceive -- they're expressive propaganda designed to create emotional associations and normalize absurdity. No one believes Trump can actually fly a fighter jet. The point is the message: Trump as warrior-king, his critics as deserving of literal shit.
Researchers Mark Alfano and Michal Klincewicz coined the term "slopaganda" in a paper published in Filosofiska Notiser to describe exactly this phenomenon: AI-generated slop that serves propagandistic purposes. Their definition covers communication intended to manipulate beliefs, emotions, attention, and memory to achieve political ends -- now turbocharged by generative AI.
How Slopaganda Slips Past Your Defenses
Slopaganda works because it exploits how we consume information. When you're scrolling social media or toggling between browser tabs, your mental defenses are down. The content is designed to be attention-grabbing and emotionally arresting -- usually in negative ways that trigger anger, fear, or disgust.
The Iranian Lego videos aren't trying to convince anyone that plastic figurines are real. They're creating associations: Trump equals Satan equals evil. The repetition and emotional charge do the work, not factual accuracy.
This is what philosophers call "bullshit" in the technical sense -- content that is indifferent to truth. ChatGPT and other generative AI tools are essentially bullshit machines, churning out plausible-sounding text without regard for whether it's accurate. Slopaganda takes that bullshit and weaponizes it for political ends.
The Real Danger: When You Can't Trust Anything
The immediate problem is that some slopaganda is genuinely misleading. During conflicts, crises, and emergencies -- when people desperately want information but authoritative sources are scarce -- AI-generated deepfakes and fabricated footage can spread quickly. Even if only a small percentage of people are fooled, that can be enough to influence election results, protest movements, or public sentiment about an unpopular war.
The deeper problem is what happens when slopaganda becomes ubiquitous. People will get better at spotting AI-generated content, but they'll also start misidentifying authentic content as fake. Public trust in genuinely trustworthy sources will collapse.
When it becomes impossible to distinguish real from fake, you can just choose to believe whatever makes you feel good, angry, or vindicated. In polarized societies already struggling with economic, political, and environmental crises, the breakdown of shared truth makes everything worse.
Alfano and Klincewicz call this the "nihilistic doubt in really knowing anything." It's not just that you can't trust this specific video or that specific claim. It's that you can't trust anything at all.
What Can Actually Be Done
The researchers propose interventions at three levels, though none are silver bullets.
First, individuals can become more digitally literate -- learning to spot telltale signs of AI generation, checking sources instead of just reading headlines, and blocking accounts that routinely spread slopaganda. This helps people avoid falling for fake content while still trusting legitimate news sources.
Second, tech companies and regulators can implement watermarking requirements for AI-generated content and remove the most egregious slopaganda from platforms where people get news.
Third, the companies that created these tools -- OpenAI, Google, X -- need to be held accountable through taxation and regulation. The money raised could fund both enforcement efforts and digital literacy education.
Slopaganda is probably here to stay. The question is whether we can adapt to it before shared reality becomes impossible -- or whether we're already too late.
When the President of the United States is posting AI-generated videos of himself shitting on protesters, the answer might already be clear.