
How the Internet Broke Everyone’s Bullshit Detectors | WIRED

From AI-generated images to restricted satellite data, the systems used to verify what’s real online are struggling to keep up.

propaganda, artificial intelligence, open source, satellite images, Iran

Overview


Lego-style propaganda videos alleging war crimes are flooding online feeds, echoing the White House’s own turn toward cryptic teaser clips and meme-native visuals. This is not just content drift. It is a new front in the information war, one where speed, ambiguity, and algorithmic reach matter as much as accuracy.

Details

One Iran-linked outlet, Explosive News, can reportedly turn around a two-minute synthetic Lego segment in about 24 hours. The speed is the point. Synthetic media does not need to hold up forever; it only needs to travel before verification catches up.

Last month, the White House added to that confusion when it posted two vague “launching soon” videos, then removed them after online investigators and open source researchers began dissecting them.

The reveal turned out to be anticlimactic: a promotional push for the official White House app. But the episode demonstrated how thoroughly official communication has absorbed the aesthetics of leaks, virality, and platform-native intrigue. When official accounts themselves mimic a leak, questioning whether a record is real or synthetic is the only defensive move left.

A zero digital footprint used to signal authenticity. Now, it can signal the opposite. The absence of a trail no longer means something is original—it may mean it was never captured by a lens at all. The signal has inverted. Truth lags; engagement leads.

Automated traffic now commands an estimated 51 percent of internet activity and is scaling eight times faster than human traffic, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don’t just distribute content; they prioritize low-quality virality, ensuring the synthetic record travels while verification is still catching up.


Open source investigators are still holding the line, but they are fighting a volume war. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) now has to navigate.

“We’re perpetually catching up to someone pressing repost without a second thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm prioritizes that reflex, and our information is always going to be one step behind.”

At the same time, the surge of war-monitoring accounts is beginning to interfere with reporting itself. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist investigating war crimes, points to the false certainty created by the flood of aggregated content on Telegram and X.

“Open source verification starts to create false certainty when it stops being a method of inquiry—through confirmation bias, or when OSINT is used to cosmetically validate official accounts or knowingly misapplied to align with ideological narratives rather than interrogate them,” Ganguly says.

While this plays out, the verification toolkit itself is becoming harder to access. On April 4, Planet Labs—one of the most relied-upon commercial satellite providers for conflict journalism—announced it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, following a request from the US government.

The response from US defense secretary Pete Hegseth to concerns about the delay was unambiguous: “Open source is not the place to determine what did or did not happen.”

That shift matters. When access to primary visual evidence is restricted, the ability to independently verify events narrows. And in that narrowing gap, something else expands: Generative AI doesn’t just fill the silence—it competes to define what’s seen in the first place.

Generative AI platforms have been learning from their mistakes. Henk van Ess, an investigative trainer and verification specialist, says many of the classic tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in the latest generation of models. Tools like Imagen 3, Midjourney, and DALL·E have improved in prompt understanding, photorealism, and text-in-image rendering.

But the harder problem is what van Ess calls the hybrid.

In these cases, 95 percent of an image is a real photograph: real metadata, real sensor noise, real lighting physics. The manipulation sits in a single detail—a patch added to a uniform, a weapon placed into a hand, a face subtly swapped in. Pixel-level detectors often clear it because they are scanning what is, in most respects, a genuine image. The fake can be one square inch.

“Every old method assumed the image was a record of something,” says van Ess. “Generative media breaks that assumption at the root.”

Henry Ajder, a deepfake researcher and AI adviser who has tracked synthetic media since 2018, goes further. AI is no longer obvious, he says; it is embedded. The volume of high-quality synthetic content now circulating online means the era of visible errors is ending. What replaces it is content that looks entirely credible.

The tools built to detect it have their own limits. Detection systems are not truth engines, Ajder says. Even the strongest tools fail often enough to matter, and most return a confidence score without explaining how that score was reached. “Detection tools should never be used as a sole signal to determine action,” Ajder says.
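Ajder’s warning can be made concrete as a triage policy: detector scores route an image toward scrutiny, but never decide on their own. A minimal sketch, in which the detector names, scores, and threshold are all hypothetical:

```python
def triage(signals: dict[str, float], threshold: float = 0.7) -> str:
    """Treat detector confidence scores as one signal among several.

    `signals` maps a (hypothetical) detector name to its reported
    confidence that an image is synthetic. No single score triggers
    a verdict; disagreement, or any flag at all, routes the image
    to further checks rather than to a conclusion.
    """
    high = [name for name, score in signals.items() if score >= threshold]
    if not signals or len(high) == 0:
        return "no automated flag; verify provenance manually"
    if len(high) < len(signals):
        return "detectors disagree; escalate to human review"
    return "multiple independent flags; still requires corroboration"

# One detector is confident, the other is not: escalate, don't conclude.
print(triage({"detector_a": 0.91, "detector_b": 0.42}))
# detectors disagree; escalate to human review
```

The point of the sketch is the return values: every branch ends in more verification, because a percentage score without an explanation is a prompt, not evidence.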

Infrastructure for verifying an image’s origin doesn’t yet exist at scale. Until it does, the burden shifts elsewhere: onto the people consuming the images in the first place.

Van Ess breaks it down into five steps anyone can apply—not as guarantees, but as ways to slow the spread.

Look for Hollywood. If an image feels too cinematic—too dramatic, too evenly lit, too composed—that’s a signal. Real catastrophe is rarely symmetrical. If everyone looks ready for their close-up, that’s your first tell.

Run multiple reverse image searches. Google Lens, Yandex, and TinEye each surface different results. A lack of matches no longer proves originality. It may mean the image was never photographed at all.

Zoom into the margins. Not the landmark, but the parking sign, the manhole cover, the shadow angle. These peripheral details are often where inconsistencies show up—the parts no one generating a fake is paid to perfect.

Treat detection tools as prompts, not verdicts. A percentage score without explanation is not evidence. Tools that show where an image first appeared, or whether it exists in fact-checker databases, are more useful than a single confidence rating. Image Whisperer is one free tool that combines these signals.

Find “patient zero.” Trace the image to its earliest appearance. Authentic material usually arrives attached to a person—a witness, a photographer, a location. Synthetic content often appears frictionless: anonymous, polished, and already formatted for sharing.
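The “patient zero” and reverse-search steps both rest on matching an image against earlier copies. The crudest first pass is exact file matching with a cryptographic hash; a minimal sketch, where the byte strings stand in for real image files:

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a file's raw bytes.

    Useful as a first pass when tracing an image's earliest
    appearance: byte-identical files hash identically, so matches
    across archives point to the same underlying upload.
    """
    return hashlib.sha256(data).hexdigest()

# Two byte-identical files share a fingerprint...
a = b"\x89PNG fake-image-bytes"
b = b"\x89PNG fake-image-bytes"
assert file_fingerprint(a) == file_fingerprint(b)

# ...but a single re-encode, crop, or screenshot changes every bit
# of the hash, which is why investigators rely on perceptual hashing
# and reverse image search rather than exact matching alone.
c = b"\x89PNG fake-image-bytes, recompressed"
assert file_fingerprint(a) != file_fingerprint(c)
```

Exact hashing only catches unmodified copies; the services named in the checklist exist precisely because shared images are almost always re-encoded in transit.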

Ajder, who has advised companies including Adobe and Synthesia, argues that the long-term solution is not better detection alone but provenance: systems that can verify origin rather than endlessly chasing what is fake.

In a system where synthetic content moves faster than it can be verified, the only real defense may be behavioral: hesitation. A pause before the repost. A few minutes of scrutiny in a system designed to reward none.

This story was originally published by WIRED Middle East.

In your inbox: Upgrade your life with WIRED-tested gear

In your inbox: Upgrade your life with WIRED-tested gear

What you need to know about the foreign-made router ban

What you need to know about the foreign-made router ban

Big Story: Anduril wants to own the future of war tech

Big Story: Anduril wants to own the future of war tech

How Trump’s plot to grab Iran's nuclear fuel would actually work

How Trump’s plot to grab Iran's nuclear fuel would actually work

Key Takeaways

  • Synthetic propaganda moves faster than verification: the Iran-linked outlet Explosive News can reportedly produce a two-minute Lego-style segment in about 24 hours

  • A zero digital footprint no longer signals authenticity; it may mean an image was never captured by a lens at all

  • Access to primary evidence is narrowing, with Planet Labs withholding satellite imagery of Iran and the broader Middle East conflict zone at the US government’s request

  • “Hybrid” fakes that alter one detail of a genuine photograph defeat pixel-level detectors, so confidence scores should prompt scrutiny, not verdicts

  • The long-term fix is provenance infrastructure; until it exists at scale, the best defense is a pause before the repost
