AI deepfakes: The latest weapon in Pakistan’s disinfo wars

I have spent the past few weeks staring at what seem to be digital ghosts; they look human, speak the way we do, and cause anger and despair. But they are not real. They are machines wearing human skin.

Something is shifting in Pakistan’s social media landscape. It is not merely a rumour. It is not the old cycle of misinformation. It feels darker. Artificial intelligence is reshaping reality frame by frame until the internet resembles a hall of mirrors.

For me, it began on November 8, when an account called ‘PakVocals’ posted a video on X that claimed to show journalist Benazir Shah dancing in a nightclub. The caption was cruel, mocking her with derogatory remarks aimed at her professional credibility. At the time of writing, the video had garnered more than half a million views.

It did exactly what it was designed to do: turn a journalist into a target. Mission accomplished!

To most viewers, the clip probably passed as real, but for me, something felt off.

When the mask slips

I opened the clip in DaVinci Resolve — video editing software that can be downloaded online — and watched it frame by frame, because deepfakes are rarely perfect; they usually leave behind little crumbs of evidence.

And at the 30th frame, I found it!

For a split second, the skin tone flickered and the outline of the face rippled like water. The mask, it seems, had slipped: a clear sign of face manipulation.

Screengrab from deepfake video showing clear signs of face manipulation.
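This kind of flicker hunting can even be partly automated. Below is a minimal sketch, assuming you have already measured the average brightness of the face region in each frame; the values here are hypothetical stand-ins, and a real workflow would extract them from the video with a tool such as OpenCV or FFmpeg.

```python
# Flag frames where face-region brightness jumps abruptly -- the kind
# of flicker that betrays a swapped face. The brightness values below
# are hypothetical; in practice they would come from a frame extractor.

def flag_flickers(brightness, threshold=15.0):
    """Return indices of frames whose brightness jumps past the threshold."""
    flagged = []
    for i in range(1, len(brightness)):
        if abs(brightness[i] - brightness[i - 1]) > threshold:
            flagged.append(i)
    return flagged

# Steady skin tone for 30 frames, a sudden dip at frame 30, then recovery.
frames = [128.0] * 30 + [95.0] + [128.0] * 10
print(flag_flickers(frames))  # -> [30, 31]: the flicker in and back out
```

A single suspicious index is exactly the kind of lead that tells you where to pause the clip and look closely.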

I then used Google Lens, another free online tool, to scan multiple screenshots of the deepfake video. The aim was to trace the video’s source, and several matching results soon popped up, one after the other.

The body in the video didn’t belong to Benazir Shah but to Indian actress Jannat Zubair Rahmani — the clothes were the same, and so was the lighting. The only thing the fakers had changed was the face.

A screengrab of the original video of Indian actress Jannat Zubair.

On X, Shah called out the deepfake video and noted that the account was followed by the information minister of the incumbent government. That is when the story took a strange turn: ‘PakVocals’ issued an apology, citing religious fear of slander. Yet despite the pious language, the video is still up, suggesting the true intention was to damage someone’s reputation.

On November 18, another X account, ‘Princess Hania Shah’, posted a new deepfake of Benazir Shah, accusing her of being a “traitor”. A lazy edit, I thought to myself. But the intent, again, was dangerous.

This time, I didn’t even need editing software. I used Google Lens to put the fake video side-by-side with the original footage I found online. They were perfect mirrors — same moves, same lighting, just a different face. It took five minutes to prove it was a lie. Yet, more than 180,000 people had already watched it.

A screengrab of the Nov 18 X post.
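The matching logic behind such a side-by-side comparison can be sketched with a tiny perceptual “difference hash”. The grids below are hypothetical miniature grayscale frames, not real video data; reverse-search services like Google Lens use far more sophisticated versions of the same idea.

```python
# A tiny "difference hash" (dHash): each frame is reduced to one bit per
# horizontally adjacent pixel pair, so near-identical frames produce
# near-identical hashes. The 3x2 grids are hypothetical mini-frames.

def dhash(grid):
    """Hash a grayscale grid: one bit per left-vs-right pixel comparison."""
    bits = []
    for row in grid:
        for x in range(len(row) - 1):
            bits.append('1' if row[x] > row[x + 1] else '0')
    return int(''.join(bits), 2)

def hamming(a, b):
    """Count differing bits between two hashes; 0 means a near-exact match."""
    return bin(a ^ b).count('1')

original = [[10, 20, 30], [30, 20, 10]]   # frame from the source clip
face_swap = [[12, 22, 31], [31, 19, 9]]   # same scene, new noise, new face
unrelated = [[30, 20, 10], [10, 20, 30]]  # a genuinely different frame

print(hamming(dhash(original), dhash(face_swap)))  # -> 0 (match)
print(hamming(dhash(original), dhash(unrelated)))  # -> 4 (no match)
```

Because only the face was replaced, the body, clothes and lighting still hash like the original footage — which is precisely why the source clip was so easy to find.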

Fact-checking — a necessity

X, formerly Twitter, has a crowd-sourced system called “Community Notes” designed to add context to misleading posts. But it has failed too many times: deepfake videos remain on the platform without a warning label.

What’s worse, when users asked X’s own AI chatbot, Grok, to verify the videos, it failed to identify them as AI-generated, revealing the limitations of billion-dollar algorithms. Simply put, we cannot rely on tech giants to protect us; if independent fact-checkers do not do the job, nobody will.

With every passing day, fact-checking is becoming a necessity, especially during times of conflict and wars, because AI-manipulated content is not just consumed by ordinary people but also by mainstream news media.

During the Israel-Iran conflict in mid-2025, a viral post on X allegedly showed analysts at an Israeli news channel fleeing their studio after a purported Iranian strike. While the footage initially seemed convincing, closer inspection revealed ghostly movements, unnaturally smooth camera panning and flat audio. Yet many Pakistani news outlets aired the clip as an authentic event.

A screengrab of the AI video that went viral during the Israel-Iran war.

Just a few hours after the Benazir Shah deepfakes, I found another such video. It claimed to show Pakistani soldiers abusing a Baloch woman in a desert setting. The account that posted the clip, ‘Yousuf Bahand’, added a taunt: “Some will say this is fake.”

He was right. It is a fake!

I went back to the frame-by-frame analysis. The clues were in plain sight. The name tags on the soldiers’ uniforms read “PRMRACCH” and “BAMY” — a classic AI failure at rendering text, hallucinating letterforms and producing gibberish.

Screengrab of visual inconsistencies in the video.
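A crude version of this gibberish check can even be scripted. The heuristic below is an illustration only, not a real detector: it flags vowel-less tokens and long consonant runs, which catches “PRMRACCH” but would wave through a plausible-looking fake like “BAMY”, for which you would still need a dictionary or a human eye.

```python
# A crude gibberish heuristic for text rendered inside AI video, such as
# hallucinated uniform name tags. Illustration only: flags tokens with
# no vowels, or with four or more consonants in a row.

import re

def looks_gibberish(token):
    """Return True if a token looks like hallucinated letterforms."""
    t = token.lower()
    if not re.search(r'[aeiou]', t):                # no vowels at all
        return True
    if re.search(r'[bcdfghjklmnpqrstvwxz]{4}', t):  # long consonant run
        return True
    return False

for tag in ["PRMRACCH", "BAMY", "ARMY"]:
    print(tag, looks_gibberish(tag))  # PRMRACCH True, BAMY False, ARMY False
```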

The clip was crammed with other visual inconsistencies as well. For one, one of the supposed soldier’s hands appeared to merge into the woman’s.

Not just the video; the audio gave it away too. When the soldier yelled “damn it”, it came out in a clean American accent, nothing like what you would expect in that setting. So I ran the clip through FFmpeg, a free command-line tool for processing and analysing audio and video.

The data showed something you never see in a real desert recording: 0.21 seconds of total silence at the very start. In a windy, open environment, the mic always picks up ambient noise the moment it switches on. That dead air only happens when someone pastes the audio track onto the video later.
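The silence check itself is simple to reproduce. Here is a sketch using only Python’s standard library, run on a synthetic WAV file built in memory; the 0.21-second figure below is constructed for demonstration and is not taken from the actual clip.

```python
# Measure dead air at the start of an audio track. A genuine open-air
# recording carries ambient noise from the very first sample; a track
# pasted on in editing often begins with pure digital silence.
# Demonstrated on a synthetic 16-bit mono WAV built in memory.

import io
import struct
import wave

def leading_silence_seconds(wav_file, amp_threshold=100):
    """Seconds of near-zero samples before the first audible one."""
    with wave.open(wav_file, 'rb') as w:
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = struct.unpack('<%dh' % (len(raw) // 2), raw)
    for i, s in enumerate(samples):
        if abs(s) > amp_threshold:
            return i / rate
    return len(samples) / rate

# Build a stand-in clip at 1000 Hz: 0.21 s of zeros, then "wind".
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(1000)
    w.writeframes(struct.pack('<1000h', *([0] * 210 + [500] * 790)))
buf.seek(0)
print(leading_silence_seconds(buf))  # -> 0.21
```

FFmpeg users can get a similar readout straight from the command line with its silencedetect audio filter.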

Nor is this a one-off exploitation of real events. I remember, for instance, the wide circulation of clips showing cows drowning during the 2025 Punjab floods. Instead of authentic chaos, the videos presented a sterile calm: unnaturally still, perfectly framed, and entirely lacking the expected camera tremor or human sounds.

Unfortunately, most viewers appeared to stop at the initial shock of the disaster, arguably failing to question the engineers behind the synthetic content.

The ‘AI Slop’

A recent BBC investigation revealed that Pakistan has become a global hub for “AI Slop”. Creators appear to be mass-producing fake content, using real-world settings to spread lies, just to game algorithms and earn money. The incentive is simple: virality pays, truth does not.

Until early 2024, AI videos were mostly a glitchy novelty shared by enthusiasts, but the landscape shifted with OpenAI’s Sora, which set off a race among heavy hitters such as xAI’s Grok and Google’s Veo.

But to their credit, Google has stepped in with a partial remedy: its Gemini chatbot can now flag images created with Google’s own AI tools by detecting SynthID watermarks, a technology that embeds imperceptible signals into AI-generated images, audio, and text to mark them as machine-made.

The tools I used to expose these lies are free. Anyone can use them. But unfortunately, the tools to make these lies are just as accessible. That is the frightening symmetry: anyone with an ulterior motive, whether financial or otherwise, can fabricate a reality, but not everyone will have the incentive to break through such fabrications.

AI isn’t a futuristic threat anymore. It is a street-level weapon. The next viral clip you see might not be real at all, yet the truth still sits there beneath the distortions. You only have to slow down long enough to find it. Sometimes it is a flicker on a cheek. Sometimes it is a melting hand. Sometimes it is a silence in the waveform.

The ghosts can be unmasked, but only if we choose to look closely.



from Dawn - Home https://ift.tt/o54Jzc6
