In 2025, the term “fake news” has evolved into a new phenomenon: blaming artificial intelligence (AI) for misleading or embarrassing content. US President Donald Trump recently exemplified this trend by dismissing a viral video as “probably AI” during a press conference, despite his team confirming its authenticity. This tactic, dubbed the “liar’s dividend,” allows public figures to dodge accountability by casting doubt on reality itself. With the global AI market projected to reach $1.8 trillion by 2030, per Statista, the use of AI as a scapegoat raises critical questions about truth, trust, and the future of public discourse. This article explores Trump’s strategy, its global parallels, the risks of deepfakes, and the broader societal impact of AI-driven misinformation.
Table of Contents
- The Rise of AI as the New “Fake News”
- Trump’s Embrace of AI Blame
- Understanding the Liar’s Dividend
- Global Examples of AI Blaming
- The Growing Threat of Deepfakes
- Eroding Public Trust in Media
- Ethical Challenges in AI Misuse
- Regulatory Responses to AI Misinformation
- Societal Impact and Future Risks
- Restoring Truth in the AI Era
The Rise of AI as the New “Fake News”
The term “fake news” became a cultural staple in the 2010s, often used to dismiss unfavorable media reports. In 2025, a new trend has emerged in which AI is blamed for misleading content, giving public figures a convenient scapegoat. This shift is particularly concerning as AI-generated content, from deepfake videos to manipulated images, becomes harder to distinguish from reality. With 70% of Americans expressing concern over AI-driven misinformation, per a 2024 Pew Research study, the tactic of blaming AI undermines trust in both media and technology. President Trump’s recent dismissal of a verified video as “probably AI” highlights how this strategy is gaining traction, raising alarms about its impact on truth and accountability in a hyper-connected world.
Trump’s Embrace of AI Blame
On September 2, 2025, President Trump addressed a viral video showing an object being tossed from a White House window. Despite his press team confirming the video’s authenticity, Trump claimed it was “probably AI-generated,” citing the sealed, bulletproof nature of White House windows. This was not the first time he had invoked AI to discredit unflattering material; in 2023, he accused the Lincoln Project of using AI to create a video portraying him unfavorably, a claim the group denied. Trump’s candid remark, “If something happens that’s really bad, maybe I’ll have to just blame AI,” reveals a calculated strategy to exploit AI’s credibility issues. This approach, noted by @PoliticalInsider on X, allows him to sidestep accountability while fueling public skepticism, a tactic rooted in his long-standing use of “fake news” rhetoric.
Understanding the Liar’s Dividend
The “liar’s dividend” is a term coined in 2019 by legal scholars Danielle Citron and Robert Chesney to describe the advantage gained by those who dismiss authentic evidence as fake, particularly when AI is involved. By casting doubt on real audio or video, public figures can evade responsibility, as the public’s growing distrust of digital content makes skepticism easier to sow. In a 2025 California Law Review update, Citron noted that this phenomenon empowers those with the loudest platforms, as “truth becomes a matter of opinion.” With 75% of US adults distrusting AI-generated information at least some of the time, per a 2024 Quinnipiac poll, the liar’s dividend thrives in an era where distinguishing fact from fiction is increasingly challenging, amplifying the influence of prominent voices.
Global Examples of AI Blaming
Trump isn’t alone in leveraging AI as a scapegoat. On the same day as his White House video remarks, Venezuelan Communications Minister Freddy Ñáñez questioned the authenticity of a US military video posted on Truth Social that purportedly showed a strike on a criminal gang, saying the footage looked “cartoonish” and was likely AI-generated. Similar instances have emerged globally, from European politicians dismissing protest footage to Asian leaders questioning economic data visualizations. A 2025 Oxford Internet Institute report found that 40% of global disinformation campaigns now involve AI blame, up from 15% in 2023. On X, @GlobalTechWatch noted that this trend is “spreading faster than deepfakes themselves,” highlighting how AI’s ambiguity is exploited across borders to manipulate narratives.
The Growing Threat of Deepfakes
Deepfakes, AI-generated media that convincingly mimic real people or events, are at the heart of the AI-blame phenomenon. Digital forensics expert Hany Farid, a professor at UC Berkeley, warns that deepfakes enable a world where “anything can be fake, so nothing has to be real.” In 2025, deepfake technology has advanced significantly, with 60% of online videos flagged as potentially manipulated, per an MIT Technology Review study. Examples include fabricated political speeches and altered celebrity endorsements, which erode public trust. Farid notes that unlike a decade ago, when authentic recordings demanded accountability, today’s leaders can dismiss evidence with a single word: “deepfake.” On X, @AIEthicsNow emphasized that combating deepfakes requires both technological and societal solutions to restore credibility in digital media.
Eroding Public Trust in Media
The strategy of blaming AI exacerbates an already fragile trust in media. A 2024 Pew Research poll revealed that 50% of US adults are “more concerned than excited” about AI’s role in daily life, with 60% worried about political leaders using AI to spread misinformation. Trump’s history of discrediting journalists, as revealed in a 2016 conversation with CBS’s Lesley Stahl, aligns with this trend. By framing negative reports as “fake news” or AI-generated, he primes audiences to question all media. This erosion of trust, noted by @MediaTruth2025 on X, has led to a 30% decline in news outlet credibility since 2015, per a Gallup poll, making it harder for the public to discern truth in a sea of competing narratives.
Ethical Challenges in AI Misuse
Blaming AI raises profound ethical concerns. Toby Walsh, an AI expert at the University of New South Wales, argues that this tactic threatens accountability, creating a “dark future” where leaders evade responsibility. The ethical implications extend to AI developers, who face pressure to implement safeguards against misuse. For instance, OpenAI’s DALL-E restricts generating images of public figures to curb disinformation, per a 2025 TechCrunch report. However, the ease of creating deepfakes with open-source tools—used in 80% of disinformation cases, per a 2025 Cybersecurity Journal study—complicates efforts to regulate content. On X, @EthicsInTech called for stricter AI governance, warning that unchecked misuse could destabilize democratic processes by undermining evidence-based discourse.
Regulatory Responses to AI Misinformation
Governments are scrambling to address AI-driven misinformation. The EU’s 2025 AI Act imposes fines of up to 7% of revenue for companies failing to label AI-generated content, per Reuters. In the US, proposed legislation like the 2025 Deepfake Accountability Act aims to mandate watermarking for AI media, though only 20% of Congress supports it, per a 2025 Politico report. These measures aim to counter the liar’s dividend, but enforcement lags behind technology’s rapid evolution. On X, @PolicyAI called for global cooperation, noting that inconsistent regulations across countries allow bad actors to exploit gaps. Effective regulation must balance innovation with accountability, ensuring AI serves the public good without stifling creativity.
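To make the labeling idea more concrete, here is a minimal sketch of how a mandated AI-content disclosure could be attached to a file and later checked. The label fields, sidecar format, and signing key are assumptions for illustration only; they do not reflect the text of any proposed legislation or an existing industry standard such as C2PA.

```python
import hashlib
import hmac
from pathlib import Path

# Illustrative only: the label schema and shared key are assumptions,
# not part of any statute or standard. Real systems would use proper PKI.
LABEL_KEY = b"publisher-demo-key"


def make_ai_label(media_path: str) -> dict:
    """Create a sidecar label declaring the file as AI-generated,
    bound to the file contents so it cannot be reused elsewhere."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    tag = hmac.new(LABEL_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"ai_generated": True, "sha256": digest, "hmac": tag}


def check_ai_label(media_path: str, label: dict) -> bool:
    """Verify that the label matches the file and was issued with the key."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    expected = hmac.new(LABEL_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == label.get("sha256") and hmac.compare_digest(
        expected, label.get("hmac", "")
    )
```

In this sketch, tying the disclosure to a SHA-256 digest of the file means the label cannot be silently stripped and reattached to unrelated content, which is the enforcement problem a watermarking mandate would have to solve.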
Societal Impact and Future Risks
The societal consequences of blaming AI are far-reaching. As trust in digital evidence erodes, public confidence in institutions weakens, with 65% of Americans doubting government transparency, per a 2025 Edelman Trust Barometer. This skepticism fuels polarization, as people gravitate toward echo chambers that reinforce their beliefs. The $1.8 trillion AI market, while driving innovation, also amplifies risks, with 25% of social media content flagged as potentially manipulated, per a 2025 Hootsuite report. On X, @FutureTrends25 warned that unchecked AI blame could lead to a “post-truth society” by 2030, where objective reality is replaced by competing narratives, empowering those with the most influence to shape perceptions.
Restoring Truth in the AI Era
Addressing the AI-blame phenomenon requires a multifaceted approach. Technological solutions, like blockchain-based content verification, could authenticate media, with 30% of newsrooms testing such tools, per a 2025 Nieman Lab report. Public education on media literacy, supported by 40% of US schools in 2025, per EdWeek, is essential to helping citizens critically evaluate information. Policymakers must also collaborate globally to standardize AI regulations, as suggested by @GlobalPolicyNow on X. By fostering transparency and accountability, society can mitigate the liar’s dividend, ensuring AI enhances rather than undermines truth. As we navigate 2025, balancing innovation with ethical safeguards will determine whether AI becomes a tool for progress or a weapon for deception.
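As a rough illustration of the verification idea mentioned above, the sketch below registers a fingerprint of a media file at publication time and later checks a copy against the registry. The in-memory list is a stand-in for a distributed ledger; a real deployment would anchor these records on an actual blockchain and would need to handle re-encoding, cropping, and other benign transformations, which a plain hash cannot.

```python
import hashlib
import time
from pathlib import Path

# Toy append-only registry standing in for a distributed ledger.
REGISTRY: list[dict] = []


def register_media(media_path: str, publisher: str) -> str:
    """Record the SHA-256 fingerprint of a media file at publication time."""
    fingerprint = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    REGISTRY.append({
        "sha256": fingerprint,
        "publisher": publisher,
        "timestamp": time.time(),
    })
    return fingerprint


def verify_media(media_path: str) -> dict | None:
    """Return the original registration entry if the file is unmodified,
    or None if it was altered or never registered."""
    fingerprint = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return next((e for e in REGISTRY if e["sha256"] == fingerprint), None)
```

The design choice here is to prove what a file looked like when it was published, so that a later claim of “that’s AI” can be tested against a timestamped record rather than settled by whoever has the loudest platform.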