AI Detection Risk Management

Forged Memos: AI in the Service of Internal Sabotage

Matteo Chevalier

This article is written for exclusively informational and educational purposes. It does not constitute legal advice and should not be relied upon as a substitute for professional legal counsel. The information presented reflects the state of applicable laws as of the date of publication and is subject to change.


Your internal documents can be forged in two minutes—and your teams won’t see it

What you don’t see can destroy your organization

A memo lands in a journalist’s inbox. It bears the official header of a government agency, the right names, the right fonts. It looks urgent. It is entirely fake—produced by a machine in less than two minutes.

This is no longer a hypothetical scenario. Malicious actors are doing it today, every day, against newsrooms, financial institutions, ministries, and compliance teams. And the threat has changed in nature: it no longer comes only from foreign states. It also comes from partisan networks, operators funded from abroad through opaque schemes, and actors who use the rules of democracy—right to information, freedom of the press—as attack tools.

The real scale of the problem

Fraud involving AI-forged documents crossed a critical threshold between 2024 and 2025. It is no longer a phenomenon reserved for state espionage operations. It is a daily threat for any organization that handles documents, identities, or information.

+200%: increase in document fraud using synthetic faces in one year [Resemble AI — Deepfake Incident Database (2025)]

How public opinion became an industrial target

Elections: a full-scale laboratory

In 2024 and 2025, more than 4 billion people in over 60 countries went to the polls [Hany Farid — Deepfakes in the 2024 US Presidential Election (2024)]. This historic concentration turned fake-creation tools into weapons accessible to everyone. Political action committees, national campaigns, and foreign powers used the exact same technologies.

Fake production has become industrial

Malicious actors no longer work alone in the shadows. The Russian campaign known as Doppelgänger (a documented disinformation operation that uses hundreds of fake sites and coordinated accounts to flood journalists and fact-checkers with forged documents) mobilized 673 messaging channels and more than 11,000 accounts to overwhelm journalists with forged documents [CheckFirst — Operation Overload 2 (2025)]. The goal was not to convince—it was to exhaust verification teams.

Tools are now accessible to anyone

What required professional resources in 2020 is free in 2025. A voice clone (technology that reproduces a real person’s voice from a short audio recording, making them say sentences they never uttered) requires only a few seconds of audio. Generating a fake official document takes a few minutes. Your organization can be targeted by anyone with an internet connection.

The three attack mechanisms to know

  1. Coordinated partisan amplification

Accounts affiliated with radical political movements use fabricated content not as an end in itself, but as fuel. The mechanism is simple: mass-produce audio, video, or documentary fabrications designed to incriminate, inject them simultaneously into hundreds of coordinated accounts, then comment on and share each piece while presenting themselves as independent journalists.

Producing is almost free. Detecting at scale is not.

That is precisely where the dangerous asymmetry lies: attackers don’t need to bypass detection—they need to make it economically unsustainable. The right response is therefore not to pretend to scan everything, but to identify the priority entry points in your organization and concentrate controls there.

  2. Fake local newspapers (“pink slime networks”)

Hundreds of websites are designed to look like independent local newspapers, yet publish only algorithmically manufactured ideological content.

In June 2024, the number of these fake sites in the United States had reached 1,265—surpassing for the first time the number of real local newspapers still operating, estimated at 1,213 [NewsGuard — Pink Slime Report (2024)]. These sites adopt innocuous local-sounding names, calculated to imitate the reassuring typography of local press built over decades.

According to the Tow Center for Digital Journalism, one of these networks published more than 1.3 million articles in 2025, and its founder said he wanted to expand the infrastructure to ten thousand sites [Tow Center for Digital Journalism — Pink Slime Journalism (2025)]. More than 90% of the articles distributed are computer-generated with no real editorial work.

Until 2023, this production was limited by its marginal cost. Integrating large language models into these pipelines removed that brake. The result: not only biased articles about real events, but credible articles about events that never happened—supported by fabricated visual and audio documents.

  3. Foreign funding hidden behind domestic relays

On 4 September 2024, the US Department of Justice indicted two Russian nationals, employees of a state media outlet, for funneling about $10 million to a US company to spread Russian government messages to American audiences [DOJ — Two Russian Nationals Charged (2024)].

The company in question, Tenet Media, is a network of commentators based in Tennessee. The funds were concealed behind the fictional identity of a “wealthy European investor”—a character entirely invented to hide the origin of the money from the recruited influencers. The produced content generated more than 16 million views, carried by personalities with several million followers.

For attackers, buying real influencers is a far better investment than creating fake profiles: they bring their own trusted audience.

The latest shift: using your own rules against you

The newest and least documented vector is not technical. It is legal.

Two days after the November 2024 elections, a network of fake local newspapers began filing thousands of requests for access to public records—1,114 requests addressed to each municipality in Wisconsin in a few days [SEJ — How Pink Slime Publishers Are Weaponizing FOIA (2025)]. Over the year, that same network reportedly filed more than 9,000 requests, while offering its clients a full catalog of services—site creation, ads, public-record requests, robocalls, SMS—billed to political action committees [Columbia Journalism Review — Metric Media FOIA (2025)].

Hostile actors no longer attack institutions only through illegal channels. They use the rules of those institutions as an attack surface. No existing regulatory framework is currently designed to respond to this tactic.

Why your current defenses are no longer enough

Human verification has reached its limit. An experienced analyst cannot reliably identify a high-quality fake by eye or by ear. Today’s generation tools produce content whose anomalies are only visible through algorithmic analysis.

A study conducted by the European Broadcasting Union across 14 languages shows that 45% of the answers provided by AI assistants on political news contain at least one significant accuracy problem [EBU — AI assistants misrepresent news content 45% of the time (2025)]. The tools your teams use to stay informed can themselves steer them toward incorrect data.

The US government has tools to fight foreign influence campaigns, but no clear mechanism to address disinformation produced by entities that appear to be domestically funded. In Europe, the DSA (Digital Services Act, a European regulation requiring large digital platforms to fight illegal content and disinformation) imposes obligations on very large platforms—but not on the thousands of mid-sized sites that form the backbone of fake local newspaper networks.

Legal framework and remedies: what matters most

Applicable obligations, takedown timelines, and available remedies vary depending on the platform, the electoral or media rules involved, the national law, and the competent jurisdiction. The decisive point is the ability to document quickly whether content is authentic or altered and to preserve usable evidence.

In that context, forensic analysis should be presented as support for qualification, reporting, and evidence-building, not as a substitute for a judicial decision, regulatory ruling, or formal legal advice.
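In practice, preserving usable evidence starts with something simple: fingerprinting the suspect file and recording when it was received, so that any later analysis can prove the file was not altered in the meantime. The sketch below illustrates this idea with a SHA-256 digest and an append-only log; the log format and function name are illustrative, not a legal standard.

```python
# Minimal evidence-preservation sketch: hash the suspect file and
# append a timestamped record to a local log. Illustrative only;
# real chain-of-custody requirements depend on the jurisdiction.
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(path: str, log_path: str = "evidence_log.jsonl") -> dict:
    # Fingerprint the file contents so later tampering is detectable.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only JSON-lines log: one record per analyzed document.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Because the digest is recomputable by any third party, the same record can support both internal reporting and a later formal proceeding.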

Concrete scenario: a fake official memo intercepted before publication

This scenario illustrates an attack technique documented in the 2025 reports of the European External Action Service (EEAS) and US Cyber Command [US Cyber Command — Russian Disinformation Campaign Doppelgänger Unmasked (2024)].

Context

A newsroom specialized in geopolitical news receives a document presented as an internal memo from a US government agency. It claims the agency secretly funds communication programs targeting the public opinion of a foreign country. The sender imitates the email address of an independent journalist known to the newsroom. Two senior journalists consider it potentially publishable.

What algorithmic analysis detects

Anomaly 1 — Synthetic generation: the file’s internal metadata reveals a nighttime creation time and a synthetic document production tool, both incompatible with working hours at the agency’s headquarters in Washington.

Anomaly 2 — Non-compliant font: the header characters show micro-deformations invisible when reading. The font is a generated approximation, not the agency’s certified institutional font.

Anomaly 3 — No archive reference: the reference number on the memo matches no indexed document. The document structure shows similarity with templates identified in documented disinformation campaigns.
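Checks like the first anomaly above become mechanical once the file's metadata has been extracted. The sketch below assumes the metadata has already been parsed into a dictionary; the field names ("created", "producer"), the tool whitelist, and the business-hours window are all illustrative assumptions, not DeepForgery's actual rules.

```python
# Flag simple metadata inconsistencies in an already-extracted
# document-metadata dict. Field names, the producer whitelist, and
# the business-hours window are illustrative assumptions.
from datetime import time
from zoneinfo import ZoneInfo

KNOWN_PRODUCERS = {"Microsoft Word", "Adobe Acrobat"}  # expected issuer tooling
BUSINESS_HOURS = (time(7, 0), time(20, 0))             # plausible office hours

def metadata_flags(meta: dict, issuer_tz: str = "America/New_York") -> list[str]:
    flags = []
    # Convert the creation timestamp to the issuer's local time zone
    # and compare it against normal working hours.
    created = meta["created"].astimezone(ZoneInfo(issuer_tz))
    start, end = BUSINESS_HOURS
    if not (start <= created.time() <= end):
        flags.append(f"created outside business hours: {created:%H:%M} local")
    # A producer string outside the issuer's known tooling is suspect.
    producer = meta.get("producer", "")
    if not any(p in producer for p in KNOWN_PRODUCERS):
        flags.append(f"unexpected producer tool: {producer!r}")
    return flags
```

Each flag is a cheap, explainable signal; none is proof of forgery on its own, which is why the scenario combines several independent anomalies.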

Without automated controls

  • Publication of a fake, with the associated legal and reputational risks
  • Potential editorial liability engaged
  • Internal crisis, emergency handling
  • Risk of non-compliance with the DSA

With automated controls integrated into the pipeline

  • No fraudulent publication, no remediation cost
  • Documented proof of an active verification process
  • Team focused on editorial work
  • DSA compliance demonstrated through forensic traceability

How to integrate detection into your critical processes

Detecting synthetic content is significantly more effective upstream than reacting after publication or after an incident.

DeepForgery integrates directly into your workflows for handling critical documents, through two integration modes:

Via API (a programming interface that allows two IT systems to communicate with each other): direct connection to your existing moderation, document-verification, or content-management tools. Each submitted document, image, video, or audio file is analyzed before reaching validation or publishing. The result takes the form of a confidence score with a readable explanation—what anomaly was detected, on which part of the content, with what level of certainty—without slowing your processing flow.

In on-premise mode: for environments subject to strict confidentiality or data-sovereignty requirements (financial sector, defense, public institutions), DeepForgery can be deployed entirely on your organization’s internal servers, with no data transiting outside your infrastructure.
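As an illustration of the API integration mode, the sketch below submits a file for analysis and gates the workflow on the response. The endpoint URL, response fields ("confidence", "anomalies"), and the threshold are hypothetical placeholders, not DeepForgery's documented API.

```python
# Illustrative pipeline integration: submit a file to a document-analysis
# service, then decide whether it may proceed to publication. Endpoint,
# response shape, and threshold are hypothetical.
import json
import urllib.request

API_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint

def submit_for_analysis(data: bytes) -> dict:
    """POST raw file bytes to the analysis service and return parsed JSON."""
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def review_decision(result: dict, threshold: float = 0.8) -> str:
    """Gate the workflow on the analysis result.

    Suspicious content is held for human review, never auto-rejected,
    so the control layer supports editorial judgment without replacing it.
    """
    if result["confidence"] >= threshold and not result.get("anomalies"):
        return "publish"
    return "hold_for_review"
```

Keeping the decision step separate from the network call makes the gating logic easy to test and to audit, which matters when the analysis trail itself serves as compliance evidence.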

What integration brings in practice

  • A clear signal for every incoming piece of content, before any human decision
  • Documented traceability of every analysis, usable as proof of compliance
  • Models continuously updated against new generation tools identified in documented incidents

A control layer that complements human editorial judgment without replacing it or slowing it down

What this threat tells you about what comes next

Synthetic content and AI-driven document fraud are no longer emerging risks. They form an operational attack infrastructure, used daily by actors ranging from a malicious competitor to states orchestrating documented interference campaigns on an international scale.

What the Tenet Media affair reveals—and what fake local newspaper networks confirm—is that the threat now has a hybrid geography: it is born inside legal borders, funded from the outside, and uses democratic institutions as a vector. It does not bypass the rules—it exploits them.

The framework around synthetic content, platform governance, and internal controls continues to evolve. Organizations should align their procedures with the rules applicable to their sector and have the concrete obligations validated by their legal or compliance teams.

When an official-looking document is flagged before publication or validation, the organization can pause the workflow, document the alert, and decide the next steps before operational, legal, or reputational exposure increases.

Start for free right now: sign up in 2 minutes and test DeepForgery on your first documents. 5 free analyses per day, no credit card, instant activation. Try for free.
#AI #Deepfake #OfficialDocuments #RiskOps