AI Detection Risk Management

Deepfake-as-a-Service: What the Underground Market Changes for Companies

Matteo Chevalier

This article is written for exclusively informational and educational purposes. It does not constitute legal advice and should not be relied upon as a substitute for professional legal counsel. The information presented reflects the state of applicable laws as of the date of publication and is subject to change.

Deepfake-as-a-Service: The AI Black Market Against Your Company

Deepfakes cost up to 400× less than in 2020, and humans distinguish them from real content only about 55% of the time, according to third-party studies. An analysis of the DFaaS market and the risks for companies.

The Dark Web sells ready-to-use deepfakes: can your company still detect them?

Introduction

A compliance manager receives a video of an executive asking for urgent approval. The face is familiar, the voice too. The request seems consistent with an ongoing operation. A few minutes later, a financial transfer is triggered. The problem appears only afterwards: the person on the screen never took part in the call.

The scale of the criminal deepfake market

The development of deepfakes (synthetic media generated or modified by artificial intelligence to imitate a real person in an image, a video, or an audio recording) is no longer a marginal phenomenon. It is now a structured economy.

The dominant model today is called Deepfake-as-a-Service (DFaaS): fraudsters can buy or rent deepfake creation tools without any particular technical skill.

In practice, the barrier to entry has collapsed.

According to a Kaspersky publication from 2025, the cost of producing a deepfake reportedly fell by a factor of roughly 400 between 2020 and 2025. [Kaspersky — Deepfake services are now 400 times cheaper (2025)]

According to data from cybersecurity platforms, a voice clone usable for phone fraud could be generated for USD 1 to 30 on some clandestine platforms. [DeepStrike — Deepfake Statistics 2025 (2025)]

In addition, according to a study published by James Cook University in 2025, average human accuracy at distinguishing a deepfake from real content is reportedly about 55%, barely better than chance. These figures come from third-party research and do not constitute a validation of our own tools' performance. [James Cook University — Human Performance in Deepfake Detection (2025)]

For companies, the consequence is direct: sophisticated fraud becomes cheap and industrializable.

Why companies are becoming prime targets

Mass digitization of identity processes

Companies now rely on digital identification procedures such as KYC (Know Your Customer), the regulatory process used by financial institutions to verify customers' identities and prevent fraud and money laundering. These automated processes speed up account opening and customer onboarding.

But they also introduce an attack surface: forged documents, manipulated videos, and synthetic identities.

Social engineering boosted by AI

Deepfakes reinforce already-known attacks, for example:

  • CEO fraud
  • phone scams
  • fraudulent approval of a payment
  • access to internal systems

In these scenarios, the image and the voice serve as apparent proof of authenticity.

The convergence between document fraud and generative AI

Dark Web markets now offer synthetic identity packs combining fake identity documents, AI-generated photos, voice samples, and biometric verification videos. This combination makes fraud harder to detect with traditional controls.

Why current defenses fail

Most organizations still rely on two types of protection: human verification and document recognition tools. These approaches have structural limits.

Human analysis is slow and vulnerable to cognitive biases. When a request seems to come from an executive or a known partner, the human brain activates what psychologists call the truth bias: the tendency to believe what we see.

Traditional technologies also have shortcomings. Document recognition systems or OCR (Optical Character Recognition) engines, which automatically extract text from scanned documents, often verify the presence of information, but not its authenticity. Yet a modern deepfake can perfectly reproduce the appearance of a document or a face.

5 concrete signs of a forged document

Even sophisticated forgeries can leave traces. Here are five anomalies frequently observed by digital forensics experts:

  • Lighting inconsistencies — Shadows or reflections do not match the document's environment.
  • Artificial skin textures — AI-generated faces sometimes show irregular texture transitions.
  • Incoherent metadata — The file reveals a modification history incompatible with the document date.
  • Incompatible typography — Some characters come from fonts that do not exist in official documents.
  • Abnormal image compression — Modified areas show different digital signatures.

These anomalies are often invisible to the human eye, but can be detected by AI models.
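As an illustration of the "incoherent metadata" sign above, a very crude check is to scan a file's raw bytes for traces of common image-editing software. The signature list below is purely illustrative, and real forensic tools parse EXIF/XMP metadata properly rather than searching for substrings; this sketch only shows the principle.

```python
# Illustrative sketch: flag files whose bytes contain traces of editing
# tools. The signature list is an assumption, not an exhaustive catalog.
EDITOR_SIGNATURES = [b"Adobe Photoshop", b"GIMP", b"Affinity Photo"]

def find_editor_traces(file_bytes: bytes) -> list[str]:
    """Return names of editing tools whose signatures appear in the file."""
    return [sig.decode() for sig in EDITOR_SIGNATURES if sig in file_bytes]
```

A supposedly camera-original selfie or ID photo whose metadata mentions "Adobe Photoshop" does not prove fraud, but it justifies routing the file to a deeper analysis.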

Illustrative scenario: attempted fraud during bank onboarding

This scenario is entirely fictitious and presented for illustration only. It does not describe a real client case. It aims to illustrate the type of attacks that forensic analysis technologies can be designed to detect.

1 — Context

A European retail bank offers fully online account opening. A new customer submits an ID card and proof of address during the onboarding process (the integration of a new customer into a system or service). The documents look correct, the video selfie is validated. The application is about to be accepted.

2 — What a forensic analysis solution can detect

In this type of scenario, a forensic solution is designed to identify anomalies such as:

  • A face-swap on the document photo (face replaced by an AI-generated face)
  • Metadata inconsistencies revealing the use of retouching software
  • An address already associated with known fraudulent cases

3 — Possible actions if detected

When anomalies are identified, the system can, depending on its configuration, automatically block account activation, trigger an alert to the compliance team and the MLRO (Money Laundering Reporting Officer, the anti-money-laundering compliance officer within the organization), and forward the file for manual verification.
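The decision logic described here can be sketched as a simple score-to-action mapping. The thresholds and action names below are illustrative assumptions; real values depend entirely on each organization's configuration.

```python
def route_onboarding_file(risk_score: float) -> str:
    """Map a fraud risk score in [0, 1] to an action.

    Thresholds are illustrative placeholders, not product defaults.
    """
    if risk_score >= 0.9:
        # Block activation and alert the compliance team / MLRO.
        return "block_and_alert_mlro"
    if risk_score >= 0.5:
        # Forward the file to a human analyst for manual verification.
        return "manual_review"
    return "approve"
```

Keeping this routing in one small, testable function makes the escalation policy auditable, which matters when decisions must be justified to regulators.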

4 — Comparing situations

Actual results may vary depending on configurations, use cases, and the specifics of each organization.

This type of attack corresponds to what authorities describe as synthetic identity fraud, a phenomenon documented by several financial-crime bodies.

DeepForgery detection: three levels of analysis

Level 1 — Visual signal analysis

Each image contains an invisible mathematical structure. When an element is modified, this structure can change. The DeepForgery engine is designed to analyze these variations, like a medical scanner looking for anomalies in tissue.
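One classic visual-signal check in this family is copy-move detection: flagging identical pixel blocks that appear at more than one position, as happens when one region of an image is cloned over another. The pure-Python sketch below is a toy version of the idea, not the engine's actual method.

```python
def find_duplicate_blocks(pixels, size=2):
    """Flag identical size×size blocks appearing at more than one position.

    `pixels` is a grayscale image as a list of rows of ints. Copy-move
    forgeries leave such duplicates. Real detectors use robust
    descriptors (DCT coefficients, keypoints) because flat areas of
    genuine photos also repeat; this shows only the core idea.
    """
    seen = {}
    for y in range(len(pixels) - size + 1):
        for x in range(len(pixels[0]) - size + 1):
            block = tuple(
                tuple(pixels[y + dy][x + dx] for dx in range(size))
                for dy in range(size)
            )
            seen.setdefault(block, []).append((x, y))
    return [locs for locs in seen.values() if len(locs) > 1]
```

Each returned group lists the (x, y) positions sharing an identical block, giving an analyst concrete regions to inspect.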

Level 2 — File structure analysis

A digital file contains a technical history. The engine examines in particular the order of modifications, the tools potentially used, technical metadata, and image layers. This analysis can reveal manipulations invisible in the final image.

Level 3 — Logical consistency analysis

The system can compare information with reference databases, for example to identify an address already associated with fraud, detect an inconsistency between a document and a customer profile, or recognize a fraud pattern already observed.
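The logical-consistency idea can be sketched as a cross-reference of submitted data against lists of attributes already linked to fraud, plus internal coherence checks. The sets and field names below are illustrative placeholders, not real reference databases or product schemas.

```python
# Illustrative reference data; real systems query shared fraud databases.
KNOWN_FRAUD_ADDRESSES = {"12 rue exemple, 75001 paris"}
KNOWN_FRAUD_DEVICE_IDS = {"device-4242"}

def consistency_flags(application: dict) -> list[str]:
    """Return logical red flags for a hypothetical onboarding application."""
    flags = []
    if application.get("address", "").strip().lower() in KNOWN_FRAUD_ADDRESSES:
        flags.append("address_linked_to_fraud")
    if application.get("device_id") in KNOWN_FRAUD_DEVICE_IDS:
        flags.append("device_linked_to_fraud")
    # Internal coherence: the document and the declared profile must agree.
    if application.get("document_birth_year") != application.get("declared_birth_year"):
        flags.append("birth_year_mismatch")
    return flags
```

Each flag is cheap to compute in isolation, and it is the combination of flags, rather than any single one, that distinguishes a fraud pattern from noise.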

DeepForgery Media and DeepForgery Documents

DeepForgery Documents is designed to analyze the authenticity of digital files and documents. DeepForgery Media is designed to detect manipulation in images, videos, and audio recordings. Both technologies can work together to cover different fraud vectors linked to digital identities.

FAQ

Can DeepForgery systems analyze different types of documents?

The engines are designed to analyze multiple formats: images, PDFs, videos, or audio recordings. They can be used to detect forgeries in identity documents, administrative supporting documents, or media used during an authentication process. Actual capabilities depend on configurations and specific use cases.

How does this approach differ from a simple OCR or manual verification?

An OCR engine reads the text of a document. It does not verify whether the document is authentic. Forensic detection analyzes the visual and technical structure of the file to look for potentially invisible manipulations. It does not replace human judgment but can complement it.

Does integration disrupt existing processes?

The engines are designed to integrate into existing identity verification or onboarding flows. They analyze documents in the background and return a risk score. Exact integration modalities depend on each technical environment.
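The "background analysis + risk score" pattern can be sketched as follows. None of these function names come from the DeepForgery product; the analyzer here is a deliberate stand-in so that the integration shape, rather than the engine itself, is what the example shows.

```python
def analyze_document(file_bytes: bytes) -> float:
    """Stand-in for a forensic engine call; returns a risk score in [0, 1].

    Placeholder heuristic so the sketch runs: real engines inspect
    signal, file structure, and logical consistency, not a marker byte.
    """
    return 0.9 if b"TAMPERED" in file_bytes else 0.1

def process_onboarding(file_bytes: bytes) -> dict:
    """Attach a risk score to an application without changing the flow."""
    score = analyze_document(file_bytes)
    return {"risk_score": score, "needs_review": score >= 0.5}
```

The key design point is that the existing flow only consumes a score and a boolean, so the analysis engine can evolve, or be swapped, without touching the onboarding logic.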

What teams can concretely gain

  • Automated analysis of documents and media for signs of tampering
  • Potential contribution to reducing the risk of document fraud
  • Generation of technical evidence usable in regulatory audits
  • Fast analysis, compatible with large-scale processes

Concrete benefits depend on the deployment context, configuration, and use cases of each organization.

Conclusion

Deepfakes are no longer a technological curiosity. They are now an accessible criminal infrastructure, capable of producing synthetic identities and media at very low cost.

At the same time, financial and data-protection regulations require increasingly strict identity checks. Organizations have an interest in equipping themselves with mechanisms that help strengthen the detection of document fraud.

When a forged document is identified upstream, it does not generate a regulatory investigation, financial loss, or a crisis of trust. It is simply removed from the process.

Start for free right now: sign up in 2 minutes and test DeepForgery on your first documents. 5 free analyses per day, no credit card, instant activation.
#AI #Deepfake #Fraud #RiskOps