
Types of Deepfakes Now Targeting User Accounts and Identities
By VELSICURO
12 November 2025

Introduction: The Evolution of Fraud in the AI Era

Deepfake technology, which leverages artificial intelligence (AI) to create highly convincing fake audio and visual content, has become one of the most dangerous cyber threats. While deepfakes were initially used mainly for entertainment or political disinformation, cybercriminals are now repurposing them for financial gain. In Indonesia, this threat increasingly targets bank accounts, personal identities, and even corporate assets.

We must be aware that this threat is no longer limited to video but has expanded into voice and real-time interactions.

3 Main Types of Deepfakes Targeting Your Finances

Here are the types of deepfakes most commonly used to launch social engineering and financial fraud attacks:

1. Voice Deepfakes

This is the easiest and fastest type of deepfake to deploy. Cybercriminals need only a short voice sample of the target (obtainable from social media posts or old call recordings) to clone that voice.

  • Modus Operandi: The fraudster calls the victim, imitating the voice of a family member, boss, or company executive, and requests an urgent fund transfer, a password, or an authentication token. Because the voice sounds identical to the real person's, victims are far less likely to become suspicious.

  • Target: CEO fraud attacks now widely use deepfake voice to order large fund transfers from the finance department (a simple verification policy is sketched after this list).
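
As a concrete illustration of how organizations counter this modus operandi, below is a minimal sketch (in Python) of a callback policy for payment requests. The threshold, field names, and function names are illustrative assumptions, not a specific product or banking API.

    # Minimal sketch: flag voice- or video-initiated transfer requests for
    # callback verification. Threshold, fields, and names are illustrative assumptions.
    from dataclasses import dataclass

    CALLBACK_THRESHOLD = 10_000  # amounts above this are always verified

    @dataclass
    class TransferRequest:
        requester: str   # who the caller claims to be
        amount: float    # requested amount
        channel: str     # "voice", "video", "email", "in_person"

    def needs_callback_verification(req: TransferRequest) -> bool:
        """Never act on a voice or video request alone: call back a number
        already on file before moving money."""
        if req.channel in ("voice", "video"):
            return True  # channels that can be deepfaked
        return req.amount > CALLBACK_THRESHOLD

    # Example: an "urgent" phone call claiming to be the CEO
    request = TransferRequest(requester="CEO", amount=250_000, channel="voice")
    if needs_callback_verification(request):
        print("Hold the transfer and verify via a number on file, not the inbound call.")

The point of such a rule is that the decision never depends on how convincing the caller sounds; it depends only on the channel and the amount.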

2. Video Deepfakes (Face Swapping and Lip-Syncing)

Although more computationally intensive, deepfake video is highly effective at deceiving visual identity-verification systems and participants on video calls.

  • Modus Operandi: The fraudster records a video of themselves speaking, then swaps in the target's face and synchronizes the lip movements (lip-syncing). This is used to:

    • Bypass KYC (Know Your Customer): Applying for loans or opening new bank accounts online under a false identity (a liveness-check sketch follows this list).

    • Video Call Fraud: Tricking colleagues during virtual meetings into divulging confidential information or convincing them to grant system access.
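
To make the KYC risk concrete, here is a minimal sketch (in Python) of a server-side challenge-response liveness step: the system issues a random, short-lived instruction that a pre-recorded or face-swapped video stream is unlikely to satisfy on the spot. The challenge texts, expiry window, and function names are illustrative assumptions, and the actual video analysis is out of scope here.

    # Minimal sketch of a challenge-response liveness step for a KYC flow.
    # Challenge texts, expiry window, and names are illustrative assumptions.
    import secrets
    import time

    CHALLENGES = [
        "Turn your head slowly to the left",
        "Read this phrase aloud: 'blue river seven'",
        "Blink twice, then smile",
    ]
    CHALLENGE_TTL_SECONDS = 30  # the response must arrive quickly, live on camera

    _issued = {}  # session_id -> (challenge, issued_at)

    def issue_challenge(session_id: str) -> str:
        """Pick an unpredictable challenge so it cannot be pre-recorded."""
        challenge = secrets.choice(CHALLENGES)
        _issued[session_id] = (challenge, time.time())
        return challenge

    def verify_response(session_id: str, performed_challenge: str) -> bool:
        """Accept only the exact challenge that was issued, within the time window.
        (A real system would also analyze the video itself; omitted here.)"""
        record = _issued.pop(session_id, None)
        if record is None:
            return False
        challenge, issued_at = record
        fresh = (time.time() - issued_at) <= CHALLENGE_TTL_SECONDS
        return fresh and performed_challenge == challenge

    # Example usage
    sid = "session-123"
    print("Ask the applicant to:", issue_challenge(sid))

The randomness and the short time limit are what make this harder for a deepfake pipeline: the attacker cannot prepare the footage in advance.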

3. Text Deepfakes and Personalization (AI Text Generation)

While not a deepfake in the visual/audio sense, the use of generative AI to create highly convincing text is part of the new fraud ecosystem.

  • Modus Operandi: Cybercriminals use Large Language Models (LLMs) to generate grammatically perfect and personalized phishing emails or WhatsApp messages (often mimicking the target's writing style), making them difficult to distinguish from genuine communication.

  • Target: Increasing the success rate of phishing attacks by eliminating the spelling and grammar errors that typically serve as red flags (a filtering sketch that does not rely on those red flags follows this list).
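
Because flawless grammar is no longer a useful signal, filters have to weigh other indicators. Below is a minimal sketch (in Python) that scores a message on two of them: a mismatch between the claimed organization and the sender's actual domain, and the co-occurrence of urgency and payment language. The keyword lists, example addresses, and scoring are illustrative assumptions, not a production filter.

    # Minimal sketch: score a message on signals other than spelling or grammar.
    # Keyword lists, example addresses, and scoring are illustrative assumptions.
    URGENCY_TERMS = {"urgent", "immediately", "right away", "before end of day"}
    PAYMENT_TERMS = {"transfer", "invoice", "payment", "bank account", "wire"}

    def domain_of(address: str) -> str:
        return address.rsplit("@", 1)[-1].lower()

    def suspicion_score(sender: str, claimed_org_domain: str, body: str) -> int:
        """Higher score = more suspicious. Grammar is deliberately ignored."""
        score = 0
        text = body.lower()
        # Signal 1: the sender's real domain does not match the claimed organization.
        if domain_of(sender) != claimed_org_domain.lower():
            score += 2
        # Signal 2: urgency and payment language appear together.
        has_urgency = any(term in text for term in URGENCY_TERMS)
        has_payment = any(term in text for term in PAYMENT_TERMS)
        if has_urgency and has_payment:
            score += 2
        return score

    # Example: a perfectly written message that still trips both signals
    body = "Please process this transfer immediately; the invoice is attached."
    print(suspicion_score("ceo@velsicur0-mail.example", "velsicuro.com", body))  # 4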

Prevention Steps: Verification is Key

In the face of this deepfake threat, vigilance must be stepped up:

  1. Second-Channel Verification: If you receive an urgent fund transfer request via phone (voice) or video call, always verify the request through a second communication channel, for example by sending a text to the person's real mobile number or asking a security question only they would know (see the sketch after this list).

  2. Strengthen Biometric Authentication: Use strong passwords or PINs in addition to biometrics (fingerprint/face), as deepfake video has the potential to fool weak facial recognition systems.

  3. Employee Awareness: Companies must provide specialized training on how to identify and respond to deepfake voice and video threats.
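
As a concrete illustration of second-channel verification, the sketch below (in Python) generates a one-time code, sends it over a channel other than the one the request arrived on (the delivery step is a stub), and approves the action only if the requester can repeat the code. The function names, code length, and contact details are illustrative assumptions.

    # Minimal sketch of out-of-band (second channel) verification for a sensitive request.
    # The delivery step is a stub; names and parameters are illustrative assumptions.
    import secrets

    def generate_code(length: int = 6) -> str:
        """Unpredictable numeric one-time code."""
        return "".join(secrets.choice("0123456789") for _ in range(length))

    def send_via_second_channel(contact_on_file: str, code: str) -> None:
        """Stub: in practice an SMS, chat message, or a call you place yourself
        to a number already on file (never the number that just called you)."""
        print(f"[second channel -> {contact_on_file}] verification code: {code}")

    def code_matches(code_read_back: str, expected: str) -> bool:
        """Approve only if the requester can repeat the code sent out of band."""
        return secrets.compare_digest(code_read_back, expected)

    # Example flow for an "urgent transfer" requested over a voice call
    expected = generate_code()
    send_via_second_channel("number on file for the requester", expected)
    read_back = expected  # in reality, typed in or read back by the requester
    print("Approved" if code_matches(read_back, expected) else "Rejected")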
