
Deepfakes are a threat, but here are some effective tactics to combat them
By VELSICURO
22 September 2025
Tips & Tricks


Artificial Intelligence (AI) has become an integral part of our lives, assisting with various tasks, from the office to the home. However, along with this progress comes a new threat: the misuse of AI for criminal activities. AI-based fraud has claimed many victims, utilising deepfake technology to deceive in ways that are almost impossible to distinguish from reality.

Deputy Minister of Communication and Digital Affairs, Nezar Patria, has warned the public to be extremely vigilant. According to him, AI-generated videos and photos are now so realistic that even experts can be fooled.

  • Sophisticated Scam Methods: From Fake Evidence to Imitation Voices

One scam method that utilises AI is the falsification of bank transfer evidence. Nezar Patria explained that scammers can quickly create fake transfer evidence, even imitating the hologram on the back, to convince victims that they have received the money.

In addition, there are also scams that use AI-generated voices. The OJK (Otoritas Jasa Keuangan, Indonesia's Financial Services Authority) has warned the public about this method. In this scheme, scammers use deepfake technology to imitate the voices of people you know—such as family, friends, or relatives—over the phone. These familiar voices make victims let their guard down and unwittingly follow the scammer's instructions.

  • Legal Protection and Challenges

The government is not standing idly by. The Ministry of Communication and Digital Affairs has issued a Circular Letter on Artificial Intelligence Ethics and is coordinating with the OJK and Bank Indonesia to prevent losses. Several relevant laws, such as the ITE Law, PDP Law, Criminal Code, and Copyright Law, are also being used to combat this crime.

However, Nezar Patria realises that regulations often lag behind technological developments. AI crime methods evolve much faster than the regulations that are created. Currently, the government is developing a roadmap to ensure that AI is used positively and its negative risks can be mitigated.

  • How to Avoid AI Voice Traps 

AI voice fraud is very dangerous because it relies on emotional factors. If you receive a suspicious call, even if the voice sounds very familiar, take the following steps:

  1. Manual Verification: Do not immediately trust the caller. End the call and contact the person concerned using a telephone number you already have or another trusted communication channel.
  2. Be wary of urgent requests: Scammers often create emergency situations. Think twice if there is an urgent request to transfer money or share personal data.
  3. Pay attention to details: Even if the voice sounds familiar, pay attention to small details such as intonation, tone of voice, or unusual conversation patterns from someone you know.
  • AI Video Scams: When Fake Faces and Voices Become Weapons of Fraud

Digital fraud has reached alarming levels with the use of artificial intelligence (AI)-based videos. Recently, a case of fraud involving video calls that mimic celebrities' faces went viral. Victims received calls from unknown numbers showing the celebrity's face and voice, promising prizes worth millions of rupiah.

This sophisticated imitation, known as deepfake, is created by analysing numerous videos and online content of a person. AI algorithms then create a highly similar and convincing imitation. Fraudsters use deepfakes to impersonate people known or trusted by potential victims, with the aim of extorting money or stealing personal data. Because the faces and voices are familiar, victims tend not to be suspicious and immediately comply with their requests.

  • Tips for Safely Avoiding AI Scams

To protect yourself from these sophisticated scams, follow these important steps:

  1. Limit Personal Information on Social Media. Do not carelessly share personal data on social media. Even if the information seems harmless, cybercriminals can use it to create convincing deepfakes and target you.
  2. Be Wary of Calls from Unknown Numbers. If you are not expecting a call or message from a specific person, do not answer it immediately. If you have already answered the call and the person claims to be someone you know, immediately confirm their identity through a trusted communication channel, such as a WhatsApp group.
  3. Ask Unique Questions. If you have already answered the call and feel suspicious, ask specific questions that only you and that person know the answers to. If the answers sound strange, end the call immediately.
  4. Reject Suspicious Requests. Be wary of urgent requests, such as money transfers, with the lure of gifts or tempting offers. Sometimes, scammers will also use emergency scenarios to make you panic. Do not hesitate to refuse and immediately cut off communication, even if they claim to be family, close friends, or public figures you know.
  5. Pay Attention to Voice and Visual Details. Although deepfakes are very sophisticated, sometimes there are unusual pauses or distortions in the voice and video. Pay attention to the rhythm and speed of speech. If you feel something is strange, hang up immediately. In videos, look for characteristics such as facial movements that are out of sync with the voice, unnatural eye contact, or inconsistent visual quality.
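The advice above is behavioural, but the red-flag checks can be sketched as a tiny decision helper. Everything below (the flag names and the `assess_call` function) is a hypothetical illustration of the article's checklist, not an actual detection tool:

```python
# Hypothetical sketch: encode the article's red flags as a simple lookup
# and map any observed flags to a recommended action.
RED_FLAGS = {
    "unknown_number": "Call arrived from a number you do not recognise",
    "urgent_money_request": "Caller pressures you to transfer money or share data now",
    "failed_unique_question": "Caller could not answer a question only the real person knows",
    "audio_video_glitches": "Unnatural pauses, distorted audio, or out-of-sync lip movement",
}

def assess_call(observed_flags):
    """Return (action, reasons): 'hang_up', 'verify_independently', or 'proceed'."""
    reasons = [RED_FLAGS[f] for f in observed_flags if f in RED_FLAGS]
    # Failing a unique question or demanding urgent money are decisive red flags.
    if "failed_unique_question" in observed_flags or "urgent_money_request" in observed_flags:
        return "hang_up", reasons
    # Any other red flag: end the call and confirm via a trusted channel.
    if reasons:
        return "verify_independently", reasons
    return "proceed", reasons

action, why = assess_call(["unknown_number", "urgent_money_request"])
print(action)  # hang_up
```

The point of the sketch is the ordering: decisive red flags (a failed verification question, an urgent money request) always end the call, while weaker signals only trigger independent verification through a channel you already trust.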

Source: komdigi.go.id
