Deepfake technology has emerged as the latest tool bad actors leverage to conduct their cyber scams. Cybercriminals have used many different methods for these scams over the years, but if you knew what to look for, you could spot the red flags indicating that someone is trying to trick you into turning over money and valuable, sensitive information.
The problem with deepfake technology is that it obscures those red flags, adding a frightening level of realism to these cyber scams. It can also be used to make a person's likeness and voice appear in explicit content. Regardless of age, anyone could have their image abused by deepfake technology.
Deepfake technology is still in its infancy, but that doesn't mean you cannot take action to protect yourself and your loved ones. This client guide will explain how deepfake technology works, and what you should do if you encounter it online. This guide will also specifically cover what to do in the event deepfake technology is used in a malicious manner against minors.
What is Deepfake Technology?
Deepfake technology is used to impersonate an individual, with the goal of convincing another person that they are interacting with a legitimate human being. This can be done by imitating either the person's voice or their likeness.
To do this, a bad actor needs to collect a large amount of data on the person they wish to impersonate, such as images, videos, and audio recordings. Once they have the data set, the bad actor uses machine learning and artificial intelligence to train a model that mimics the person's vocal patterns and facial expressions.
Bad actors want to collect as much information as possible on the person they are impersonating, as the more data they have at their disposal, the more realistic the imitation will be. This can be incredibly troubling when bad actors want to impersonate a public figure. For example, if a deepfake model is trained on hundreds of audio recordings of a particular individual, it will be able to reproduce that person's speaking patterns and cadences.
Bad actors can then tell the model to say whatever they want, and the model will say it in the voice of the person it was trained on. The same approach can be used to analyze facial expressions, which bad actors can exploit to create fake videos.
How Do Bad Actors Use Deepfake Technology?
- Impersonation of Another Person to Conduct Cyber Scams: Bad actors could create voice models to impersonate an individual, then contact a victim and try to convince them to hand over personal information or money.
- One scenario involves bad actors creating a deepfake of a person's grandchild. The bad actors contact the grandparents and, using the voice model, claim the grandchild is in legal trouble and needs money for bail. Since the voice on the other end of the line sounds like their grandchild, the grandparents may not realize the person they are speaking to isn't real.
- In another scenario, the bad actors impersonate an executive. They use the voice model to contact employees within the company and ask them to send over a large sum of money. The employee hears the voice of the executive and follows through with the request, again not knowing that they have been duped.
- Overlaying Facial Images on Explicit Content: Bad actors could take the facial image of a person and edit it onto explicit, pornographic content. Unfortunately, there have been examples of this happening to different groups of people, ranging from famous celebrities to minors.
- Manipulating Audio/Video to Harm a Person's Reputation: Bad actors may create a fake video in which they manipulate both the video and audio to make it seem as though a person is disparaging another individual or a company, or, in an extreme example, making offensive statements. This can be done either to harm a person's reputation or simply to embarrass them.
How to Handle Deepfake Scams
- Validate Any Requests You Receive: If you are contacted by someone you know who claims to be in danger or who requests information or money, stay calm and take the time to validate the request. If you can talk to them in person, find them and ask whether they just tried to contact you. If they reach you by phone, check the number to see whether you recognize it. If you don't, call them back using their actual number to see if anything is wrong.
- Consider a Code Word: Another way to validate requests is to have a code word you and others know to ensure you are speaking to a real person. Thus, if you get an odd request, you can say the code word, and if the person responding doesn’t know it, you’ll know you are talking to an imposter.
- Avoid Rash Decisions: A good practice is to treat these communications the same way you'd approach a potential phishing message. Do not make any rash decisions, and be wary of any communications that use pressured language or convey a sense of urgency.
- Limit What You Share Online: As noted above, deepfake technology requires bad actors to gather images and voice recordings to create their models. By limiting what you share online, you can make it harder for bad actors to create a deepfake model based on you or your loved ones.
- Contact Law Enforcement: In situations where deepfake technology may be used to trick you into thinking a loved one is in danger and needs money, contact the proper authorities. Ask law enforcement whether they can assist in validating whether deepfake technology is involved and help craft the proper response. Do not continue to contact the potential fraudsters without consulting the authorities.
What to Do When Deepfake Technology Involves Minors
Deepfake technology can be used for a number of nefarious purposes, and that unfortunately includes targeting minors. Bad actors can create deepfake models and manipulate a minor's image and voice to appear in explicit content.
This is a particular problem given the prevalence of social media and the large number of children under the age of 18 who appear on these platforms. Bad actors can create explicit videos using deepfake technology, then hack and take over a social media account belonging to the minor. They could then send the explicit content to the minor's contacts through the platform.
Should you encounter this scenario with one of your children, please take the following steps as soon as possible:
- Document Everything: Take screenshots of the deepfake content, the hacked account and any relevant communications. Save any URLs and any other content you may believe to be relevant.
- Contact the Platform: Get in touch with the social media platform and report the hacked account and deepfake content immediately. Social media platforms have protocols in place to handle such incidents. You can also submit takedown requests to the platform, and you can consult StopNCII for assistance.
- Contact Law Enforcement: File a report with local law enforcement. This step is crucial to ensure any legal action can take place, and it is a major reason why documenting all aspects of the incident, as noted above, is so important.
- Consider Legal Assistance: Contact a privacy or child exploitation attorney to consider legal actions, such as a cease-and-desist letter or a subpoena.
- Report the Incident: In addition, report the incident to the National Center for Missing and Exploited Children's CyberTipline and the FBI's Internet Crime Complaint Center.
- Secure Accounts: To prevent bad actors from commandeering the account in the first place, be sure to create strong, unique passwords and implement multifactor authentication when available.
- Stay Vigilant: Set up alerts and consider monitoring services to detect if deepfakes resurface.
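For the account-hardening step above, one practical way to create a strong, unique password for each account is to generate it programmatically rather than invent it yourself. The following is a minimal sketch using Python's standard `secrets` module; the 16-character length and character-class requirements are illustrative assumptions, not a universal standard.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation.

    The length of 16 is an illustrative choice; many sites accept longer.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep regenerating until the password contains at least one
        # lowercase letter, one uppercase letter, and one digit.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)):
            return password

print(generate_password())
```

In practice, a reputable password manager does this for you and also stores the result, which makes it easy to use a different password for every account, so a breach of one platform does not expose the others.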