Recognizing Deepfakes to Improve Cybersecurity

Laura M. Cascella, MA, CPHRM

In today’s connected world, cyberattacks are relentless but not unexpected. Almost everyone has been targeted at some point, from individuals to governments. Healthcare organizations, in particular, sit in the crosshairs because of the volume of valuable and sensitive data they maintain, such as protected health information, personal information, financial data, and intellectual property.1

When attacks are successful, they can lead to upheaval and chaos, patient and worker harm, loss of reputation, financial consequences, and more. Thus, building a strong security culture has become paramount for healthcare organizations, from small practices to large health systems. A crucial component of a solid security culture is an educated and aware workforce, as workers often are the frontline defense for preventing cyberattacks.

Helping workers understand and actively recognize cyberthreats has always been challenging, but artificial intelligence (AI) introduces a level of complexity not previously encountered. AI has made cyberattacks, particularly those involving social engineering, much more sophisticated and difficult to identify. Social engineering “uses psychological manipulation to trick users into making security mistakes or giving away sensitive information.”2 Phishing is an example of a common type of social engineering attack.

As AI evolves, healthcare organizations and workers need to be aware of another potential threat — deepfakes. Deepfakes refer to images, videos, and audio that have been generated or manipulated using AI. The term “deepfake” comes from the deep learning technology used to create the fake, or synthetic, media.3

The purpose of deepfakes often is to deceive or trick people into believing something that isn’t true or didn’t happen. For example, deepfakes might involve using AI to:

  • Generate or manipulate images in a way that is deceptive or misleading.
  • Replace faces in videos to give the appearance that someone said or did something that they did not.
  • Sync audio from one source with video from a different context to make it appear that someone said something that they did not.
  • Clone a person’s image or voice in an effort to trick others into taking unsafe actions or revealing proprietary information.
  • Manipulate an individual’s movements within a video to show them taking actions that did not occur.4

Unfortunately, recognizing deepfakes is not a simple task. It requires greater attention to detail, more scrutiny, and a higher level of critical thinking than identifying less-sophisticated cyberattacks. Although some healthcare workers are well versed in identifying elements of more traditional cyberthreats — such as odd email addresses, typos, and suspicious links — they might be unprepared to question images, videos, or audio that look and sound authentic.

Combating deepfakes requires awareness and vigilance. Although strategies will evolve as AI becomes more complex, healthcare organizations can take action now to improve worker knowledge and reinforce a strong security culture. Some strategies that may help include the following:

  • Implement a comprehensive cybersecurity program that includes education and training related to various types of cyberattacks. Using a range of training formats and activities — such as online learning, workshops, role-playing, etc. — can help keep individuals engaged.
  • Make healthcare executives, managers, providers, staff members, and volunteers aware of deepfakes, including what they are, how they are used, and their potential threat. Examples from popular media might prove helpful as an educational tool.
  • Use synthetic media as part of training to help staff members learn to identify suspicious images, videos, and audio recordings. The Detect Fakes research project can help determine how well individuals can distinguish synthetic media from real media and may improve workers’ ability to spot fakes.
  • Train the workforce in current best practices related to identifying deepfakes, such as:
    • Paying close attention to facial features to identify aspects that appear manipulated or transformed, such as lack of definition; overly smooth or wrinkled skin; skin discoloration; age-related differences between skin, hair, and eyes; or abnormal facial hair. TechTarget notes that “In many cases, there are inconsistencies within a person’s human likeness that AI cannot overcome.”5
    • Watching for inconsistent or unnatural eye blinking (either too little or too much).
    • Being wary of inconsistent or incongruent reflections or shadows that don’t align with natural physics.
    • Looking for audio that is out of sync with lip movements, and listening for artificial audio noises.
    • Checking for abnormal or unnatural blurring or boundaries between individuals and backgrounds (a minimal programmatic illustration of this cue appears after this list).
  • Educate the workforce about responsible sharing of media within the organization and how to report potential deepfakes.
  • Encourage workers to fact check information and verify the authenticity of images, videos, and audio — particularly when a request is made for the worker to take action or perform a task.
  • Advise workers to go directly to the source of a request if they are concerned it might be related to a deepfake (e.g., directly contacting a person within the organization if they receive a suspicious audio or video message from that individual).6
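
Some of the manual cues above can be partially automated as a first-pass screen. The following minimal sketch, written in Python with OpenCV, flags video frames whose sharpness changes abruptly from one frame to the next — one crude proxy for the “abnormal blurring” cue. The jump_threshold value and file name are illustrative assumptions, not validated settings, and flagged frames should prompt human review rather than serve as a verdict.

```python
# Minimal illustration (not a production detector): flag video frames whose
# sharpness jumps sharply relative to the previous frame, a crude proxy for
# the "abnormal blurring or boundaries" cue. Requires OpenCV
# (pip install opencv-python). The threshold is an arbitrary placeholder.
import cv2

def frame_sharpness(frame) -> float:
    """Variance of the Laplacian: a common, simple sharpness measure."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def flag_suspicious_frames(video_path: str, jump_threshold: float = 50.0):
    """Return indexes of frames where sharpness jumps abruptly.

    Sudden sharpness swings can accompany spliced or synthesized segments,
    though they also occur in ordinary footage -- treat hits as prompts
    for closer human inspection, not as proof of manipulation.
    """
    cap = cv2.VideoCapture(video_path)
    flagged, prev, i = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        s = frame_sharpness(frame)
        if prev is not None and abs(s - prev) > jump_threshold:
            flagged.append(i)
        prev, i = s, i + 1
    cap.release()
    return flagged

if __name__ == "__main__":
    # "suspect_clip.mp4" is a hypothetical file name for illustration.
    print(flag_suspicious_frames("suspect_clip.mp4"))
```

A fuller screen would combine several cues — blink cadence, audio-lip synchronization, lighting consistency — but even a single-heuristic script like this can make training exercises more concrete.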

Healthcare organizations also might want to consider using AI deepfake detection software to help identify and mitigate risks associated with deepfakes. Doing so might become more beneficial as AI evolves and deepfakes become more difficult and time-consuming to detect. A rough sketch of how such screening could slot into a media-intake workflow appears below.
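
As one hedged illustration of what such tooling might look like in practice, the sketch below routes an inbound media file through a detection service before staff act on it. The endpoint URL, authentication scheme, and synthetic_probability response field are all hypothetical placeholders; real vendor APIs differ and should be consulted directly.

```python
# Hypothetical sketch of routing inbound media through a detection service
# before it reaches staff. The endpoint, API key handling, response field,
# and threshold below are placeholders -- actual vendor APIs vary.
import requests

DETECTION_ENDPOINT = "https://detector.example.internal/v1/analyze"  # placeholder

def screen_media(file_path: str, api_key: str) -> bool:
    """Return True if the (hypothetical) service scores the file as likely synthetic."""
    with open(file_path, "rb") as f:
        resp = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},
            timeout=30,
        )
    resp.raise_for_status()
    score = resp.json().get("synthetic_probability", 0.0)  # assumed field name
    return score >= 0.8  # arbitrary review threshold, tuned per organization
```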

To learn more about deepfakes and other issues related to AI in healthcare, see the following resources:

Endnotes


1 Riggi, J. (n.d.). The importance of cybersecurity in protecting patient safety. AHA Center for Health Innovation. Retrieved from www.aha.org/center/cybersecurity-and-risk-advisory-services/importance-cybersecurity-protecting-patient-safety

2 Carnegie Mellon University. (n.d.). What is social engineering? Retrieved from www.cmu.edu/iso/aware/dont-take-the-bait/social-engineering.html

3 Department of Homeland Security. (n.d.). Increasing threat of deepfake identities. Retrieved from www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf

4 Ibid.; U.S. Department of Health and Human Services, Office for Civil Rights. (2024, October 25). Social engineering: Searching for your weakest link. Cybersecurity Newsletter. Retrieved from www.hhs.gov/hipaa/for-professionals/security/guidance/cybersecurity-newsletter-october-2024/index.html

5 Froehlich, A. (2024, May 7). How to detect deepfakes manually and using AI. TechTarget. Retrieved from www.techtarget.com/searchsecurity/tip/How-to-detect-deepfakes-manually-and-using-AI

6 U.S. Department of Health and Human Services, Office for Civil Rights, Social engineering: Searching for your weakest link; Department of Homeland Security, Increasing threat of deepfake identities; Illinois State University. (2024, July 15 [last updated]). Determine credibility (evaluating): What are deepfakes? Retrieved from https://guides.library.illinoisstate.edu/evaluating/deepfakes