A reputable doctor appears to endorse a health supplement. A celebrity appears in a video she never shot. A politician seemingly instructs viewers to divulge their bank account details.
These are examples of the deepfakes circulating on the internet — video content that can lead to serious consequences.
Deepfakes are synthetic videos created with AI technology to make real people or animals appear to say or do things they have never said or done in real life. They are becoming more realistic and more widespread than ever.
The proliferation of deepfakes raises concerns about their potential effects, from the spread of disinformation to the growing ease of impersonation and identity theft.
But how do people respond to this new type of online content? Do fabricated videos evoke the same reactions as real footage?
Do they lead viewers to heightened skepticism or down a rabbit hole of gullibility? And how likely are viewers to share deepfake content after they’ve stumbled upon it?
These are the questions UT San Antonio Communications Professor Seok Kang aims to answer in a recently published study, and his findings will shape future coursework on ethics, information credibility, and news literacy in UT San Antonio’s new journalism program.
Working with co-author Kayla Valadez ’25, Kang conducted a comprehensive meta-analysis covering 24 experimental studies with more than 20,000 participants across 10 countries. The findings identified several key factors that affect how audiences respond to deepfakes.
The study suggests that while deepfakes are becoming increasingly sophisticated, they are not always successful in deceiving the public. But they do succeed in triggering powerful emotional responses.
Emotional hook
The primary power of a deepfake lies not necessarily in its ability to fool the eye, but in its ability to stir the heart, according to the study, “Deepfakes’ Cognitive, Emotional, and Behavioral Impact.”
Kang found that, compared to traditional videos or text, deepfakes consistently heighten emotional responses, creating a sense of immersion that can be used for both manipulative and beneficial purposes.
“The emotional immersion of a manipulated character generated by AI can create strong engagement,” Kang said. “For example, when a known celebrity or a resurrected historical figure talks about a burning issue, people feel a presence — a telepresence — that leads them to a deeper understanding.”
This emotional connection creates a “double-edged sword.”
On one side, educators and health campaigners can use positive deepfakes to bring history to life or deliver public service announcements that resonate deeply with viewers. In these “pro-social” contexts, audiences often recognize that the content is artificial but are receptive because of the value it provides.
“As long as the content is educational and pro-social, people will accept it and don’t mind that it’s a deepfake,” Kang noted.
Shield of media literacy
Heightened engagement with such content also presents a significant challenge, according to the study. Malicious actors are known to use that same emotional leverage to spread disinformation or scams.
Kang’s research identifies what he describes as a crucial defense mechanism: media literacy.
The meta-analysis reveals that when deepfakes are used for negative or ambiguous purposes, such as a politician making false claims or a celebrity endorsing a scam, media literacy is the best line of defense.
Study participants who were equipped with digital literacy skills were significantly more likely to rate deepfakes as having “low credibility,” and were less likely to share the content.
“When prebunking or debunking comes into play, media literacy plays a critical role,” Kang said. “People who perceive videos as ‘low credibility’ are less likely to be manipulated and to share the content with others.”
Kang noted that content moderators have not been able to keep up with nefarious content creators or provide effective guardrails for users. Deepfakes are too prevalent, the volume of content is too vast, and the barrier to entry for content creation is too low.
“Anyone can make this type of content in a matter of minutes, so the most viable solution is for content consumers to be informed and become critical audiences,” Kang said.
He advocates for comprehensive media literacy education that teaches users to analyze content on multiple levels so they can detect synthetic footage and spot potential scams, fake news, propaganda, and other malicious content.
Super-aging society
Kang warns that as generative AI tools lower the barrier to creating video content, vulnerable populations may be at higher risk.
He points to the “super-aging society” in the United States, noting that older adults are increasingly consuming content on platforms like YouTube, where AI-generated videos are prevalent.
“Over 70% of YouTube videos use AI technology somewhere in the production workflow, and fully AI-generated videos are increasing dramatically,” Kang said. “We are in a super-aging society, and more and more older adults are exposed to deepfakes because they frequently watch online videos on platforms such as YouTube.”
Moving forward, Kang suggests that researchers look beyond “sharing intention” and study behavioral impacts, such as how deepfakes could influence voting behavior, social activism, or financial decisions.
AI in the newsroom
In 2025, Kang led the launch of UT San Antonio’s new journalism program. Through the coursework, Kang plans to teach students to ethically engage in human-AI collaboration.
“Generative AI is already here in the newsroom,” Kang said. “Today’s journalists are encouraged to use it. But at the same time, they must remain vigilant about accuracy and fact-checking. It is a very important part of the curriculum because journalism strives for ethics and impartiality.”
In addition to vetting their own content rigorously, Kang urges journalists to educate others on how to distinguish fact from fiction.
“Journalists are playing a pivotal role in society in sorting out what is real and what is accurate,” he said. “They should be leaders and pioneers in AI literacy.”