
Last week, my wife sent me a pretty shocking video interview with Memphis Grizzlies star Ja Morant.
In what appeared to be a post-game interview with the media after the Grizzlies’ loss to OKC, Morant made some outrageous comments. He said that Thunder star Shai Gilgeous-Alexander was not MVP-worthy, that the Grizzlies had no chance in the series and that NBA Commissioner Adam Silver had taken away his motivation.
It was such an eye-opening interview that I forwarded it to my friends Ed, Steve and Casey.
Trouble is, it was a totally fake video created with artificial intelligence. Ed tipped me off because he immediately started searching for articles about what Ja had said and found none. Zero.
But I fell for it, and I hang my naive head in shame.
I invite you to click on the video below and tell me you could have spotted it as a fake, aside from Ja’s outrageous comments. Ja’s voice and the post-game setting are both realistic.
I find it disturbing that this fake video was so well done, for what it portends about the future. It is part of a growing phenomenon commonly known as deep fakes.
The Ja video was pretty harmless, but what if someone created a fake video of a presidential candidate saying incredibly racist things that he or she would never utter?
We can see where this is headed. I’ve read that such videos already exist.
Want an example? The video in this link isn’t exactly a political deep fake, but it’s one that made the rounds shortly after the death of Pope Francis. It was created around his meeting with Vice President J.D. Vance the day before he died. The creators took a historic meeting and turned it into an attempt at humor that is pretty disrespectful.

I decided to seek an expert’s opinion and reached out to John Hassell, Ph.D., an Associate Professor of Software Development and Integration at the University of Oklahoma Polytechnic Institute in Tulsa. If you are not familiar, the Polytechnic Institute is OU’s newest campus, and it offers a host of technology degree tracks, including Artificial Intelligence.
Dr. Hassell has incorporated AI into software development for the past couple of years, and he shared his thoughts on the subject with me in a 2024 blog post.
When I asked him about the Ja Morant deep fake, he immediately put me in touch with Colin Torbett, an OUPI student who holds a master’s degree in data science but is now pursuing an undergraduate degree in cybersecurity.
“Colin is actually doing research on that very topic now,” Dr. Hassell said.
So, Colin connected with me and shared some thoughts on the emerging flood of AI-generated deep fakes.
“These fake videos (but also images and audio recordings) are called ‘deep fakes’ because they use an AI technique called ‘deep learning’ to create a fabricated digital artifact,” Colin told me. “They are indeed pervasive, and I see new ones appear every day on social media, though I typically find ones that are humorous, benign and easy to spot as fake. On the other hand, some are more nefarious and easily pass as real at first glance.”
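For the curious, the classic face-swap technique that gave deep fakes their name uses one shared encoder and a separate decoder per person: encode a frame of person A, then decode it with person B’s decoder. Here is a minimal sketch of that idea in PyTorch; the layer sizes and names are my own simplifications for illustration, not code from Colin’s research:

```python
# Minimal sketch of the classic deep-fake ("face-swap") autoencoder idea:
# one shared encoder learns a common face representation; each person gets
# their own decoder. Swapping = encode person A, decode with person B's decoder.
import torch
import torch.nn as nn

LATENT = 128  # size of the shared face representation (arbitrary choice)

def make_encoder():
    # Flattens a 64x64 grayscale face and compresses it to a latent vector.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 512), nn.ReLU(),
                         nn.Linear(512, LATENT))

def make_decoder():
    # Expands a latent vector back into a 64x64 face image.
    return nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                         nn.Linear(512, 64 * 64), nn.Sigmoid())

encoder = make_encoder()    # shared by both identities
decoder_a = make_decoder()  # would be trained only on person A's faces
decoder_b = make_decoder()  # would be trained only on person B's faces

# After training, the "swap": take a frame of person A...
frame_of_a = torch.rand(1, 1, 64, 64)  # stand-in for a real video frame
latent = encoder(frame_of_a)
# ...and reconstruct it through person B's decoder, yielding B's face
# wearing A's expression and pose.
fake_frame = decoder_b(latent).reshape(1, 1, 64, 64)
print(fake_frame.shape)  # torch.Size([1, 1, 64, 64])
```

Real deep-fake systems are far larger and train on thousands of frames, but the swap trick is the same: a shared representation of a face, routed through the wrong person’s decoder.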

Colin described a deep fake he recently saw that claimed to be an interview with a young woman about her dating preferences. Apparently, it was a well-done video, but the person holding the microphone for the interview had six fingers. Dead giveaway.
“It does illustrate how pernicious deep fakes can be, and how easily duped anyone can be,” he said. “This can only become a bigger concern for politics, especially with all the chaos of the last decade. Obviously, fake video or audio of speeches would be detrimental, if not fatal, to a political career, and it would sow discord among voters and the general public.”
My question: how can these deep fakes be detected more easily, and even stopped, before they reach widespread distribution?
“While there is no immediate antidote to the problem, I am confident that cybersecurity researchers and computer scientists will create digital watermarks and signatures that validate any digital piece of information (video, audio, document, email, etc.) as authentic,” Colin said. “The digital infrastructure and software for these solutions are still in their infancy, being developed by startups and university researchers. It might take 5-10 years for this technology to be refined and widely adopted.”
Wait. Five to 10 years for a real solution? The bad guys are going to have quite a head start.
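The core signing math Colin describes already exists, for what it’s worth. Here is a minimal sketch of how a publisher-signs, viewer-verifies scheme could work, using real Ed25519 functions from Python’s `cryptography` package; the workflow and names around those calls are my own illustration, not Colin’s design:

```python
# Minimal sketch of signature-based authentication for media files,
# using the Python `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher (say, a team's media department) holds a private key
# and publishes the matching public key for everyone to use.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the original interview video..."

# The publisher signs the video and distributes the file plus signature.
signature = private_key.sign(video_bytes)

# A viewer's app checks the file against the publisher's public key.
def looks_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

print(looks_authentic(video_bytes, signature))                # True
print(looks_authentic(video_bytes + b"tampered", signature))  # False
```

The hard part, as Colin suggests, isn’t the cryptography. It’s the infrastructure: getting cameras and platforms to sign content at the source, distributing keys people can trust, and making verification automatic so everyday viewers never have to think about it.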
“In the meantime, my only advice (unsolicited, I admit) is to take everything online with a few grains of salt, especially if it confirms something one already believes,” Colin said. “It’s easy to dismiss something if it runs contrary to a belief about the world, but being skeptical about information that affirms a deeply held worldview is an effective antidote to confirmation bias and to the deep political entrenchment we see reinforced by social media today.”
Colin has worked for technology-based firms for about a decade, beginning with an internship at OKC’s Spiers New Technologies in 2015. He gained his interest in deep fakes and AI while in graduate school, earning his master’s in 2017.
“Since then, deep fakes have exploded and are becoming a serious concern,” he said. “My interest revolves around helping to create a novel solution for a pervasive problem that affects everyone. What I really want to do is what every good engineer wants: to use my skills and science to solve complex problems for the world.”
I hope that one day Colin Torbett leads his own high tech company that creates antidotes to deep fakes and will keep videos like the Ja Morant interview out of my timeline.
Then I won’t get fooled again. Maybe.
BONUS CONTENT: My friend Don Mecoy shared a video with me that provides a deep dive into deep fakes, how they are evolving and the threats they pose to society. Watch the video below:
DOUBLE BONUS CONTENT UPDATE:
Concerned that AI is coming after your job? Read what my friend Dr. John Hassell at the University of Oklahoma Polytechnic Institute has to say on the subject. Spoiler alert: It’s not likely!
https://www.news9.com/story/68c731020ebc3adec64fbb37/is-ai-coming-for-your-job-ou-professor-weighs-in-on-widespread-fear