Be careful out there; it’s a deep fake future

Screenshot taken from deep fake video of Ja Morant post-game interview

My wife sent me a video interview she saw last week with Memphis Grizzlies star Ja Morant that was pretty shocking.

In what appeared to be a post-game interview with the media after the Grizzlies’ loss to OKC, Morant made some outrageous comments. He said that Thunder star Shai Gilgeous-Alexander was not MVP-worthy, that the Grizzlies had no chance in the series and that NBA Commissioner Adam Silver had taken away his motivation.

It was such an eye-opening interview that I forwarded it to my friends Ed, Steve and Casey.

Trouble is, it was a totally fake video created with artificial intelligence. Ed tipped me off because he immediately started searching for articles about what Ja had said and found none. Zero.

But I fell for it, and I hang my naive head in shame.

I invite you to click on the video below and tell me that you could spot it as a fake, aside from Ja’s outrageous comments.  Ja’s voice and the post-game setting are both realistic.

I find the fact that this fake video was so well done to be disturbing for what it portends about the future. It is part of a growing phenomenon commonly known as deep fakes.

The Ja video was pretty harmless, but what if someone created a fake video of a presidential candidate saying incredibly racist things that he or she would never utter?

We can see where this is headed. I’ve read that such videos already exist.

Want an example? The video in this link isn’t exactly a political deep fake, but it made the rounds shortly after the death of Pope Francis. It was created around his meeting with Vice President J.D. Vance the day before the pope died. The creators took a historic meeting and turned it into an attempt at humor that is pretty disrespectful.

Screenshot from Threads of deep fake video of Pope Francis taking a swing at VP JD Vance

I decided to seek an expert’s opinion and reached out to John Hassell, Ph.D., an Associate Professor of Software Development and Integration at the University of Oklahoma Polytechnic Institute in Tulsa. If you are not familiar, the Polytechnic Institute is OU’s newest campus that offers a host of technology degree tracks, including Artificial Intelligence.

Dr. Hassell has incorporated AI into software development for the past couple of years and shared his thoughts on the subject with me in a 2024 blog post.

When I asked him about the Ja Morant deep fake, he immediately put me in touch with Colin Torbett, an OUPI student who possesses a master’s degree in data science but is now pursuing another undergraduate degree in cyber security.

“Colin is actually doing research on that very topic now,” Dr. Hassell said.

So, Colin connected with me and shared some thoughts on the emerging flood of AI-generated deep fakes.

“These fake videos (but also images and audio recordings) are called ‘deep fakes’ because they use an AI technique called ‘deep learning’ in order to create a fabricated, digital artifact,” Colin told me. “They are indeed pervasive and I see new ones appear every day on social media, though I typically find ones which are humorous, benign and easy to spot as fake. On the other hand, some are more nefarious and easily pass as real at first glance.”

A cyber security major at OU’s Polytechnic Institute campus in Tulsa, Colin Torbett is researching the deep fake phenomenon

Colin described a deep fake he recently saw that claimed to be an interview with a young woman on her dating preferences. Apparently, it was a well done video, but the person holding the microphone for the interview had six fingers. Dead giveaway.

“It does illustrate how pernicious deep fakes can be — and how easily duped anyone can be,” he said. “This will only become more of a concern in politics, especially with all the chaos of the last decade. Obviously, fake video or audio of speeches would be detrimental — if not fatal — for a political career, and would sow discord among voters and the general public.”

My question: how can these deep fakes be more easily detected and even stopped before they are in widespread distribution?

“While there is no immediate antidote to the problem, I am confident that cybersecurity researchers and computer scientists will create digital watermarks and signatures which validate any digital piece of information (video, audio, document, email, etc.) as authentic,” Colin said. “The digital infrastructure and software for these solutions are still in their infancy, being developed by startups and university researchers. It might take 5-10 years for this technology to be refined and widely adopted.”
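Colin didn’t get into implementation details, but the core idea behind the signature half of what he describes is old, well-understood cryptography: a publisher signs a file’s bytes with a private key, and anyone with the matching public key can check that the file hasn’t been altered since. Here’s a minimal sketch in Python using the cryptography library; the file name and keys are made up for illustration, and real provenance efforts (such as the C2PA content-credentials standard) layer a lot more on top of this.

```python
# Minimal sketch of signing and verifying a media file, assuming a made-up
# file name and locally generated keys. Real provenance systems also have to
# distribute keys and survive re-encoding, which this toy example ignores.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair and sign the raw bytes of the video.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("postgame_interview.mp4", "rb") as f:   # hypothetical file
    original_bytes = f.read()
signature = private_key.sign(original_bytes)

# Viewer side: verify a downloaded copy against the published signature.
with open("postgame_interview.mp4", "rb") as f:
    downloaded_bytes = f.read()

try:
    public_key.verify(signature, downloaded_bytes)
    print("File matches what the publisher signed.")
except InvalidSignature:
    print("Warning: file was altered or didn't come from this publisher.")
```

The math isn’t the hard part; getting cameras, editing tools and social platforms to carry those signatures along is, which is why Colin expects adoption to take years.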

Wait. Five to 10 years for a real solution? The bad guys are going to have quite a head start.

“In the meantime, my only advice (unsolicited, I admit) is to take everything online with a few grains of salt — especially if it confirms something one already believes,” Colin said. “It’s easy to dismiss something if it runs contrary to a belief about the world, but being skeptical about information that affirms a deeply held worldview is an effective antidote to confirmation bias and the deep political entrenchment we see reinforced by social media today.”

Colin has worked for technology-based firms for about a decade, beginning with an internship at OKC’s Spiers New Technologies in 2015. He gained his interest in deepfakes and AI while in graduate school, earning his MS in 2017.

“Since then, deep fakes have exploded and are becoming a serious concern,” he said. “My interest revolves around helping to create a novel solution for a pervasive problem that affects everyone. What I really want to do is what every good engineer wants: to use my skills and science to solve complex problems for the world.”

I hope that one day Colin Torbett leads his own high tech company that creates antidotes to deep fakes and will keep videos like the Ja Morant interview out of my timeline.

Then I won’t get fooled again. Maybe.

BONUS CONTENT: My friend Don Mecoy shared a video with me that provides a deep dive into deep fakes, how they are evolving and the threats they pose to society. Watch the video below:

DOUBLE BONUS CONTENT UPDATE:

Concerned that AI is coming after your job? Read what my friend Dr. John Hassell at the University of Oklahoma Polytechnic Institute has to say on the subject. Spoiler alert: It’s not likely!
https://www.news9.com/story/68c731020ebc3adec64fbb37/is-ai-coming-for-your-job-ou-professor-weighs-in-on-widespread-fear

Apple draws the line on altered reality in photos

Screenshot: The Wall Street Journal’s Joanna Stern takes a selfie with Apple software chief Craig Federighi

If you’ve ever been fooled by a photo that had something added — or eliminated — you should watch this fascinating video interview by Wall Street Journal tech reporter Joanna Stern with Apple Inc.’s software chief Craig Federighi. The interview focused on Apple Intelligence, which is Apple’s version of artificial intelligence.

Near the end of the 25-minute interview, Stern raises her iPhone and takes a selfie of herself and Federighi as they are seated across from each other at the company’s Apple Park headquarters in Cupertino, Calif.

Then it got really interesting.

Stern showed the photo to Federighi and, using Apple’s most recent photo editing software, quickly edited out a water bottle and a microphone that the photo had captured.

She edited the photo with the intention of showing how easy it is to remove unwanted objects from photos, then asked Federighi about Apple’s approach to allowing users to alter reality in their photos, or even add objects or people who weren’t there.

Federighi’s thoughtful answer about Apple’s decisions on limiting AI use in its photo software intrigued me.

“There were a lot of debates internally, ‘do we want to make it easy to remove that water bottle or microphone’ because that water bottle was there when you took that photo,” he said. “The demand from people to clean up what seem like extraneous details in a photo that don’t fundamentally change the meaning of what happened has been very, very high. So we were willing to take that small step.”

However, the company ensured that if a photo was altered, the change was reflected in the photo’s metadata. And Federighi said Apple drew a line at further editing that would alter the reality of users’ photos.

“We are concerned that the great history of photography and how people view photographic content as something that you can rely on, that is indicative of reality …” Federighi said. “And our products, our phones are used a lot, and it’s important to us that we help convey accurate information, not fantasy … we make sure that if you do remove a little detail in a photo, we update the metadata on the photo so you can go back and check that this is an altered photo.”
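I can’t say exactly which fields Apple writes when you use its cleanup tools, but checking a photo’s metadata is something anyone can try. Here’s a small Python sketch using the Pillow library that dumps a JPEG’s EXIF tags; the file name is a placeholder, and which tags appear depends entirely on the camera and editing software, so treat it as a hint rather than a deepfake detector.

```python
# Small sketch of inspecting a photo's EXIF metadata with the Pillow library.
# The file name is hypothetical; which tags show up varies by camera and app.
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("selfie.jpg")   # hypothetical photo
exif = image.getexif()

for tag_id, value in exif.items():
    tag_name = TAGS.get(tag_id, tag_id)   # translate numeric IDs to names
    print(f"{tag_name}: {value}")

# Tags like 'Software' or 'DateTime' can reveal that a photo passed through
# an editor, but the absence of metadata proves nothing, since it is easy
# to strip.
```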

It’s clear that Apple has given this subject a lot of thought and is working to distance itself and its software from ‘deepfakes’ that seem to be showing up everywhere. Just check your Facebook feed.

Here’s a link to an article in Info Security Magazine that lists the top 10 deepfakes from 2022.

That debate over editing photos took me back to my days as a reporter and editor at The Oklahoman in the 1980s and 1990s. It was certainly a time before digital photos and software that let you easily alter the reality of a picture.

However, I recall there was quite a debate at the paper over whether drinks in the hands of people at a party should be edited out, by cropping or by being retouched by an artist.

So, editing photos has been an issue for decades.

And that led me to contact Doug Hoke, The Oklahoman’s current photo manager who worked at the paper all through the pre-digital age of the ’80s and ’90s.

Screenshot: Doug Hoke, from his profile image on Facebook

Doug is one of my favorite photographers, with a long history of shooting great photos. His work was regularly featured in Sports Illustrated in the pre-digital days.

I asked Doug if my memory was correct and whether altered photos were an issue back in the day. Here’s what he said in response:

“Way back when, if Gaylord (the publisher) didn’t want something in the paper, it wasn’t there,” he said. “The airbrushing of photos was originally done to help with reproduction, as coarse screens and letterpress techniques left much to be desired. That evolved into the removal of items, like cocktail drinks, (or) the adding of details like clothing: lengthening hems, adding material to swimsuits, closing up v-necks, etc.

“When the digital age hit, the ease with which photos could be altered called for new guidelines for photography. The common practice now is that no pixels should be added or removed, except by cropping and cleaning up dust spots on the chip. Toning and adjusting contrast should only be done to help reproduce the image as accurately as possible.”

Doug said he supports Apple’s limits to digital editing that distorts the reality of photos.

“When Apple first announced that they would only allow small details to be removed, I applauded them,” he said. “Craig is correct that photography is based in reality, and I firmly believe that the photos should remain as untouched as possible. You may think that water bottle is in the way, but future generations will look at these details with amazement. Think of the old photos you look at: you study every detail in the photo to get a better sense of history. If we remove all those details now, no one will ever see them.”

There’s a distinction between a photograph and a photo illustration, Doug said. Or there once was.

“The line between photograph and illustration has been blurred and will never be the same,” he said. “Publications try to hold onto strict guidelines about what is a photo and what is an illustration, but the public probably doesn’t really care. I don’t think the general public has a strong grasp of reality anymore. Games, TikTok, IG, X, whatever they look at. If they think an image is cool, they like it without giving any thought to whether it is accurate or not.

“We have had to reject several ‘photos’ that were obviously enhanced by AI, mostly portraits. Accepting photos from unknown sources will be a huge lift in the near future as AI will just continue to get better. Really glad Apple took a stand and said just because we can doesn’t mean we should.”

Did you catch what Doug said? The public is suffering from both ignorance and apathy on whether a photo has been altered.

But we should be concerned. Thank you, Apple, for taking a stand.

For software engineer John Hassell, the future is now for AI Chatbots

Oklahoma-based software engineer John Hassell has embraced artificial intelligence chatbots as part of his daily workflow.

In the past couple of months, I’ve heard more about artificial intelligence (AI) chatbots than any other topic, except, perhaps, the media hysteria caused by Chinese spy balloons.

According to IBM, a chatbot is a computer program that uses artificial intelligence and natural language processing (NLP) to understand questions and automate responses to them, simulating human conversation.

In fact, it was just a month ago that I signed up on the free OpenAI ChatGPT website and asked the chatbot to write me a couple of essays on the Oklahoma City Thunder’s tanking philosophy.

The essays turned out to be well written, with solid arguments.
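For the technically curious, here’s a bare-bones sketch of what that kind of request looks like when you call the chatbot from code instead of the website, using OpenAI’s Python library. The model name is a placeholder and you’d need your own API key; treat it as an illustration of the question-and-answer pattern, not a recipe.

```python
# Bare-bones sketch of asking a chatbot a question from code via OpenAI's
# Python library. The model name is a placeholder and an API key is required
# (read from the OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any available chat model works
    messages=[
        {"role": "system", "content": "You are a thoughtful sports writer."},
        {"role": "user", "content": "Write a short essay on the Oklahoma City "
                                    "Thunder's tanking philosophy."},
    ],
)

print(response.choices[0].message.content)
```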

Meanwhile, we’ve seen a lot of hand-wringing from ethicists over the potential of AI bots to write term papers for high school and college students or mimic the voices of well-known people to have them say outrageous things.

So, the jury’s still out on what our future will look like with AI Chatbots churning out reports, papers and art. But there are people who already embrace the potential of chatbots as tools to enhance their workflow.

One of those is Oklahoman John Hassell, who works as an embedded software engineer for Tactical Electronics in Broken Arrow. I’ve known John since 2005, when he was a Ph.D. candidate at the University of Oklahoma and entered the Donald W. Reynolds Governor’s Cup collegiate business plan competition with a concept known as ZigBeef.

As pitched by John and his team in the Governor’s Cup, ZigBeef applied RFID technology to ear tags for cattle as a way to easily identify them and ensure a safe beef supply for consumers.

ZigBeef won second place in the Graduate Division of the Governor’s Cup.

After completing his Ph.D. and pursuing ZigBeef for a number of years, John has gone on to work in embedded software development, as well as applying his skills to mobile app development.

So, I was pleased to hear from him recently when he described how ChatGPT has quickly become a major factor in his workflow.

John said he heard about AI and initially was skeptical of any potential benefits.

But an OpenAI art program known as DALL-E changed his perspective. He asked it to create an image from his memory of his family’s old two-story farm house near Okemah.

“On a lark, the first time I used it, I typed in a paragraph describing a mental picture of the sandy road, surrounded by a pecan tree orchard, leading up to the white two-story farm house,” he said. “OpenAI’s system produced something shockingly similar to what I was imagining. The picture it created in seconds was suitable for hanging in my office.”

Now you know why the art world has been in an uproar over AI’s potential.

Next, Hassell asked ChatGPT to produce some programming code that involved an obscure Linux script.

“In a second, ChatGPT comprehended exactly what I needed to do, and then provided the working code to do it,” he said. “I had been working on that issue for weeks.”

So, now ChatGPT is part of John Hassell’s routine workflow. He produced a legislative mobile app for the Oklahoma Electric Cooperatives Association and is working to implement a “quiz” feature as part of it. The quiz required writing a short summary of each legislator.

He assigned the task to ChatGPT.

“Once again, ChatGPT provided an easily readable, accurate summary, correctly punctuated, with an interesting fact, for each legislator and their district,” John said. “It was not completely accurate and had to be checked. Nevertheless, it saved an incredible amount of tedium and time in writing this program.”
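John described the idea rather than the code, but a simplified version of that kind of workflow might look like the sketch below: loop over the legislators, ask the model for a short summary of each, and set every result aside for a human to fact-check before it goes into the app. The names, model and prompt wording here are my own placeholders, not John’s actual implementation.

```python
# Simplified sketch of the workflow John describes: generate a short summary
# per legislator, then queue each draft for human fact-checking. The
# legislator list, model name and prompt are placeholders, not his code.
from openai import OpenAI

client = OpenAI()

legislators = [
    {"name": "Jane Doe", "district": "Senate District 1"},    # hypothetical
    {"name": "John Smith", "district": "House District 23"},  # hypothetical
]

drafts = []
for person in legislators:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Write a two-sentence summary of {person['name']}, "
                       f"who represents {person['district']} in Oklahoma, "
                       "including one interesting fact.",
        }],
    )
    drafts.append({
        "legislator": person["name"],
        "summary": response.choices[0].message.content,
        "reviewed": False,  # as John notes, every draft still gets checked
    })
```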

I wanted to know more about the perspective John has gained about AI and the chatbot, so I asked him a few more questions. Here they are in Q&A format:

Q: How has AI helped streamline or enhance what you do?
A: I’ve actually started to migrate away from my standard resources for programming help, sites like StackOverflow and Google search. Now, I am able to ask specific questions that tend to get me answers quicker.

Q: Isn’t using an AI Chatbot considered cheating?
A: It is cheating the same way that leveraging a calculator was somehow cheating in the 1970s, or the way that using a tractor instead of a mule team was cheating at the start of the last century. New technology is neither ethical nor unethical; it just is. We will find that if we aren’t using this technology in future years, we are just left behind.

Q: How much do you worry about inaccurate feedback you receive from Chatbot?
A: In my few short weeks of usage, it has indeed been inaccurate many times. However, the inaccurate solutions provided, or the prose presented, still got me much farther, and more quickly, than I would have gotten without it.

Q: There seems to be some fear about how AI will impact our future in a negative way; what is your perspective on that potential?
A: I can tell you that after using ChatGPT the past few weeks, the user interfaces on my smartphone, on my truck radio, even on most websites, seem antiquated. Having to search for and manipulate computer controls in such a precise and particular manner feels so “old” already. Not to be too dramatic, but this change will be huge… and it’s happening with record speed.

Q: What else would you like us to know about the topic of AI Chatbots or your work?
A: Interestingly, I’ve gotten better at using ChatGPT in my programming work by thinking less like a computer programmer in many ways. Now, instead of overly specifying what I need, and the way I need it, I revert to more human prose, asking for what I ultimately need… not trying to tell ChatGPT how to find the answer for me. I’m having to de-program my decades of learning and specifying the minutiae of how to get things done with a computer. Now, ChatGPT has learned to do a lot of that. I look forward to seeing these improvements in all the tedious things we all have to deal with in interacting with all the machines that are here to help us.

Takeaway: I only heard about ChatGPT a few months ago, and thought that its impact wouldn’t show up for years while it was being perfected.

But as John Hassell has demonstrated, the chatbot’s future is now. We should embrace it.