
AI makes it harder to spot deep fakes than ever before, but awareness is key, says expert

Credit: Pixabay/CC0 Public Domain

As artificial intelligence programs continue to develop and access to them becomes easier than ever, separating fact from fiction is getting harder. Just this week, an AI-generated image of an explosion near the Pentagon made headlines online and even briefly moved the stock market before it was deemed a hoax.

Cayce Myers, a professor in Virginia Tech’s School of Communication, has been studying this ever-evolving technology and shares his take on the future of deep fakes and how to spot them.

“It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes,” says Myers. “The cost barrier for generative AI is also so low that now almost anyone with a computer and an internet connection has access to AI.”

Myers believes that, because of this, we will see a lot more disinformation—both visual and written—over the next few years. “Spotting this disinformation is going to require users to have more media literacy and savvy in examining the truth of any claim.”

While photo-editing programs such as Photoshop have been used for years, Myers says the difference between those and disinformation created with AI is one of sophistication and scope. “Photoshop allows for fake images, but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online, this type of fake news content can reach a much wider audience, especially if it goes viral.”

When it comes to combating disinformation, Myers says there are two main sources—ourselves and the AI companies.

“Examining sources, understanding the warning signs of disinformation, and being diligent about what we share online are personal ways to combat the spread of disinformation,” he says. “However, that is not going to be enough. Companies that produce AI content and the social media companies where disinformation spreads will need to implement some level of guardrails to prevent disinformation from spreading widely.”

Myers explains that the problem is that AI technology has developed so quickly that any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof.

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are considering a variety of issues, including disinformation, discrimination, intellectual property infringement, and privacy.

“The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going. Creating a law too fast can stifle AI’s development and growth, while creating one too slowly may open the door to a lot of potential problems. Striking a balance will be a challenge,” says Myers.

Provided by Virginia Tech


Citation: AI makes it harder to spot deep fakes than ever before, but awareness is key, says expert (2023, May 26), retrieved 26 May 2023 from https://techxplore.com/news/2023-05-ai-harder-deep-fakes-awareness.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
