An AI-generated image of a family posing outside with a mountain in the distance

Many AI-generated images look realistic until you take a closer look

MidJourney

Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, video, audio and text at a time when technological advances are making them increasingly indistinguishable from much human-created content, leaving us open to manipulation by disinformation. But by knowing the current state of the AI technologies used to create misinformation, and the range of telltale signs that what you are looking at might be fake, you can help protect yourself from being taken in.

World leaders are concerned. According to a report by the World Economic Forum, misinformation and disinformation may “radically disrupt electoral processes in several economies over the next two years”, while readily available AI tools “have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites”.

The terms misinformation and disinformation both refer to false or inaccurate information, but disinformation is that which is deliberately intended to deceive or mislead.

“The issue with AI-powered disinformation is the scale, speed and ease with which campaigns can be launched,” says Hany Farid at the University of California, Berkeley. “These attacks will no longer take state-sponsored actors or well-financed organisations – a single individual with access to some modest computing power can create massive amounts of fake content.”

He says that generative AI (see glossary, below) is “polluting the entire information ecosystem, casting everything we read, see and hear into doubt”. He says his research suggests that, in many cases, AI-generated images and audio are “nearly indistinguishable from reality”.

However, research by Farid and others reveals that there are strategies you can follow to reduce your risk of falling for social media misinformation or disinformation created by AI.

How to spot fake AI images

Remember seeing a photo of Pope Francis wearing a puffer jacket? Such fake AI images have become more common as new tools based on diffusion models (see glossary, below) have allowed anyone to start churning out images from simple text prompts. One study by Nicholas Dufour at Google and his colleagues found a rapid increase in the proportion of AI-generated images in fact-checked misinformation claims from early 2023 onwards.

“Nowadays, media literacy requires AI literacy,” says Negar Kamali at Northwestern University in Illinois. In a 2024 study, she and her colleagues identified five different categories of errors in AI-generated images (outlined below) and provided guidance on how people can spot these for themselves. The good news is that their research suggests people are currently about 70 per cent accurate at detecting fake AI images of people. You can use their online image test to assess your own sleuthing skills.

5 common types of errors in AI-generated images:

  1. Sociocultural implausibilities: Is the scene depicting rare, unusual or surprising behaviour for certain cultures or historical figures?
  2. Anatomical implausibilities: Take a close look, zooming in where you can (see the sketch after this list): are body parts like hands unusually shaped or sized? Do the eyes or mouths look strange? Have any body parts merged?
  3. Stylistic artefacts: Does the image look unnatural, almost too perfect or stylised? Does the background look odd or like it is missing something? Is the lighting strange or inconsistent?
  4. Functional implausibilities: Do any objects look bizarre or like they might not be real or work? For example, are buttons or belt buckles in weird places?
  5. Violations of physics: Are shadows pointing in different directions? Are mirror reflections consistent with the world depicted within the image?
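
Several of these checks come down to zooming in on detail-heavy areas such as hands, eyes, buckles and shadows. The snippet below is a minimal helper sketch rather than a detector: it assumes Python with the Pillow library installed, and the file name and crop coordinates are placeholders for your own image and region of interest.

```
# Toy helper: crop a region of interest (e.g. hands or eyes) and enlarge it
# so fine-grained artefacts are easier to inspect by eye.
# "suspect.jpg" and the box coordinates are placeholders.
from PIL import Image

def zoom_region(path, box, scale=4):
    """Crop `box` (left, upper, right, lower) and enlarge it `scale` times."""
    image = Image.open(path)
    region = image.crop(box)
    return region.resize(
        (region.width * scale, region.height * scale),
        resample=Image.LANCZOS,
    )

if __name__ == "__main__":
    zoom_region("suspect.jpg", box=(200, 350, 400, 500)).show()
```

Enlarging a suspect region this way simply makes the anatomical and functional oddities listed above easier to judge by eye.
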
An AI-generated image of a man brushing his teeth with two toothbrushes, one of which looks strange

Strange objects and behaviour can be clues that an image was created by AI

MidJourney

How to identify video deepfakes

AI technology known as generative adversarial networks (see glossary, below) has allowed tech-savvy individuals to create video deepfakes since 2014 – digitally manipulating existing videos of people to swap in different faces, generate new facial expressions and insert new spoken audio with matching lip movements. This has enabled a growing array of scammers, state-backed hackers and other internet users to produce video deepfakes in which celebrities such as Taylor Swift and ordinary people alike can find themselves featured, without their consent, in deepfake pornography, scams and political misinformation or disinformation.

The techniques for spotting fake AI images (see above) can be applied to suspect videos too. Additionally, researchers at the Massachusetts Institute of Technology and Northwestern University in Illinois have compiled tips on how to spot such deepfakes, while acknowledging that there is no foolproof method that always works (a frame-by-frame inspection sketch follows the list below).

6 tips for spotting AI-generated video:

  1. Mouth and lip movements: Are there moments when the video and audio aren’t completely synced?
  2. Anatomical glitches: Does the face or body look weird or move unnaturally?
  3. Face: Look for inconsistencies in skin smoothness, in wrinkles on the forehead and cheeks, and in facial moles.
  4. Lighting: Is the lighting inconsistent? Do shadows behave as you would expect? Pay particular attention to a person’s eyes, eyebrows and glasses.
  5. Hair: Does facial hair look weird or move in strange ways?
  6. Blinking: Too much or too little blinking could be a sign of a deepfake.
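
None of these tips requires special software, but stepping through a clip one frame at a time makes lip-sync slips, odd blinking and melting backgrounds much easier to see. The sketch below assumes Python with OpenCV installed; the file name and sampling interval are placeholders, and saving stills is all it does – judging them is still up to you.

```
# Toy helper: dump every Nth frame of a video to disk so lip sync, blinking
# and background glitches can be inspected one still at a time.
# "suspect.mp4" is a placeholder path.
import cv2

def dump_frames(path, every_n=15, out_prefix="frame"):
    cap = cv2.VideoCapture(path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of video or read error
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_prefix}_{index:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(dump_frames("suspect.mp4"), "frames written")
```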

A newer category of video deepfakes is based on diffusion models (see glossary, below) – the same AI technology behind many image generators – that can create completely AI-generated video clips based on text prompts. Companies are already testing and releasing commercial versions of AI video generators that could make it easy for anyone to do this without needing special technical knowledge. So far, the resulting videos tend to feature distorted faces or bizarre body movements.

“These AI-generated videos are probably easier for people to detect than images, because there is a lot of movement and there is a lot more opportunity for AI-generated artefacts and impossibilities,” says Kamali.


How to identify AI bots

Accounts controlled by computer bots have become common on many social media and messaging platforms. A growing number of these bots have also been taking advantage of generative AI technologies such as large language models (see glossary, below) since 2022. These models make it easy and cheap for thousands of bots to churn out AI-written content that is grammatically correct and convincingly customised to different situations.


It has become much easier “to customise these large language models for specific audiences with specific messages”, says Paul Brenner at the University of Notre Dame in Indiana.

Brenner and his colleagues have found in their research that volunteers could only distinguish AI-powered bots from humans about 42 per cent of the time – despite the participants being told they were potentially interacting with bots. You can test your own bot detection skills here.

Some strategies can help identify less sophisticated AI bots, says Brenner.

5 ways to determine whether a social media account is an AI bot:

  1. Emojis and hashtags: Excessive use of these can be a sign (see the counting sketch after this list).
  2. Uncommon phrasing, word choices or analogies: Unusual wording could indicate an AI bot.
  3. Repetition and structure: Bots may use repeated wording that follows similar or rigid forms and they may overuse certain slang terms.
  4. Ask questions: These can reveal a bot’s lack of knowledge about a topic – particularly when it comes to local places and situations.
  5. Assume the worst: If a social media account isn’t a personal contact and their identity hasn’t been clearly validated or verified, it could well be an AI bot.
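
Signals such as heavy hashtag use and rigid, repetitive wording can also be roughly quantified. The sketch below is a toy heuristic in plain Python, not Brenner's method: the example posts are invented, and high scores are only a hint that an account deserves a closer look.

```
# Toy heuristics for a batch of posts from one account: hashtag density and
# repeated three-word phrases. High values are a hint, not proof of a bot.
import re
from collections import Counter

def hashtag_rate(posts):
    """Average number of hashtags per post."""
    return sum(len(re.findall(r"#\w+", p)) for p in posts) / len(posts)

def repeated_trigrams(posts):
    """Three-word phrases that appear in more than one post."""
    counts = Counter()
    for post in posts:
        words = re.findall(r"[a-z']+", post.lower())
        counts.update({" ".join(words[i:i + 3]) for i in range(len(words) - 2)})
    return [phrase for phrase, n in counts.items() if n > 1]

if __name__ == "__main__":
    posts = [  # placeholder examples, not real data
        "Huge news today! #breaking #viral #news",
        "Huge news today for everyone watching #breaking #viral",
    ]
    print(hashtag_rate(posts), repeated_trigrams(posts))
```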

How to detect audio cloning and speech deepfakes

Voice cloning (see glossary, below) AI tools have made it easy to generate new spoken audio that can mimic practically anyone. This has led to the rise of audio deepfake scams that clone the voices of family members, company executives and political leaders such as US President Joe Biden. These can be much more difficult to identify compared with AI-generated videos or images.

“Voice cloning is particularly challenging to distinguish between real and fake because there aren’t visual components to support our brains in making that decision,” says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organisation.

Detecting such AI audio deepfakes can be especially tricky when they are used in video and phone calls. But there are some common-sense steps you can follow to distinguish authentic humans from AI-generated voices.

4 steps for recognising if audio has been cloned or faked using AI:

  1. Public figures: If the audio clip is of an elected official or celebrity, check if what they are saying is consistent with what has already been publicly reported or shared about their views and behaviour.
  2. Look for inconsistencies: Compare the audio clip with previously authenticated video or audio clips that feature the same person’s voice. Are there any inconsistencies in the sound of their voice or their speech mannerisms?
  3. Awkward silences: If you are listening to a phone call or voicemail and the speaker is taking unusually long pauses, they may be using AI-powered voice cloning technology (see the pause-measuring sketch after this list).
  4. Weird and wordy: Any robotic speech patterns or an unusually verbose manner of speaking could indicate that someone is using a combination of voice cloning to mimic a person’s voice and a large language model to generate the exact wording.
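
The "awkward silences" check can be roughed out in code by measuring how long the quiet stretches in a recording last. The sketch below assumes Python with NumPy and SciPy and a 16-bit WAV file; the file name, amplitude threshold and two-second cut-off are placeholders, and long pauses on their own prove nothing.

```
# Toy check for unusually long pauses: find stretches where the signal stays
# below an amplitude threshold. "call.wav" and the thresholds are placeholders.
import numpy as np
from scipy.io import wavfile

def long_pauses(path, threshold=0.02, min_pause_s=2.0):
    """Return lengths (in seconds) of quiet stretches longer than min_pause_s."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                                # keep one channel if stereo
        data = data[:, 0]
    samples = data.astype(np.float64) / 32768.0     # scale 16-bit PCM to roughly [-1, 1]
    quiet = np.abs(samples) < threshold             # True wherever the clip is near-silent
    pauses, run = [], 0
    for is_quiet in quiet:
        if is_quiet:
            run += 1
        else:
            if run / rate >= min_pause_s:
                pauses.append(run / rate)
            run = 0
    if run / rate >= min_pause_s:                   # trailing silence at the end
        pauses.append(run / rate)
    return pauses

if __name__ == "__main__":
    print(long_pauses("call.wav"))                  # e.g. [2.4, 3.1] seconds
```
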
Videograb of an AI-generated version of Narendra Modi dancing to the song Gangnam Style

Public figures such as Narendra Modi behaving out of character can be an AI giveaway 

@the_indian_deepfaker

The technology will only get better

As it stands, there are no consistent rules that can always distinguish AI-generated content from authentic human content. AI models capable of generating text, images, video and audio will almost certainly continue to improve and they can often quickly produce authentic-seeming content without any obvious artefacts or mistakes. “Be politely paranoid and realise that AI has been manipulating and fabricating pictures, videos and audio fast – we’re talking completed in 30 seconds or less,” says Tobac. “This makes it easy for malicious individuals who are looking to trick folks to turn around AI-generated disinformation quickly, hitting social media within minutes of breaking news.”

While it is important to hone your eye for AI-generated false information and learn to ask more questions of what you read, see and hear, ultimately this won’t be enough to stop harm and the responsibility to detect fakes can’t fall fully on individuals. Farid is among researchers who say that government regulators must hold to account the largest tech companies – along with start-ups backed by prominent Silicon Valley investors – that have developed many of the tools that are flooding the internet with fake AI-generated content. “Technology is not neutral,” says Farid. “This line that the technology sector has sold us that somehow they don’t have to absorb liability where every other industry does, I simply reject it.”

Glossary

Diffusion models: AI models that learn by first adding random noise to data – for example, degrading an image with visual noise – and then learning to reverse the process to recover the original data.
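
As a toy illustration of that forward "noising" step (NumPy only; the schedule and the random stand-in image are placeholders, not any real model):

```
# Toy illustration of the diffusion idea: progressively add Gaussian noise to
# an "image" (here just a random 8x8 array). A real model is trained to
# reverse these steps; this sketch only runs the forward, noising direction.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))              # stand-in for a clean image

alphas = np.linspace(0.99, 0.90, 10)    # placeholder noise schedule
alpha_bar = np.cumprod(alphas)          # how much of the original signal is left

for t, ab in enumerate(alpha_bar):
    noise = rng.standard_normal(image.shape)
    noisy = np.sqrt(ab) * image + np.sqrt(1.0 - ab) * noise
    print(f"step {t}: signal weight {np.sqrt(ab):.2f}, noise weight {np.sqrt(1.0 - ab):.2f}")

# Generation runs the other way: start from pure noise and repeatedly
# remove a little of it until a plausible image remains.
```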

Generative adversarial networks: A machine learning method in which two neural networks compete – one generates new data, while the other tries to predict whether the data it is shown is real or generated.

Generative AI: A broad class of AI models that can produce text, images, audio and video after being trained on similar forms of such content.

Large language models: A subset of generative AI models that can produce different forms of written content in response to text prompts and sometimes translate between various languages.

Voice cloning: The method of using AI models to create a digital copy of a person’s voice and then potentially generating new speech samples in that voice.
