Trust online is eroding, and social media has become a breeding ground for distrust. The proliferation of artificial intelligence (AI) has made it increasingly difficult to discern what's real and what's fake online.
The flood of AI-generated images, videos, and altered photos on social media platforms has sown widespread confusion. President Donald Trump's recent actions have inadvertently fueled the spread of AI-edited content, including a fake image of a Venezuela operation that sparked outrage among many users. AI-fabricated evidence has also raised concerns about misinformation reaching courtrooms.
Researchers warn that as AI grows more sophisticated, fake media will become ever harder to detect. "In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake," says Jeff Hancock, founder of the Stanford Social Media Lab. Old telltale signs of AI-generated content, such as a wrong number of fingers in an image, are likely to become obsolete.
The erosion of trust online is not new, but AI has accelerated the spread of misinformation. Similar breakdowns in trust have occurred throughout history, from the mass production of propaganda after the printing press was invented in the 1400s to election misinformation in 2016.
Fast-moving news events are particularly susceptible to manipulated media, which rushes in to fill the information vacuum. The consequences of AI-generated content go beyond deception to the collapse of trust itself. As researcher Renee Hobbs notes, "If constant doubt and anxiety about what to trust is the norm, then actual disengagement is a logical response. It's a coping mechanism."
To address this issue, experts are working to incorporate generative AI into media literacy education. The Organisation for Economic Co-operation and Development has scheduled a global assessment of media and artificial intelligence literacy for 15-year-olds in 2029.
Even social media executives acknowledge the shift. Adam Mosseri, the head of Instagram, recently wrote that "the vast majority of photographs or videos that I see are largely accurate captures of moments that happened in real life," a baseline assumption that AI-generated content is rapidly undermining.
Researchers point to awareness and common sense as the best protection against AI-generated content. Hany Farid, a professor at the University of California, Berkeley, notes that people are just as likely to call something real fake as to call something fake real, especially when a partisan agenda is involved. Siwei Lyu, a professor at the University at Buffalo, suggests that everyday internet users can sharpen their AI detection skills simply by paying attention and asking themselves why they trust or distrust what they see.
As AI technology continues to advance, individuals will need to develop critical thinking skills and become more discerning about the media they consume. Distrust online is growing, but by raising awareness and promoting critical thinking, we can work toward rebuilding trust.