New Delhi: The deepfake video involving Rashmika Mandanna has brought home to us, as a society, the devastating challenges artificial intelligence (AI) poses.
In particular, AI-powered disinformation is a serious threat to democracy and society as a whole.
Amid the debate around the effective safety measures platforms and governments should introduce to curb the ill effects of AI-powered disinformation, the role of ordinary people becomes equally important, as they are both carriers and consumers of such content.
By learning to detect and debunk AI-powered disinformation, people who spend time on social media or the Internet can help protect themselves and society from its harmful effects.
Let us first look at the key trends in the changing landscape of disinformation, so we can understand the threat and fight against it:
Rise of deepfakes and other synthetic media
Synthetic media is a broader term that encompasses all types of artificial media, including deepfakes, that are created using artificial intelligence.
Deepfakes are videos or audio recordings manipulated to make it appear that someone is saying or doing something that they never said or did.
Deepfakes and other synthetic media can be used to spread disinformation in a variety of ways, such as by creating fake videos of politicians saying or doing controversial things or by creating fake news articles that are designed to look like they are from reputable sources.
Social media algorithms combined with confirmation bias are largely responsible
Given the way social media algorithms and human psychology work, it is not the most credible material that forces its way most readily into your eye line, but rather the content that attracts the most attention, often because it is dramatic and outrage-inducing.
Social media algorithms are designed to show users content they are likely to be interested in. However, these algorithms can also be exploited to spread disinformation, by offering users content that is designed to confirm their existing biases (a tendency known as confirmation bias) or to manipulate their emotions.
For example, social media algorithms can be used to show users content that is likely to make them angry or afraid, which can make them more susceptible to disinformation.
Confirmation bias thus becomes the vehicle that spreads disinformation, as well as misinformation, at unmatched speed.
Targeting of specific groups with disinformation campaigns
Disinformation campaigns are often targeted at specific groups of people, such as people of a certain political persuasion, race, or religion. This is because it is more effective to spread disinformation to people who are already primed to believe it.
Social media users can follow these tips to protect themselves from the harmful effects of disinformation:
Be critical of the information you see online: Don't believe everything you read or see. Verify information from multiple sources before sharing it.
Be careful about who you follow on social media: Only follow accounts that you trust and that provide accurate information.
Report disinformation when you see it: If you see disinformation being spread online, report it to the platform where you saw it.
Be aware of your own biases: We all have biases, but it's important to be mindful of them so that we can avoid being misled by information that confirms our existing beliefs.
Before trying to evaluate photographs, videos, etc., forensically, Internet users should step back and ask themselves:
A. Why might it be that this is being shared right now?
B. What is the response it is trying to provoke in me and others?
C. How susceptible am I to taking it on board, given my existing sympathies, and how does it affect those?
D. Have I searched for online comments and possible previous mentions of the very incident being put before me as having happened this morning?
Above all, be alert, be sceptical and — challenging as it undoubtedly is — try to resist being led by your emotions and preconceptions.