Why we deserve deepfakes

Shivaji Dasgupta
07 Nov 2023

Actress Rashmika Mandanna (File image)

Kolkata: The content creation industry relies heavily on tactical fakes to keep the juggernaut rolling. Only when the intent turns evil does it become a deepfake. It is important to note that the latter is in many ways an organic extension of the former, with deceit as the necessary adhesive.


Deepfakes can be defined simply as the digital manipulation of voice, video or image to create fake imagery. As an outcome of Machine Learning and AI, a whole bouquet of software now makes superimposition an amateur activity. The Rashmika Mandanna episode may well do for public awareness on this matter what the Cyrus Mistry accident did for rear seat belts: an acceleration of regulatory and behavioural evolution that will benefit all.

But truthfully, wilful faking has been an integral element of content culture for ages. Every film hero, especially in action mode, cultivates a panel of body doubles who are invaluable to the industry. They can perform stunts that are too risky or complicated and can fill in for sequences where direct visual contact is unnecessary, thus saving both money and agony while ensuring that a faux projection is created. When I was part of an Amitabh Bachchan TVC production, it was plainly apparent that the in-person photocopy was a reasonably pampered fellow, obviously value-adding.

This pattern is equally true for pure-play voice, extending from dubbing to mimicry. Chetan Shashital and his many peers have built humongous careers on this premise. Deadlines are more easily met, and paid for, when dubbing is done by these characters, and that seems perfectly legit. Meanwhile, FM stations and a few less scrupulous brand owners invest in them to concoct a perception of originality - a cloned voice leads the customer to imagine that the real person is an endorser.


Some of us know of it while others don’t - in either case, we happily indulge. Reality shows also encourage mimicry and replication as an easy form of entertainment, while community live shows across the nation thrive on Kishore-Rafi-Lata replicators as easy fodder for hungry audiences.

While the intent in most, if not all, of the above rests in the ‘good’, it takes only a shift in motivation to turn it ‘bad’. Especially since, quite like white lies, deceit is an underlying part of the process, whatever the gravitas of the outcome. Deepfaking thus becomes potent ammunition for those who understand the pulse of the customer, a karmic backlash for people like us who have revelled in its softer avatars. In so many ways, we deserve it, but now the focus must shift to what’s next.

A quick global scan will reveal that regulators are still debating whether this is a civil or a criminal matter - some states in the US suggesting heavier penalties for defamatory pornographic cases. Now that Rajeev Chandrasekhar has publicly commented, one can hope for a fast-forward prosecution culture in India, eventually leading to enhanced prevention. Frankly, this is what really works in India; public awareness and self-regulation are merely tick marks in most cases. Rear seat belts are an apt case in point, as are Income Tax filing protocols. That said, there is a lot that brands and studios can proactively do, as long as they can handle the pain points.


Firstly, the bodies representing the advertisers, whether ASCI or otherwise, must propose a client-agency undertaking that neither body nor voice doubles will be used for any commercial or digital activity. This must be endorsed by celebrities, radio stations, producers, event impresarios and their peers. To instil a culture of compelling originality, even scripts must be accordingly tweaked. This is a need-to-do behaviour shift, and it must extend to meme creation through facial replacement - a seemingly light-hearted act that now bears far heavier consequences.

Those in the relevant Machine Learning software business must act similarly, investing in information trails for all users. Educational institutes must stop teaching such techniques, as the potential for damage exceeds the capacity for good. The nascent but powerful digital creator community can orchestrate this movement in an inspirational manner.

These are early days for AI at large, but its outcomes tread gingerly on the borderline of good and bad. ChatGPT is prolific in multiplying original creations - yet because it is a machine and not man, we do not yet consider this plagiarism. Unlike deepfaking, which is clearly man’s doing, and where our collective conscience is sufficiently distressed. Not to be missed is the underlying irony - unlike Sci-Fi genre creators, we do not believe that machines can be deliberately evil. In man, though, we have scant trust.