Exclusive: Rajat Sharma on filing PIL and his fight against deepfakes
The rapid growth of digital and emerging technologies has also posed regulatory challenges. While digital transformation is revolutionising businesses and society, its dark underbelly has given rise to serious ethical and law-and-order issues.
Morphing of images is not a new problem – from the era of Photoshop and airbrushing to the use of special filters, photographs and videos have been manipulated, either for aesthetic reasons or for more malicious motives.
With the growth of Artificial Intelligence, a major issue that has arisen is the proliferation of ‘deepfakes’. These artificially created videos and images have the potential to cause significant damage, having been used to generate fake news, false pornographic videos, and malicious hoaxes. High-profile individuals such as celebrities and politicians have frequently been targeted, with well-known figures like Sachin Tendulkar, Rashmika Mandanna, Alia Bhatt, and Ranveer Singh falling prey to such deceptive practices.
Taking cognizance of the serious damage that deepfakes are capable of doing, Rajat Sharma, Chairman and Editor-in-Chief of India TV, recently filed a public interest litigation (PIL) against the non-regulation of deepfake technology in the country. In May this year, a division bench of the Delhi High Court, comprising Acting Chief Justice Manmohan and Justice Manmeet Pritam Singh Arora, issued notice on the PIL and sought the response of the Union Government through the Ministry of Electronics and Information Technology. During the hearing, the bench orally remarked that “this is a major problem” and asked the Central Government if it was willing to act on the issue.
Sharma’s plea states that the proliferation of deepfake technology poses significant threats to various aspects of society, including misinformation and disinformation campaigns, the undermining of the integrity of public discourse and democratic processes, potential use in fraud and identity theft, as well as harm to individuals’ reputations and privacy.
In this special conversation with Adgully, Rajat Sharma, Chairman and Editor-in-Chief, India TV, speaks at length about the reasons that prompted him to take this legal step, as well as his expectations from the PIL; why deepfakes are such a menace, steps that need to be taken to regulate deepfakes, effectively mitigating the harm associated with deepfake misuse, and more.
India TV has carved a niche for itself in the media landscape. What, according to you, sets it apart from other news channels? How do you navigate the balance between providing unbiased news coverage and catering to the interests of your audience?
India TV has always aimed to provide accurate and timely news, while ensuring that our coverage resonates with a wide audience. What sets us apart is our commitment to journalistic integrity and our efforts to present news that is both factual and relevant to our viewers. We navigate the balance between unbiased coverage and audience interests by adhering to strict editorial standards and continuously engaging with our audience to understand their needs and concerns. Our focus remains on delivering news that informs, educates, and empowers the public.
You have been a prominent figure in Indian journalism for quite some time. Could you elaborate on the motivation behind filing the PIL against the non-regulation of deepfake technology in India?
As a journalist, I have always believed in the power of truthful reporting and the accuracy of information. Filing the PIL was driven by my worry about the erosion of trust in the media and its broader implications for society and celebrities. I was shocked to see videos showing me “selling” medicine for diabetes and weight loss, and the potential damage deepfakes could cause to my image deeply concerned me. I’ve seen similar videos causing sleepless nights for Amitabh Bachchan and other celebrities.
Deepfake technology, if left unregulated, poses a significant threat to the authenticity of information, which is the bedrock of journalism and democracy. After experiencing firsthand the malicious use of my likeness in several deepfake videos promoting false medical advice, it became clear that immediate legal and regulatory measures were necessary to curb this growing menace.
What specific concerns led you to take legal action on the deepfake issue, particularly in relation to its impact on public discourse and democratic processes? Could you share any personal experiences or instances where you or your organization have been directly affected by the proliferation of deepfake content?
The decision to take legal action was driven by several specific concerns. For instance, one day the sarod maestro Ustad Amjad Ali Khan met me at a get-together and asked, “Why are you selling medicines? You shouldn’t do this.” I tried explaining that it was not me, but he was not convinced. This made me realise how deepfakes can manipulate public opinion, spread misinformation, and create false narratives, which can severely disrupt public discourse and democratic processes. Personally, my experience of being targeted by a deepfake video underscored the vulnerability of individuals to such attacks. As a media organization, we have observed an increase in the misuse of deepfake content, which not only harms individuals, but also undermines public trust in media institutions.
In your opinion, what are the potential risks associated with the misuse of deepfake technology, particularly in terms of fraud, identity theft, and erosion of trust in media and public institutions?
The misuse of deepfake technology presents several risks, including fraud, identity theft, and the erosion of trust in both media and public institutions. Deepfakes can be used to create convincing false identities, manipulate financial transactions, and deceive individuals and organizations. Furthermore, the spread of false information through deepfakes can lead to widespread misinformation, causing panic, social unrest, and undermining the credibility of legitimate news sources. The potential for deepfakes to influence elections and political processes is particularly concerning, as it can distort democratic outcomes and weaken public confidence in democratic institutions.
The PIL emphasizes the need for regulatory frameworks to define and classify deepfakes and AI-generated content. What specific measures or regulations do you believe are necessary to effectively mitigate the harm associated with their misuse?
To effectively mitigate the harm associated with deepfake misuse, several regulatory measures are necessary. First, platforms and applications that facilitate deepfake creation should be identified and blocked to prevent their proliferation. Additionally, there should be a mandate for the clear disclosure of AI-generated content, such as through watermarks or other identifiable markers, to ensure transparency and prevent deception. It is also crucial to appoint a government official responsible for addressing deepfake-related complaints swiftly, ensuring that concerns are addressed promptly and efficiently.
Furthermore, stringent measures must be implemented for the rapid removal of deepfake content to minimize its potential harm. Increasing public awareness about the dangers of deepfakes and how to recognize them is also essential in combating their misuse. These combined efforts can significantly reduce the adverse impacts of deepfakes on individuals and society.
How do you assess the government’s response to the issue of deepfake regulation so far, particularly in light of its statement of intent in November 2023?
Taking a positive view, the government has addressed the matter of deepfake regulation in its statements of intent, notably the one released in November 2023, and the minister has assured me of concrete action and effective enforcement mechanisms. The government has promised to adopt a more proactive approach, implement comprehensive regulatory frameworks, and ensure the active participation of all stakeholders, including tech companies, media organizations, and civil society.
Looking ahead, what steps do you believe are necessary to protect individuals and democratic institutions from the potentially harmful effects of deepfake technology? Do you trust the judiciary system to give a respite from this growing concern?
To protect individuals and democratic institutions from the harmful effects of deepfake technology, it is essential to enact comprehensive laws specifically targeting the creation and distribution of deepfake content, invest in advanced detection and prevention technologies, and encourage robust collaboration between governments and tech companies to address the issue. Additionally, ensuring judicial support by equipping the judiciary to handle deepfake-related cases promptly and effectively is crucial. I trust that the judiciary will recognize the gravity of this issue and take decisive action to provide timely relief and safeguard the public from the detrimental impacts of deepfake technology.