Avatar 2.0: Is responsible use of deepfake technologies possible? - Part 1

Soon, we might have avatars acting on our behalf. Synthesia, a UK-based innovator, is set to launch Synthesia 2.0, an AI-driven platform capable of creating full-body avatars that can display emotions, sing, dance, and move with lifelike precision. This will have a range of valuable use cases, particularly in the business world. However, with these advancements come significant ethical concerns about the misuse of deepfake technology. Synthesia CEO Victor Riparbelli emphasizes the company’s commitment to responsible innovation, ensuring ethical use of their technology.

Yes, the ethical use is the issue.

Only recently, the Bombay High Court had to step in, directing social media platforms to take action against deepfake videos of Ashishkumar Chauhan, MD & CEO, National Stock Exchange (NSE). The AI-generated videos, which showed Chauhan giving stock recommendations, were circulated on social media platforms. The NSE had filed a complaint with the cyber police, arguing that the deepfake videos were capable of manipulating the markets and could result in unfair trade practices and regulatory breaches.

Countries the world over are waking up to the issue.

Germany is considering new laws to tackle the rise of malicious AI-generated content online. However, civil liberties advocates feel that stricter regulations may have unforeseen consequences.

The US recently passed a law to combat AI deepfakes and protect original content from being used for AI training without permission. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) introduces a digital record called “content provenance information” to authenticate and detect AI-generated content, making it illegal to alter or remove watermarks. The move comes after several high-profile AI-generated deepfakes, including fake nude photos of Taylor Swift, went viral on social media platforms. The law also allows state officials to enforce its provisions, enabling legal action against AI companies that use content without permission or payment.
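To make the idea of “content provenance information” concrete, here is a minimal sketch of how a signed provenance manifest could work: a record attached to a media file that names the creator, flags AI generation, and lets anyone detect tampering. The field names and the HMAC signing scheme are illustrative assumptions, not the format any law or standard actually mandates.

```python
# Minimal sketch of a content provenance manifest. The manifest fields
# and HMAC-based signing here are hypothetical, for illustration only;
# real provenance standards use certificate-based signatures.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical publisher key


def make_manifest(media: bytes, creator: str, ai_generated: bool) -> dict:
    """Build a signed provenance record for a piece of media."""
    payload = {
        "creator": creator,
        "ai_generated": ai_generated,
        "sha256": hashlib.sha256(media).hexdigest(),  # binds manifest to content
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload


def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Return True only if neither the media nor the manifest was altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    blob = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed.get("sha256") == hashlib.sha256(media).hexdigest())
```

In this sketch, editing either the media bytes or any manifest field (say, flipping `ai_generated` to hide AI involvement) breaks verification, which is the behaviour the “illegal to alter or remove” provision aims to protect.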

Deepfakes are manipulated media, such as videos or audio, created using AI to portray individuals saying or doing things they never actually said or did. In India, existing laws offer some recourse against deepfake abuse, but the lack of a clear legal definition hampers targeted prosecution. A comprehensive approach is needed to address the evolving threat, including reinforcing privacy and data protection laws, defining proportionate limits on freedom of expression, and establishing proactive rules to govern the distribution and use of deepfake technologies.

Is responsible use possible?

As avatars become increasingly sophisticated, the line between reality and digital illusion blurs. And given the rapid advancements in deepfake technology, as seen with Synthesia’s AI avatars, what specific safeguards can be implemented to ensure the responsible use of this technology?

Sumit Gupta, Founder-CEO, Viral Pitch, believes specific safeguards are crucial to ensuring the responsible use of deepfake technologies like Synthesia’s AI avatars, which have made huge leaps forward. He suggests establishing clear regulations and guidelines, embedding watermarks and disclosures for transparency, and developing AI and machine learning tools for deepfake detection. Additionally, following ethical best practices and raising public awareness through education are vital steps that can help maintain trust and integrity, ensuring that the benefits of deepfake technology are harnessed responsibly.

Deepfake technology is both a marvel and a challenge in the digital age, points out Dr Vikram Kumar, Founder-MD at SRV Media.

While it has the potential to redefine the media and entertainment landscape, it also poses significant risks. It is important for the media and entertainment sectors to harness the technology’s power responsibly and ethically. To do that, he suggests a comprehensive approach:

  1. Get Explicit Permission: The first step in using deepfake technology is to always seek permission before duplicating someone’s likeness. This practice respects privacy and helps prevent misuse of an individual’s image. Clear consent forms the basis of responsible use of deepfakes.
  2. Robust Content Moderation: Combining advanced automated systems with human oversight is essential for identifying and addressing harmful or misleading deepfake content. Trust and Safety teams can be set up specifically for AI-created content, with guidelines that encourage ethical behaviour and discourage any form of harassment, deceit or discrimination.
  3. Establishing Certification for Creators: Introducing certification programmes will go a long way in promoting ethical guidelines and best practices among creators of deepfakes. These programmes must also teach creators about responsible use of the technology, the potential harms of its misuse, and the means through which fake videos can be identified, thus building trust across the industry at large.
  4. Advanced Technological Solutions: Utilise state-of-the-art technologies to identify and reduce the dangers of deepfakes. Creators can maintain transparency by using digital signatures and watermarks, while AI algorithms can identify and authenticate deepfake content. Constant innovation is needed to keep up with the evolution of deepfake methods.

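The “invisible watermark” idea mentioned above can be illustrated with a classic least-significant-bit (LSB) scheme: a short tag is hidden in the lowest bit of successive pixel bytes, leaving the image visually unchanged but machine-readable. This is a simplified sketch of the concept only; production watermarks use far more robust, standardised schemes designed to survive compression and editing.

```python
# Minimal LSB watermarking sketch on raw pixel bytes. The two-byte
# "AI" tag is a hypothetical provenance marker, for illustration only.

WATERMARK = b"AI"  # hypothetical tag marking AI-generated content


def embed(pixels: bytearray, mark: bytes = WATERMARK) -> bytearray:
    """Hide `mark` in the lowest bit of successive pixel bytes (MSB first)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out


def extract(pixels: bytearray, length: int = len(WATERMARK)) -> bytes:
    """Reassemble `length` hidden bytes from the lowest bits of the pixels."""
    mark = bytearray()
    for b in range(length):
        value = 0
        for i in range(8):
            value = (value << 1) | (pixels[b * 8 + i] & 1)
        mark.append(value)
    return bytes(mark)
```

Because only the lowest bit of each byte changes, every pixel value shifts by at most 1, which is why such a mark is invisible to viewers yet trivially readable by detection tools, and equally why naive schemes like this one are easy for bad actors to strip, motivating the tamper-evident provenance records discussed earlier.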
Rohan Naterwalla, Executive Creative Head, Punt Creative, cites the example of John Connor collaborating with Skynet (from the Terminator franchise) to make the point that deepfakes are not the apocalypse: the topic is important and deserves a nuanced conversation, but it is not the end of the world.

“Yes, “deepfake” is the latest buzzword in town. Yes, like any new tech, there are already a billion contrarian think pieces that delve deep into this subject where most ‘experts’ tend to either have a singularly utopian or singularly dystopian opinion about this nascent technology. And no, this topic needn’t be the second coming of John Connor collaborating with Skynet. Still, it merits a nuanced conversation that debates the philosophical ramifications of redefining the word “identity” as we know it,” says Naterwalla.

According to John Paite, Chief Creative Officer, India, Media.Monks, there is no easy answer to this question. However, he adds, certain measures can be taken to prevent the misuse of deepfakes, from the developer’s end to the receiver’s. “Governments need to enact regulations with clear definitions and penalties for misuse, especially in electoral contexts. Embedding visible or invisible watermarks can help viewers identify AI-generated content. Additionally, public awareness campaigns and the development of AI detection tools are essential to educate the public and facilitate the identification of falsified content. Collaboration among tech firms to establish ethical guidelines and incorporating media literacy are also vital steps to mitigate the impact of deepfakes during sensitive periods such as elections,” he says.

“The development of deepfake technology has accelerated with advancements in AI. This has made it easier to create high-quality deepfake content, emphasising the need for guidelines and rules to promote its responsible use. Industry-wide measures such as the development of detection tools, ethical guidelines, legal frameworks, and public education are essential. On an individual level, it is crucial to exercise caution when encountering content with unfamiliar faces and to verify its authenticity through multiple sources,” says Aakash Goplani, Account Director, SoCheers.

(Tomorrow: Part 2 of this report will delve into how the average person can become more adept at identifying deepfakes, the potential positive applications of deepfakes, especially in the fields of marketing and advertising, and more)
