New Delhi: With more than 60 countries, including India, entering election mode this year, it is vital that we remain vigilant about recent trends in the dynamic digital landscape, especially deepfakes, says Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro.
With the widespread use of generative AI, we face a new and concerning threat: deepfakes.
“Deepfakes have become accessible to everyone, posing a significant risk as these manipulations allow the creation and dissemination of realistic audio and video content featuring individuals saying and doing things they never actually said or did,” emphasised Bartoletti, also the founder of the ‘Women Leading in AI Network’.
The consequences extend beyond the digital realm, as online disinformation and coordination can spill over into real-world violence.
In India, the government has issued an update to its AI advisory, stating that major digital companies no longer need government permission before launching an AI model in the country.
However, big tech companies are advised to label “under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.”
“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” according to the new MeitY advisory.
All intermediaries or platforms must ensure that the use of AI model(s)/LLM/Generative AI, software or algorithms “does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act.”
The digital platforms have been asked to comply with new AI guidelines with immediate effect.
According to Bartoletti, to ensure public safety, companies must take responsibility and implement measures to combat deepfakes and disinformation.
“This includes investing in advanced detection technologies to identify and flag deepfake content, as well as collaborating with experts to develop effective debunking methods,” she noted.
Additionally, promoting media literacy and critical thinking among the public is crucial.
“By taking proactive steps to address the risks of deepfakes, we can protect the integrity of elections and uphold the democratic process,” said Bartoletti.
(IANS)