
Make and fake information, artificial intelligence (AI) and its misuse: is there need for a comprehensive law?

An inescapable and handy tool, AI today has the potential to further harm an already internet-saturated world through its capacity to generate fake images and targeted misinformation

India, with its rapid economic growth and the world’s largest population, is witnessing a significant surge in AI adoption. However, the lack of a comprehensive AI policy poses serious dangers, particularly regarding the spread of fake news, deep fakes, and fake videos. This absence of regulation allows malicious actors to operate with impunity, undermining public trust, social stability, and democratic processes. This article examines what can be done to regulate AI to curb misuse.

Election misinformation and AI

Artificial Intelligence is reshaping the world as we know it. From assisting with translation in India’s lower courts to analysing and predicting biological pathways for vaccine development, there is hardly a field in which AI has not found a use case.

The ability of AI to generate lifelike images, voice notes, and deep fake videos has been causing problems, and in a country like India where internet penetration is high, these problems are exacerbated. While fake news and misinformation were rampant before the advent of AI, they have become far harder to deal with now that AI use is commonplace. Recently, audio clips purportedly of NCP (Sharad Pawar) leader and Baramati MP Supriya Sule and Congress leader Nana Patole were released by the Bharatiya Janata Party, alleging their involvement in a financial fraud relating to Bitcoin. Experts assessed these clips as ‘likely to be AI’ generated.

In India, the acceptance of AI and deepfakes is rather unsurprising. For example, during the 2024 General Elections, the Prime Minister himself tweeted an AI-generated meme video of him dancing in a rock-concert-like setup and remarked: “Like all of you, I also enjoyed seeing myself dance. Such creativity in peak poll season is truly a delight. #PollHumour.” This also served as a counter to the West Bengal Police warning users not to share a similar video featuring WB Chief Minister Mamata Banerjee. Fake videos of Hindi cinema actors Ranveer Singh and Aamir Khan campaigning for the Indian National Congress were circulated during the 2024 General Elections. A few members of the Congress IT Cell were arrested for circulating a doctored video of Home Minister Amit Shah in Telangana. These arrests were made under the ordinary criminal laws, the CrPC and the IPC.

Recently, the President of Global Affairs at the technology company Meta was reported as saying that artificial intelligence had only a modest impact on global elections this year across its platforms, including Facebook and Instagram. However, Meta and its platforms—as big as they are—form only a part of the AI ecosystem.

Elon Musk—the billionaire who has openly supported Donald Trump for President in the United States—reposted a fake voice note (generated by another user) in which 2024 presidential candidate Kamala Harris’ voice appears to say things she never actually said. The video has more than 135 million views on X, the Musk-owned social media site.

The use of AI during elections has therefore brought to the fore the issue of regulating deep fakes and other AI-generated misinformation.

What are the policies in India on AI?

There is no single comprehensive policy on Artificial Intelligence in India. India only recently got its Data Protection Act in 2023, highlighting the slow pace at which technology-related laws come into place. The delay was also because the government had to revise its bill in light of the Joint Parliamentary Committee’s report, the pandemic, and other factors. The policy documents on AI in India are all guiding documents or strategies issued by departments. For example, NITI Aayog released a National Strategy for Artificial Intelligence (NSAI) in 2018. Key highlights of the NSAI include India’s vision to position itself as a global “AI Garage” for developing economies by creating scalable AI solutions for common global challenges. It also advocates a three-pronged approach: piloting AI projects in high-priority sectors, building a robust ecosystem for AI innovation, and engaging stakeholders across the public and private sectors. Furthermore, the strategy emphasizes the late-mover advantage, encouraging India to adapt and innovate existing technologies to leapfrog in the global AI landscape.

Other than this, while laws like the Information Technology Act of 2000 and the Digital Personal Data Protection Act of 2023 address certain aspects of data protection and misuse, they fall short of comprehensively addressing the challenges posed by rapidly evolving AI technologies.

Does India need a comprehensive law?

Experts have differing views on this. A recent paper from Carnegie India notes that there is no consensus on the need for comprehensive legislation on Artificial Intelligence. Arguments against it include concerns about stifling innovation, the premature nature of such a law, the rapidly evolving pace of AI, and the effectiveness of existing laws like the IT Act. However, some advocate a dedicated AI law to address novel risks, protect fundamental rights, ensure accountability, and align with global standards. Experts instead suggest alternative approaches, such as self-regulation, co-regulation, and sector-specific regulation.

While such disagreement has accompanied every technology-related law, AI is one sector in which even industry leaders are open to regulation, provided it does not stifle innovation. Therefore, the larger interests of the people, and the need to serve them, should prevail over the superficial ‘need to preserve innovation’ that is often thrown around as an argument against any measure to have science benefit the masses.

What can be done about Fake News?

Addressing AI-generated fake news is essential for preserving democracy and societal harmony. Key strategies focus on transparency, public awareness, technological interventions, regulation, and collaboration.

Transparency and Accountability

Campaigns and officials must disclose AI use, including algorithms, data, and objectives, to ensure public scrutiny. Independent oversight bodies should monitor AI in elections, enforce ethical practices, and handle violations efficiently.

Public Awareness and Media Literacy

Comprehensive digital literacy campaigns can empower voters to identify AI-generated content. Supporting fact-checking organizations and collaborating with media outlets can counter misinformation and encourage responsible reporting.

Technological Interventions

Developing AI tools to detect and label synthetic content is critical. Widespread use of watermarks and labels for AI-generated media can help distinguish real from fake content, fostering trust in information sources.
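To make the watermarking-and-labeling idea concrete, here is a minimal sketch of one form of provenance labeling: a publisher attaches a keyed cryptographic tag to a piece of media, and anyone holding the key can later check whether the content or its label has been altered. All names and the key-handling scheme here are hypothetical illustrations, not any specific standard (real-world efforts such as C2PA use signed metadata rather than a shared secret).

```python
import hmac
import hashlib

def label_content(content: bytes, secret_key: bytes) -> dict:
    # Attach a provenance label: a keyed hash that only the
    # key holder could have produced for these exact bytes.
    tag = hmac.new(secret_key, content, hashlib.sha256).hexdigest()
    return {"generator": "example-ai-model", "provenance_tag": tag}

def verify_label(content: bytes, label: dict, secret_key: bytes) -> bool:
    # Recompute the tag; a mismatch means the content or the
    # label was tampered with after publication.
    expected = hmac.new(secret_key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["provenance_tag"])

key = b"publisher-secret-key"
media = b"bytes of an AI-generated image or video frame"

label = label_content(media, key)
print(verify_label(media, label, key))            # True: label matches content
print(verify_label(b"tampered bytes", label, key))  # False: content was altered
```

A scheme like this only establishes who labeled the content, not whether the content is true; it is one building block among the detection and labeling tools the strategy above calls for.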

Regulatory Frameworks

New or updated laws must address gaps in managing AI-generated fake news. A balanced, innovation-friendly approach is crucial. Ethical AI development guidelines should promote accountability among developers and researchers.

AI Governance Body

A dedicated AI governance body can establish comprehensive guidelines, monitor AI use across sectors, and address emerging challenges. This reduces reliance on entities like MeitY and ensures specialized oversight and proactive regulation. This body should also be independent enough to regulate the government’s own use of AI, since a government that uses AI for data analysis becomes a formidable force when it chooses to move against civil rights movements and activists.

Multi-Stakeholder Collaboration

AI companies must adopt self-regulation and ethical practices. Governments, tech firms, researchers, and civil society should collaborate on shared initiatives, leveraging expertise to develop effective, scalable solutions.


Conclusion

AI is here to stay, and its impact on our lives will only grow with time. While its potential for innovation and progress is undeniable, so too are the risks it brings, especially when it comes to misinformation and deep fakes. It is no longer a question of whether we should address these challenges but how quickly and effectively we can do so. Governments need to step up and establish independent, rule-of-law-based mechanisms to regulate AI while fostering innovation. Striking this balance is crucial—not just for technological advancement, but for safeguarding democracy, societal trust, and individual rights in an AI-driven world.

(The author is a legal researcher with the organisation)

