Hamid Rather

At the 18th G20 Summit in New Delhi, Prime Minister Narendra Modi called upon the bloc to establish a global framework for Artificial Intelligence (AI) governance. He said, “Today, we are witnessing an unprecedented scale and speed in new-generation technologies. Artificial Intelligence (AI) is one such example in front of us. In 2019, we (the G20) adopted the ‘Principles on AI’. Today, we need to take one step further.”

Towards this vision, the NITI Aayog has published a series of papers on the subject of Responsible AI for All. Further, the apex telecom regulator, the Telecom Regulatory Authority of India (TRAI), floated a consultation paper in July this year recommending that the Centre set up a domestic statutory authority to regulate AI in India through the lens of a “risk-based framework”, while also calling for collaboration with international agencies and the governments of other countries to form a global agency for the “responsible use” of AI. India seems to be preparing the ground for establishing a global agency with regulatory oversight over what is segmented as “responsible” or “ethical” AI use cases. The Digital India Bill, 2023, which is expected to replace the Information Technology Act, 2000, may include provisions for regulating digital technologies such as AI and cryptocurrencies.

The International Efforts

The concern for regulation of AI has its advocates in the tech industry, government, and international forums. Sam Altman, the co-founder of OpenAI – the company behind ChatGPT – has called for an international regulatory body for AI, akin to the one overseeing nuclear non-proliferation. The tech giant Microsoft has released a paper titled “Governing AI: A Blueprint for India” on AI governance in the country. The paper proposed regulations prescribing safety and security requirements, followed by deployment for permitted uses in licensed AI data centers, with post-deployment safety and security monitoring and protection. Alarm bells have also been rung by tech leaders such as Elon Musk, Apple co-founder Steve Wozniak, and over 15,000 others, who in April this year called for a six-month pause in AI development, saying labs are in an “out-of-control race” to develop systems that no one can fully control.

The policy response towards AI governance has also varied across jurisdictions. The European Union has taken a predictably tougher stance, proposing a new AI Act that segregates artificial intelligence by use-case scenario, based broadly on the degree of invasiveness and risk. The United Kingdom is on the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to foster, and not stifle, innovation in this nascent field. In the US, the White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights, developed through public consultation. The People’s Republic of China too has floated its own set of measures to regulate AI.

Concerns with AI

The concerns with AI being flagged fall into three broad heads: privacy, system bias, and violation of intellectual property rights. Joanna J. Bryson, professor at Hertie School of Governance, Germany, and an expert on policy and ethical issues of Artificial Intelligence, prescribes, “It is critical to remember that what is being held accountable is not machines themselves but the people who build, own, or operate them—including any who alter their operation through assault on their cybersecurity. It is thus important to govern the human application of technology—the human processes of development, testing, operation, and monitoring.”

Across the globe, countries are grappling with the ethical implications of AI. Take the European Union’s General Data Protection Regulation (GDPR), which champions individual data rights and the principle of transparency; India can draw inspiration from the GDPR to safeguard citizens’ privacy and ensure responsible AI development. AI systems are increasingly making decisions in domains ranging from healthcare to finance, which raises the question: who is responsible when AI fails or causes harm? The United States has opened discussions on AI accountability, emphasizing the need for clear guidelines to determine responsibility, and India should consider similar mechanisms to protect its citizens. Bias in AI algorithms can perpetuate discrimination and inequality. Countries like Canada are addressing this through the Algorithmic Impact Assessment (AIA), which ensures that AI systems are scrutinized for potential bias. India must likewise explore ways to combat bias and promote fairness in AI decision-making.

Data is the lifeblood of AI, and its responsible use is vital. Singapore’s Personal Data Protection Act (PDPA) serves as a blueprint for regulating data use, ensuring that data is collected and handled ethically and securely; India’s Digital India Bill, 2023 should be aligned with such principles. AI also has implications for national security, both in defense and in cybersecurity. China’s approach to AI in defense illustrates the importance of balancing innovation and security, and India must develop a strategic framework that safeguards national interests without stifling innovation. To foster trust in AI, transparency and accountability are paramount. Finland’s “MyData” initiative empowers individuals with control over their personal data, exemplifying a citizen-centric approach from which India can benefit. Finally, the fear of job displacement due to automation is widespread. South Korea’s “Human-Centered AI” approach focuses on human-centric AI development, promoting job creation and supporting displaced workers; India can adopt similar strategies to manage workforce transitions.

Conclusion

AI knows no borders, and international cooperation is essential. Australia’s participation in the Global Partnership on AI (GPAI) showcases how nations can collaborate to set global standards and address shared challenges. India should actively engage in such forums. As India forges ahead in the AI landscape, it’s crucial that our regulatory efforts reflect a balanced approach—one that encourages innovation while safeguarding our values, ethics, and citizens’ interests. By drawing inspiration from global examples and tailoring them to our unique context, we can create a regulatory framework that ensures AI is a force for good in India, shaping a brighter, more equitable future for all.

(This write-up first appeared on 8th October 2023, in my weekly column | Statecraft in the newspaper The Brighter Kashmir)

(Hamid Rather is a public policy professional, entrepreneur, and socio-political activist)