As the United Nations General Assembly hosts the Summit of the Future in New York, the governance of Artificial Intelligence (AI) is gaining significant attention. The UNGA has approved the Global Digital Compact, which outlines shared values for the global governance of technologies such as AI. However, as with the discourse on AI ethics, the task of converting these broad values into actionable frameworks falls to lawmakers worldwide.
The Compact emphasizes a “balanced, inclusive, and risk-based approach” to AI governance and insists on the “full and equal representation of all nations” in this effort. These ambitious ideals, reminiscent of the Sustainable Development Goals, can be challenging to achieve. Consequently, India must carefully consider its AI interests before choosing a direction.
Currently, India has several options: it can enact comprehensive legislation along the lines of the European Union, adopt a more sector-focused approach like China or the United States, or promote soft law in AI governance. Note that these discussions mainly concern AI integrated into digital services, not robots or autonomous vehicles. Let us examine each approach.
The European Approach
A prominent example of broad legislation is the European Union’s AI Act, which establishes regulations for various AI systems. This “risk-based” law aligns with the principles of the Global Digital Compact: for example, it prohibits AI applications that pose unacceptable risks, such as social scoring, and requires pre-approval for high-risk AI systems.
However, the EU has a history of enacting technology regulation without adequate deliberation. Since 2016, member states have introduced around 100 tech-specific laws and assigned oversight to more than 270 regulators. Even Europe’s own tech companies have voiced discontent over the compliance burdens these regulations impose. Given AI’s potential as a transformative technology, a more nuanced approach is essential, one that avoids a compliance-first mindset and excessive regulatory prescription.
Narrower Legislative Alternatives
China has enacted laws targeting specific activities such as algorithmic recommendations and the generation of synthetic content, including deepfakes. These measures, predictably, add further controls to an already constrained information environment. Meanwhile, California’s controversial AI safety bill has sparked intense debate over whether narrow legislation can regulate AI effectively. The bill mandates stringent safety measures for developers and requires audits of cutting-edge models, raising questions about the merits of increased state control.
Criticism of the California bill parallels the concerns raised about China’s approach: such regulations can be overly restrictive and insufficiently stress-tested. India should be cautious, as laws tailored to specific technologies or markets can take a long time to revise. The Telecommunications Act, 2023, for instance, replaced the Indian Telegraph Act of 1885. Clearly, we cannot afford decades to rectify any missteps.
Adopting Soft Laws and Standards
India should utilize technical standards, which are developed by experts and offer flexibility: standards not embedded in law can adapt swiftly to the fast-evolving AI landscape. The International Organization for Standardization (ISO) is already setting the pace with standards for AI risk management and impact assessment, and India’s Bureau of Indian Standards (BIS) is moving in a similar direction.
However, the private sector and civil society must be involved in standard-setting. The Telecommunication Engineering Centre’s 2022 attempt to establish AI fairness standards faced criticism for being impractical and was ultimately discontinued.
What Steps Should India Take?
India has the chance to advocate for effective global frameworks, with technical standards serving as a key instrument. The country has already invested in international standard-setting for telecom through local bodies such as the Telecommunications Standards Development Society, India (TSDSI), and is beginning to take service standard-setting more seriously.
Moreover, AI deployment by the public sector, especially in critical infrastructure such as airports and public services, deserves greater attention. U.S. President Joe Biden’s 2023 Executive Order on AI offers a useful model: it directs federal agencies, including law enforcement, to ensure that technologies such as facial recognition do not infringe on civil rights. Such measures can help safeguard citizens when AI is integrated into public services.
Government deployment of AI must be governed by stringent safeguards: merely stating values counts for little when the user of an AI system holds disproportionate power over those it affects. India has the opportunity to demonstrate that emerging economies can adopt new technologies successfully. By building public sector capacity to deploy AI safely and establishing sandboxes to test use cases before full deployment, India can contribute to the global public good well beyond digital compacts.
The authors are affiliated with Koan Advisory Group, a consultancy specializing in technology policy. The opinions expressed are personal.
This article is part of ThePrint-Koan Advisory series, which analyzes emerging regulations and policies in India's technology sector.