Balancing Innovation and Rights: Regulating AI Algorithms in Democracies

Pallepati Sai Abhijeet Rao

The field of artificial intelligence has seen remarkable expansion in the past decade, building on a rich legacy of earlier research and development. Much of this expansion can be attributed to improvements in computing capacity and the availability of large datasets. Among the most prominent applications of AI are large language models (LLMs) and other generative applications, which have become extremely popular since the launch of ChatGPT. LLMs are software programs designed to comprehend and produce natural language after being trained on vast amounts of data.

Over the last few years, the number of generative AI applications, along with their usage and accessibility, has only increased, resulting in a massive rise in AI-generated content produced by increasingly complex algorithms. The market for generative AI is expected to exceed $1 trillion within the decade (Bloomberg L.P. 2023). While generative AI has streamlined the automation of repetitive tasks and enhanced productivity, it also poses significant challenges—chief among them, its potential to facilitate the spread of misinformation.

Artificial Intelligence and Misinformation

While misinformation is as old as human civilisation, what makes it different now is the immense capacity of different actors to spread misinformation with the aid of high-capacity generative AI models (Shin, n.d.), coupled with the immense influence and reach of social media companies, which have been accused of profiting from misinformation and polarisation (Sabga 2022). Artificial intelligence companies, in turn, have been accused of copyright infringement (Brittain 2025), generating harmful content (Davies 2025) and propagating existing biases (Villar 2025).

Governments, on the other hand, have faced numerous obstacles in regulating social media companies in the era of artificial intelligence, including a lack of technical capacity, the slow pace of regulatory development and the need to protect freedom of expression and trade.

Moreover, such challenges exist in the backdrop of geopolitics, with countries like the United States, China and India engaged in a global AI race, like the space race between the Soviet Union and the US; however, this time there is extensive participation of private sector companies, which are often monopolies (Zhang, Khanal, and Taeihagh 2025). Thus, countries around the globe have adopted different approaches to regulating AI based on their unique contexts and national interests.

Thus, a fundamental policy problem arises: how can the State regulate the deployment and potential impact of artificial intelligence-based algorithms while ensuring that the fundamental rights of citizens, research and development, and economic and strategic interests are protected?

Chinese Approach to Regulating AI

One specific approach to regulating artificial intelligence, implemented by China, is the mandatory registration of algorithms that provide recommendations, influence public opinion or drive engagement on social media. This registry is maintained by the Cyberspace Administration of China (CAC) along with the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation (China Law Translate 2022).

This registry forms an important part of three key regulations implemented by the Chinese government to govern artificial intelligence (Sheehan 2023).

  • Provisions on the Management of Algorithmic Recommendations in Internet Information Services
  • Provisions on the Administration of Deep Synthesis Internet Information Services
  • Interim Measures for the Management of Generative Artificial Intelligence Services

These regulations also mandate regular algorithm audits, the labelling of synthesised content, disclosure of training datasets and ensuring the explainability of algorithms. However, these regulations also call on companies to ensure that their algorithms do not promote “negative or harmful information”, essentially calling for self-censoring outputs generated by algorithms.

Thus, the aim of this article is to examine if such regulatory tools can be implemented in India and other democratic countries while respecting fundamental rights and advancing innovation.

Comparative Democratic Approaches

The United States recently rescinded all regulations related to artificial intelligence products and services, believing that such regulations would endanger American dominance in the industry (Associated Press 2025); however, the effects of such a drastic step remain to be seen.

The European Union has governed AI primarily through the EU AI Act, along with other key legislation such as the EU General Data Protection Regulation; however, the Act has been criticised for increasing the compliance burden on technology companies, which could hinder Europe’s competitiveness (Reuters 2025).

India does not currently have a specific law governing AI; however, the National Strategy for Artificial Intelligence has called for collaboration with industry to develop sector-specific AI regulations (NITI Aayog 2018).

Possible Value Conflicts

This article analyses algorithm registries as a potential regulatory solution to this policy problem by examining different stakeholder perspectives and identifying their underlying assumptions. Such registries would cover not only how algorithms process the inputs provided by users but also the datasets on which they were trained.

The policy problem highlights critical value conflicts, such as:

  • Security vs. freedom of expression
  • Transparency vs. intellectual property rights
  • Economic freedom vs. public interest

Stakeholder Perspectives

Cyberspace Administration of China

The Chinese government implemented such regulation believing that it would ensure state control over the development and deployment of AI-enabled algorithms and that “core socialist values” are not violated.

A possible assumption is that, without adequate supervision, such algorithms can create law and order problems, and that the government must have complete control over the information consumed and produced by internet users in cyberspace.

However, governments recognise the importance of promoting innovation while also acknowledging their limited technical capacity to effectively regulate digital goods and services. Hence, governments endeavour to balance national security with economic development with the underlying assumption that research and development of artificial intelligence is a joint effort between the government and the private sector (Zhang, Khanal, and Taeihagh 2025).

Industry’s Perspective

Across the globe, companies have predictably opposed most forms of binding regulation, pushing instead for industry-led regulation or ethical principles that place limited liability on the industry. Companies also seek to influence the research and development of regulation for their own benefit (Benkler 2019).

The underlying assumption is that regulations create more problems than they solve and would hinder healthy competition in a rapidly changing field. This could be seen in the difficulties companies faced while registering their algorithms with the CAC (Hao 2022):

  • Lack of technical expertise among regulators to understand complex algorithms
  • Concerns about exposure of intellectual property rights
  • Vague definition of “national security” or “social public interest”

Civil Society

Civil society would generally prefer that algorithms be regulated; however, it would also seek to limit the power of governments so that regulation of technology companies does not result in censorship of people or companies.

The underlying assumption is that when governments equip themselves with immense power, they tend to misuse it for state interests rather than to protect people’s interests. In democracies, citizens have an important role to play in regulating big tech companies; hence, they would prefer that information on algorithms and datasets be publicly available, unlike China’s registry system, which is accessible only to the government (O’Shaughnessy 2023).

A registry system for AI-driven algorithms would enable enhanced oversight and address the challenges posed by the black-box nature of algorithmic systems. Regulators would be able to examine an algorithm for its potential to spread misinformation and propagate bias. This would also improve public trust, as citizens would feel reassured that the technologies they use are subject to checks and balances.

However, such regulatory tools also present serious concerns, chief among them the potential negative impact on innovation, as companies might cut back on cutting-edge research. Governments could misuse such tools to censor information. Disclosure of algorithms would also risk exposing intellectual property, further disincentivising innovation. Finally, both the government and the public might lack the technical knowledge to analyse such advanced algorithms meaningfully.

Way Forward

Despite their shortcomings, algorithm registries are useful tools to regulate and hold technology companies accountable; however, such tools must be adapted to work in democratic contexts.

Unlike the Cyberspace Administration of China, which is completely under the control of the executive, such registries in democracies must be administered by an independent agency.

Only those algorithms deemed high risk, as defined by the European Union’s AI Act, should be registered, along with the datasets on which they are trained (Sheehan and O’Shaughnessy 2023). To protect intellectual property, companies could disclose only the intent, purpose and training data of their algorithms while keeping the proprietary code confidential.

The authority should also release a summary of the data submitted to it, along with clear reasons for approving a particular algorithm. This would hold both companies and regulatory agencies accountable to the public, which can result in better products and decisions.

Algorithm registries can be promising tools for ensuring the responsible development and deployment of AI-enabled technologies. However, such regulation must respect democratic values and fundamental rights while ensuring transparency and security. Regulation of this kind would promote innovation aligned with democratic values and the national interest, and responsibly developed AI technologies have the potential to help combat misinformation.
