
Navigating Regulatory Challenges in Indian AI Development


Author: Abhisht Chaturvedi, Research Analyst, Insights International




On March 1, 2024, the Ministry of Electronics & Information Technology (hereafter, the Ministry), Government of India, issued an advisory aimed at big tech firms such as Google and Adobe that are building their own AI; startups were explicitly excluded. Under the advisory, any firm wishing to release its AI into the public domain must first seek the government's permission.


Additionally, if a firm wants to release its AI into the public domain while it is still under development, it must include a disclaimer stating that the AI is not yet fully reliable. These two requirements, however, have sparked discontent among many firms working on their own AI, who believe they could be a significant obstacle to India competing on the global technology stage.

 

Challenges Faced by Big & Small Tech Firms Due to Government Regulations


While the advisory aims to prevent malpractice in AI, it may impose logistical challenges and regulatory burdens on big tech firms. The process of obtaining government permission could be time-consuming and bureaucratic, potentially delaying product launches and hindering innovation.


For big tech firms, the requirement to seek government permission before deploying underdeveloped AI solutions, together with the mandate to include a disclaimer about the AI's unreliability, could have significant financial implications. The permission process may introduce bureaucratic delays and administrative costs, potentially affecting investment decisions and project timelines. Furthermore, the advisory may discourage small firms from making significant advancements in the future: stringent regulatory requirements could create barriers to entry, making it difficult for small firms to compete with established players.


Additionally, doubts about the market acceptance of underdeveloped AI solutions may undermine the financial viability of AI projects, leading to cautious investment strategies in India's evolving tech landscape. Such concerns about compliance and acceptance may also deter small firms from investing in AI research and development, limiting their ability to innovate and achieve breakthroughs.


For small tech firms, navigating these requirements could be particularly challenging. Seeking government permission and adhering to disclaimer mandates may prove daunting for organizations with limited resources and expertise. Moreover, the stigma attached to underdeveloped AI may deter potential users and investors, affecting a firm's growth and viability.

 

Benefits and Challenges of AI Disclaimers in Ensuring Credibility and Transparency

 

As for the disclaimer requirements, they may raise concerns about the credibility and market acceptance of underdeveloped AI solutions. However, the disclaimers could benefit the public as well as AI developers themselves. There have been cases where machine learning systems and large language models, instead of being trained solely on human-produced data, inadvertently ingest AI-generated data as well. This can introduce distortions and discrepancies that compound over time, making the models increasingly erroneous and even more prone to hallucination. A label that clearly identifies an output as AI-generated could therefore go a long way: it would help the public distinguish AI-generated from human-generated content, and it would help other AI developers keep synthetic data out of their training sets.
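To make the point concrete, the sketch below shows how such a label could be used downstream. It is a minimal illustration in Python; the record structure and the "source" field are assumptions made for this example, not part of the Ministry's advisory.

    def filter_human_data(records):
        """Keep only records not labelled as AI-generated, so that a model
        is not retrained on its own (or another model's) output."""
        return [r for r in records if r.get("source") != "ai_generated"]

    # Hypothetical corpus in which every record carries a "source" label.
    corpus = [
        {"text": "A paragraph written by a person.", "source": "human"},
        {"text": "Synthetic text produced by a model.", "source": "ai_generated"},
    ]

    training_data = filter_human_data(corpus)
    print(len(training_data))  # 1 -- the AI-generated record is excluded

The design is deliberately simple: if producers attach the label reliably, consumers need only a single filter to keep synthetic data out of future training runs.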

 

Current Situation and Future Directions


In response to criticism from both local and global entrepreneurs and investors, the Ministry of Electronics and IT released an updated version of the advisory on March 15 that no longer mandates government approval before launching or deploying AI models in the Indian market. Instead, firms are advised to label under-tested and unreliable AI models to inform users about their potential fallibility or unreliability.


The Ministry stated earlier this month that while the advisory is not legally binding, it signals the future direction of regulation, and compliance is expected.

The advisory underscores that AI models must not be used to disseminate content that is unlawful under Indian law, and must not enable bias, discrimination, or threats to the integrity of the electoral process. Intermediaries are also encouraged to employ "consent popups" or similar mechanisms to transparently notify users about the potential unreliability of AI-generated output.


The Ministry maintains its focus on ensuring the easy identification of deepfakes and misinformation, advising intermediaries to label or embed content with distinct metadata or identifiers. However, it has removed the requirement for firms to develop a method for identifying the "originator" of specific messages.
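As a rough illustration of that recommendation, the sketch below wraps a model's output with identifying metadata. The field names, label scheme, and disclaimer text are assumptions chosen for this example; the advisory does not prescribe any specific format.

    import json
    import uuid
    from datetime import datetime, timezone

    def label_ai_output(text, model_name):
        """Attach metadata marking a piece of content as AI-generated,
        including a distinct identifier as the advisory suggests."""
        return {
            "content": text,
            "metadata": {
                "ai_generated": True,
                "model": model_name,
                "content_id": str(uuid.uuid4()),  # distinct identifier
                "created_at": datetime.now(timezone.utc).isoformat(),
                "disclaimer": "This output is AI-generated and may be unreliable.",
            },
        }

    print(json.dumps(label_ai_output("A generated summary...", "example-model"), indent=2))

Embedding the identifier alongside the content, rather than inside it, keeps the original output intact while still allowing platforms to trace and flag AI-generated material.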


Despite these challenges, there are avenues for small tech firms to thrive amidst regulatory constraints. By leveraging agile development practices and focusing on iterative improvements, these firms can demonstrate their commitment to addressing AI's developmental challenges. Additionally, fostering open communication with users about the AI's limitations can help build trust and manage expectations effectively.


Ultimately, it is imperative for the Ministry and top firms to consult with one another, and even more important for the Ministry to consult with smaller firms, because they hold the potential to grow in the future. The aim should be to reach common ground so that regulation neither impedes India's journey to the global technology stage nor compromises the reliability of AI for the public.


About the Author:

Abhisht is a Research Analyst at Insights International. His research interests include tech policy, media, and communications.



