
    The Ministry of Information Technology has eliminated the necessity for approval when launching new AI models.

    The Ministry of Electronics and Information Technology (MeitY) has revised its earlier artificial intelligence advisory to the largest social media companies operating in the country, removing a provision that required intermediaries and platforms to obtain government approval before deploying “under-tested” or “unreliable” AI models.

    The IT ministry announced that the advisory issued on March 15 supersedes the previous one from March 1. The earlier requirement for platforms to seek “explicit permission” from the Centre before deploying “under-testing/unreliable” AI models has been omitted from the new advisory.

    According to Moneycontrol, which reviewed a copy of the revised advisory issued by the IT ministry, “The advisory is issued in suppression of advisory.. dated March 1, 2024.”

    Under the updated version, intermediaries are no longer required to submit an action taken-cum-status report, though compliance remains mandatory with immediate effect. The obligations in the revised advisory are largely unchanged, but the language has been softened.

    As per the March 1 advisory, digital platforms were obligated to seek prior approval before launching any AI product in India. Sent to digital intermediaries on March 1, the directive required platforms to label under-trial AI models and ensure that no unlawful content was hosted on their sites. The MeitY advisory also warned of penal consequences for non-compliance with the directive.

    However, in the revised advisory, the government stated that under-tested or unreliable AI products should be labeled with a disclaimer indicating that outputs generated by such products may be unreliable.

    The advisory, as cited by Moneycontrol, said, “Under-tested/unreliable Artificial Intelligence foundational model(s)/ LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labeling the possible inherent fallibility or unreliability of the output generated.”

    The new advisory emphasizes that AI models should not disseminate content that violates Indian laws. Platforms must ensure that their AI algorithms are unbiased and do not compromise electoral integrity. They should employ consent pop-ups to caution users about unreliable output.

    Reportedly, the updated advisory also addresses deepfakes and misinformation, instructing platforms to label content or embed it with unique identifiers. This applies to audio, visual, text, and audio-visual content, making potential misinformation and deepfakes easier to identify, although the advisory does not define “deepfake.”

    MeitY also requires labels, metadata, or unique identifiers to indicate if content is artificially generated or modified, attributing it to the intermediary’s computer resource. Additionally, if users make changes, the metadata should identify them or their computer resource.

    The communication was sent to eight major social media intermediaries: Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), Twitter, Snap, Microsoft/LinkedIn (for OpenAI), and ShareChat. Adobe, Sarvam AI, and Ola’s Krutrim AI did not receive the advisory.

    The March 1 advisory faced criticism from AI startups, with many founders expressing concerns about its potential impact on generative AI startups.

    “Bad move by India,” remarked Aravind Srinivas, CEO of Perplexity.AI.

    Similarly, Pratik Desai, founder of Kissan AI, which offers AI-backed agricultural assistance, labeled the move as demotivating.

    Addressing criticism of the advisory at the time, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said, “[the] advise to those deploying lab-level/undertested AI platforms onto the public Internet and that cause harm or enable unlawful content – to be aware that platforms have clear existing obligations under IT and criminal law. So the best way to protect yourself is to use labeling and explicit consent and if you’re a major platform take permission from the government before you deploy error-prone platforms.”

    The minister reassured that the country is supportive of AI given its potential for expanding India’s digital and innovation ecosystem.
