The Indian government spoiled the holiday festivities for many last year by introducing, on Christmas Eve, draft rules that would force a whole range of internet companies to remove content from their platforms. The regulations are intended to curb the misuse of social media and stop the spread of fake news that sparked unrest and violence earlier this year, but internet companies and privacy advocates say the new measures are a threat to free speech.
The government is proposing to amend Section 79 of India’s IT Act, which would require internet companies to take down content deemed inappropriate by authorities. If a company receives a complaint from a law enforcement agency, the firm would be required to trace and report the origin of that content within 72 hours and to disable that user’s access within 24 hours. Implementing such a measure would effectively break the end-to-end encryption of platforms like Facebook-owned WhatsApp.
Another recommendation would require internet companies to purge their platforms of “unlawful” content, although what that actually constitutes has yet to be clearly defined, prompting concerns that the provision’s vagueness leaves it open to abuse.
Amba Kak, a lawyer and public policy advisor in India for Mozilla, maker of the Firefox web browser, says the new provisions will turn internet companies into censors and undermine users’ security. She provided an update on the debate through an email interview, just over a week before the deadline for comments on the proposed amendment.
BAHREE: What is the proposed amendment to the intermediary laws about? What does it say?
KAK: The new rules proposed by the Ministry of Electronics and Information Technology (MEITY) dilute the legal provision which ensures that internet companies generally have no obligations to actively censor content. Under the new rules, all “intermediaries” are required to “proactively” purge their platforms of “unlawful” content or else potentially face criminal or civil liability. Intermediary is defined very broadly to include just about any online company ranging from social media and e-commerce platforms to internet service providers.
The rules also require services to make information about the senders of content and messages available to government agencies for surveillance purposes. This is a sharp blow to end-to-end encryption technologies, used to secure most popular messaging, banking, and e-commerce apps today.
Why is this of concern?
This is a concern because it turns online companies into censors and undermines security. We do need to find ways to hold social media platforms to higher standards of responsibility, and tackling harmful content on the internet is no doubt a challenging task. But the rules end up putting even more trust in these companies to decide what content is appropriate and what isn’t, and they haven’t earned that trust.
The rules don’t define what counts as “unlawful” content, but this would likely include all content that is illegal under various laws in India. And because of this overly broad definition, companies will be forced to make judgement calls in the absence of context, and will be incentivized to “take down first, think later,” or to prevent such content from surfacing at all, to protect themselves at the expense of users.
Instead of acknowledging this complexity and scale, the draft rules direct companies to rely on technology to fix the problem. They promote the deployment of “automated tools to filter content.” We would be very cautious about treating automated tools as a silver bullet for illegal content: this essentially prioritizes speed over the accuracy of content removals.
Also, a broad definition of “intermediaries” goes far beyond social media companies. The government has justified this move based on “instances of misuse of social media by criminals and anti-national elements.” For entities like internet service providers, browsers and operating systems, these content control obligations seem entirely misplaced and inapplicable, but they still create a legal risk that can’t be ignored.
There’s a tendency to see this issue as a fight between big tech companies and the government, but, above all, this is a threat to internet users, as it will inevitably lead to over-censorship and chill free expression.
Where has the matter reached?
The rules are still at the draft stage. MEITY [the Ministry of Electronics and Information Technology] has opened them up for consultation, with stakeholder comments due by 31st January. It will publish all comments by 4th February and close the consultation on the 14th, after allowing time for stakeholders to respond to one another’s submissions.
While these rules could be notified by the government in a hurry, it’s clear that they will require a complete rethink, and one that takes into account stakeholder feedback.
What are the changes you (not just Mozilla, but the wider industry as a whole) would like to see in the proposal?
Besides the concerns already mentioned, some of the others voiced by Mozilla, members of civil society and industry are:
One-size-fits-all obligations for all types of online services and all types of unlawful content are arbitrary and disproportionately harm smaller players.
Requiring services to decrypt encrypted data weakens overall security and contradicts the principles of data minimization, endorsed in MEITY’s draft data protection bill.
Disproportionate operational obligations, like mandatorily requiring internet companies to incorporate in India, are likely to spur market exit and deter market entry for SMEs.
What are the chances the amendment will be passed in its current form? How soon?
I don’t expect these rules to be notified in their current form. The backlash from a diversity of stakeholders has hopefully given the government enough reason to pause. This issue requires detailed deliberation over the coming months, and India has an opportunity to take a measured approach that could be instructive to governments grappling with this issue in other countries as well.