
18 Mar 2026 · Source: The Indian Express
5 min read
Polity & Governance · International Relations · Science & Technology · NEWS

US Counter-Terror Chief Discusses Social Media Content Regulation and India's IT Act

US counter-terror chief discusses social media content regulation, highlighting India's IT Act and its implications for free speech.

UPSC-Prelims · UPSC-Mains


Quick Revision

1. The US counter-terror chief commented on the challenges of regulating social media content.

2. India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, were referenced in the discussion.

3. The discussion focused on balancing national security concerns with freedom of expression.

4. Content moderation, particularly regarding terrorist content, child sexual abuse material (CSAM), and misinformation, was a key topic.

5. India's IT Act, 2000, and the IT Rules, 2021, are the primary legal instruments for online content regulation.

6. Section 69A of the IT Act empowers the government to block public access to content.

7. Rule 3(1)(b) of the IT Rules, 2021, requires intermediaries to exercise due diligence.

8. The Ministry of Electronics and Information Technology (MeitY) is the nodal ministry for these rules.

Key Dates

2000: Enactment of the Information Technology Act.
2021: Promulgation of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules.

Key Numbers

Section 69A: Empowers the government to block content.
Section 79: Provides safe harbor to intermediaries.
Rule 3(1)(b): Requires intermediaries to exercise due diligence.

Visual Insights

India's Response to Online Extremism & Key Incidents (March 2026)

This dashboard highlights key statistics and incidents related to online content regulation and cyber-enabled terrorism, demonstrating the challenges faced by democracies like India in managing digital content.

  • URLs Blocked (2025): 9,845. India blocked these URLs promoting radicalization and terrorist propaganda, showcasing an active government response to online extremism.
  • Red Fort Attack (Nov 10, 2025): Cited as an example where extremist networks systematically used social media to incite violence, underscoring the need for content regulation.
  • Bondi Beach Attack (Dec 14, 2025): Another incident where extremist networks used social media for incitement, highlighting the global nature of cyber-enabled terrorism.

Global Incidents of Online Extremism (2025)

This map highlights key locations where social media was weaponized for extremist attacks in 2025, as mentioned in recent reports, underscoring the transnational nature of cyber-enabled terrorism.


📍 Delhi, India (Red Fort Attack)
📍 Sydney, Australia (Bondi Beach Attack)

Mains & Interview Focus

Don't miss it!

The ongoing global debate on regulating social media content, as highlighted by the US counter-terror chief's comments on India's IT Act, 2000, and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, underscores a critical policy dilemma. Democracies worldwide grapple with balancing national security imperatives against the fundamental right to freedom of expression. India's legislative framework represents a proactive, albeit controversial, attempt to navigate this complex terrain.

India's approach, particularly through Section 69A of the IT Act, grants the government significant power to block content deemed detrimental to national security or public order. This provision has been invoked frequently, often without transparent due process, raising concerns about potential misuse. The subsequent IT Rules, 2021, further shifted the onus onto social media intermediaries, mandating proactive content moderation and stringent compliance mechanisms, a departure from the earlier 'safe harbor' principle under Section 79.

This regulatory shift, requiring platforms to appoint compliance officers and remove objectionable content within tight deadlines, has been met with considerable resistance from tech companies and civil society groups. Critics argue that such mandates could lead to over-censorship and stifle legitimate dissent, effectively turning private platforms into state censors. The absence of an independent oversight body for content moderation decisions further exacerbates these concerns.

Comparing India's stance with the US, which lacks a single overarching law for content regulation, reveals differing philosophies. The US relies more on platform self-regulation and judicial interpretation, while India has opted for a more prescriptive legislative framework. This divergence reflects varying cultural contexts, threat perceptions, and interpretations of digital sovereignty. However, both nations acknowledge the shared challenge of combating terrorist content, child sexual abuse material (CSAM), and misinformation.

Moving forward, India's policy must evolve to incorporate greater transparency, independent judicial review of content blocking orders, and robust grievance redressal mechanisms that are truly accessible to users. A purely state-centric approach risks eroding democratic values. Collaborative frameworks involving government, industry, and civil society, perhaps inspired by global best practices, could offer a more sustainable path to responsible digital governance.

Exam Angles

1. Polity & Governance: Examination of the IT Act, 2000, and its amendments, especially Section 69A, and their implications for fundamental rights like freedom of speech and expression (GS Paper II).

2. Internal Security: Analysis of the threat of online radicalization, 'white collar terrorism,' and the role of social media in orchestrating terrorist activities (GS Paper III).

3. Science & Technology: Understanding the challenges posed by AI-generated misleading content and encrypted communication platforms in content moderation (GS Paper III).

4. International Relations: Regional cooperation in countering cross-border digital radicalization and online terror networks (GS Paper II).


Summary

The US counter-terror chief discussed how difficult it is for countries like India to control harmful content on social media without limiting people's freedom to speak. India uses its IT laws, like the 2021 rules, to make social media companies responsible for removing illegal content, a challenge many democracies face globally.

The Centre is set to expand the authority to issue content blocking orders on social media platforms, allowing ministries such as Home Affairs, External Affairs, Defence, and Information and Broadcasting to exercise this power under Section 69A of the Information Technology (IT) Act, 2000. This amendment, necessitated by the proliferation of AI-generated misleading content, will significantly impact tech platforms like Instagram, Facebook, and YouTube, which may receive takedown orders from a wider array of government agencies. The scope of this move could also extend to regulators like the Securities and Exchange Board of India (SEBI), enabling them to send direct content blocking orders to tech companies.

This development comes amidst increasing concerns over the weaponization of social media by terrorist organizations for radicalization and orchestrating attacks across the Indian subcontinent. A report by Eurasia Review highlights incidents such as the Red Fort attack on November 10, 2025, and the Bondi Beach attack on December 14, 2025, as examples of extremist networks systematically using digital propaganda, encrypted messaging platforms, and online psychological manipulation to recruit and mobilize vulnerable individuals. The report notes that groups like the Islamic State (IS) and Pakistan-based terror outfits such as The Resistance Front and People's Anti-Fascist Front are actively leveraging social media's low-cost, decentralized, fast, and globally connected nature for propaganda.

Investigators cited in the report described the Red Fort incident perpetrators as radicalized online, terming it "white collar terrorism" due to many being well-educated individuals. These groups often use encrypted communication platforms like Threema, making forensic tracking difficult. In response to this growing threat, India reportedly blocked 9,845 URLs promoting radicalization and terrorist propaganda in 2025 alone. This expansion of content blocking powers is crucial for India's internal security and digital governance, directly impacting fundamental rights and the regulatory landscape for technology companies, making it highly relevant for UPSC General Studies Paper II (Polity & Governance) and Paper III (Internal Security, Science & Technology).

Background

The Information Technology (IT) Act, 2000, is India's primary law dealing with cybercrime and electronic commerce. Section 69A of the IT Act empowers the Central Government to issue directions to block public access to any information through any computer resource. This power can be exercised in the interest of the sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order, or for preventing incitement to the commission of any cognizable offence relating to these. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, further elaborate on the due diligence to be observed by intermediaries, including social media platforms, and the government's power to issue blocking orders.

Historically, content regulation on the internet has been a contentious issue, balancing the need for national security and public order with fundamental rights like freedom of speech and expression guaranteed under Article 19(1)(a) of the Indian Constitution. The government's ability to issue takedown notices has often been challenged, leading to judicial scrutiny and debates over transparency and accountability. The current move to expand blocking powers to more ministries reflects an evolving approach to digital content management in the face of new challenges.

The original intent of Section 69A was to provide a legal framework for emergency blocking of content. However, the proliferation of sophisticated digital threats, including AI-generated misleading content and organized online radicalization, has necessitated a re-evaluation of the existing regulatory mechanisms and the scope of their application across various government departments.

Latest Developments

In recent years, the threat of online radicalization has intensified, with reports indicating that social media platforms are being actively weaponized by terrorist organizations. The Eurasia Review report highlighted how groups like the Islamic State (IS) have strengthened their digital operations despite territorial losses, expanding their online networks to countries like India and Bangladesh by 2024 to influence vulnerable populations through secure communication channels. Incidents like the Red Fort attack in November 2025 and the Bondi Beach attack in December 2025 demonstrate the systematic use of digital propaganda and encrypted messaging platforms for recruitment and mobilization.

Governments across the region are responding with stronger regulations. India, for instance, blocked 9,845 URLs promoting radicalization and terrorist propaganda in 2025 alone. Countries like Australia, Malaysia, Singapore, and Indonesia have also introduced laws to counter online extremism. The rise of cyber-enabled terrorism, particularly in regions like Jammu and Kashmir, where online recruitment drives connect youth to extremist networks, underscores the urgent need for robust cybersecurity frameworks and closer intelligence collaboration.

Looking ahead, the Centre's move to allow more ministries to issue content blocking orders under Section 69A of the IT Act, 2000, signifies a proactive stance against evolving digital threats. This expansion, particularly to include regulators like SEBI for specific content, indicates a comprehensive strategy to tackle not only national security concerns but also other forms of online misinformation and manipulation, including those impacting financial markets.


Frequently Asked Questions

1. Given the existing Section 69A, what new challenge has prompted the government to expand content blocking authority to more ministries?

The primary driver for expanding content blocking authority is the rapid proliferation of AI-generated misleading content. This new form of content, often sophisticated and difficult to detect, poses significant challenges, especially when weaponized by terrorist organizations for online radicalization.

Exam Tip

Remember that while Section 69A existed, the *expansion of authority to more ministries* is a direct response to *AI-generated misleading content* and *online radicalization*. This 'why now' aspect is crucial for Mains answers.

2. Which specific section of the IT Act empowers the government to block online content, and what is the recent proposed change regarding its implementation?

Section 69A of the Information Technology (IT) Act, 2000, empowers the Central Government to issue directions to block public access to any information through any computer resource. The recent proposed change is to expand this authority, allowing ministries such as Home Affairs, External Affairs, Defence, and Information and Broadcasting, and potentially regulators like SEBI, to directly issue content blocking orders to social media platforms.

Exam Tip

For Prelims, remember 'Section 69A' is the key. A common trap might be to confuse it with Section 79 (safe harbor for intermediaries) or Rule 3(1)(b) (due diligence). The *expansion to multiple ministries* is the new development.

3. How do the IT Act, 2000, and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, collectively regulate online content in India, and what is their distinct scope?

The IT Act, 2000, is the foundational law dealing with cybercrime and electronic commerce, with Section 69A providing the overarching power to block content. The IT Rules, 2021, on the other hand, are subordinate legislation that specify the due diligence requirements for intermediaries and establish a grievance redressal mechanism, making them more detailed on operational aspects of content moderation.

  • IT Act, 2000: Primary legislation, provides broad powers like content blocking (Section 69A) and safe harbor for intermediaries (Section 79).
  • IT Rules, 2021: Detailed guidelines under the Act, focusing on due diligence (Rule 3(1)(b)), grievance redressal, and specific content moderation requirements for social media intermediaries and digital media publishers.

Exam Tip

Think of the IT Act as the 'constitution' for cyber law and the IT Rules as the 'by-laws' that provide operational details. The Act gives the power, the Rules tell *how* to use it and what intermediaries *must* do.

4. The expansion of content blocking powers raises concerns about freedom of expression. How does India balance national security concerns, especially against cyber-enabled terrorism, with the constitutional right to free speech?

India attempts to balance these by allowing content blocking under specific, legally defined grounds outlined in Section 69A of the IT Act, such as national security, public order, and preventing incitement. However, the broad nature of these powers and the lack of robust oversight mechanisms often lead to concerns about potential misuse and chilling effects on free speech. The government argues that the threat of cyber-enabled terrorism and misinformation necessitates strong measures.

Exam Tip

When critically examining, always present both sides: the government's justification (national security, preventing incitement, AI-generated threats) and the concerns (freedom of speech, potential misuse, lack of transparency). This shows a balanced perspective for Mains.

5. Why is a US Counter-Terror Chief discussing India's social media content regulation, and what does this signify about the global approach to online content moderation?

The US Counter-Terror Chief's discussion on India's social media regulation highlights the global nature of online threats, particularly cyber-enabled terrorism and the spread of misinformation. It signifies a growing international consensus that online content moderation is not just a domestic issue but a transnational challenge requiring coordinated efforts. Countries are increasingly looking at each other's legal frameworks, like India's IT Act and Rules, to find effective ways to combat these threats while navigating free speech concerns.

Exam Tip

Connect this to the broader trend of 'digital diplomacy' and 'global governance of the internet'. The US interest shows that India's approach to tech regulation has international implications and is part of a larger global debate.

6. Critically examine the implications of expanding content blocking authority to multiple ministries under Section 69A of the IT Act, 2000, for both national security and digital rights in India.

The expansion of content blocking authority aims to bolster national security by enabling a quicker and more comprehensive response to online threats like cyber-enabled terrorism and AI-generated misinformation. It allows specialized ministries to act directly on issues within their domain, potentially improving efficiency. However, this move raises significant concerns for digital rights. It could lead to a fragmented approach to content moderation, increase the risk of arbitrary blocking orders, and create a chilling effect on free speech due to the sheer number of agencies empowered to issue takedown notices. The lack of a centralized, transparent oversight mechanism for these expanded powers is a key concern.

  • For National Security: Enables faster response to online radicalization and misinformation, allows specialized ministries (Home, Defence, External Affairs) to act on specific threats.
  • For Digital Rights: Risks fragmented and potentially arbitrary blocking, raises concerns about due process and transparency, and could lead to a 'chilling effect' on legitimate online expression.

Exam Tip

For Mains, structure your answer by clearly separating the 'pros' (national security benefits) and 'cons' (digital rights concerns). Conclude with a balanced view, perhaps suggesting the need for stronger judicial oversight or a clear, unified standard for blocking orders.

Practice Questions (MCQs)

1. With reference to content blocking on social media platforms in India, consider the following statements:

1. Section 69A of the Information Technology (IT) Act, 2000, empowers only the Ministry of Information and Broadcasting to issue content blocking orders.
2. The recent amendment allows ministries like Home Affairs and External Affairs to issue such orders, partly due to AI-generated misleading content.
3. The Eurasia Review report identified the Red Fort attack (2025) and Bondi Beach attack (2025) as examples of social media weaponization for radicalization.

Which of the statements given above is/are correct?

  • A. 1 and 2 only
  • B. 2 and 3 only
  • C. 3 only
  • D. 1, 2 and 3

Answer: B

Statement 1 is INCORRECT: Section 69A of the IT Act, 2000, empowers the Central Government to issue directions to block public access to any information. While the Ministry of Information and Broadcasting has been a key authority, the news indicates an expansion to other ministries, not an exclusive power. The original provision allows the 'Central Government' to issue directions, which can be exercised through various ministries.

Statement 2 is CORRECT: The Centre is set to allow ministries like Home Affairs, External Affairs, Defence, and Information and Broadcasting to issue content blocking orders under Section 69A of the IT Act, 2000. This move is explicitly stated to be necessitated by the proliferation of AI-generated misleading content on the internet.

Statement 3 is CORRECT: The Eurasia Review report, titled 'Weaponisation Of Social Media Platforms For Radicalisation: A Threat Looming Large In The Indian Subcontinent,' explicitly states that the Red Fort attack on November 10, 2025, and the Bondi Beach attack on December 14, 2025, demonstrate how social media platforms are being systematically weaponized to radicalize individuals for terrorist attacks.

2. Which of the following statements best describes 'white collar terrorism' as mentioned in the context of online radicalization?

  • A. Terrorist activities funded by large corporations.
  • B. Terrorist attacks carried out by individuals with advanced technical skills.
  • C. Acts of terrorism orchestrated by well-educated individuals radicalized through online platforms.
  • D. Terrorism involving financial crimes and money laundering by extremist groups.

Answer: C

Option C is CORRECT: The Eurasia Review report, in the context of the Red Fort incident, states that investigators described the phenomenon as 'white collar terrorism' because many perpetrators were well-educated individuals who were radicalized online. This term specifically refers to the involvement of educated individuals in terrorist activities, often facilitated by online radicalization.

Option A is INCORRECT: While corporations might indirectly be involved in funding, 'white collar terrorism' in this context refers to the perpetrators' background, not the funding source.

Option B is INCORRECT: While advanced technical skills might be used in cyber-enabled terrorism, the term 'white collar terrorism' as used in the report specifically highlights the educational background of the perpetrators, not just their technical skills.

Option D is INCORRECT: This describes financial terrorism or terror financing, which is a different aspect from 'white collar terrorism' as defined in the report.

3. Consider the following statements regarding the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021:

1. These rules mandate social media intermediaries to observe due diligence in content moderation.
2. They were introduced to provide a framework for the government's power to issue content blocking orders under Section 69A of the IT Act, 2000.
3. The rules specifically address the challenges posed by AI-generated misleading content.

Which of the statements given above is/are correct?

  • A. 1 only
  • B. 1 and 2 only
  • C. 2 and 3 only
  • D. 1, 2 and 3

Answer: B

Statement 1 is CORRECT: The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, indeed elaborate on the due diligence to be observed by intermediaries, including social media platforms, in content moderation.

Statement 2 is CORRECT: These rules were introduced to provide a more detailed framework for the government's power to issue blocking orders, including those under Section 69A of the IT Act, 2000, and to regulate digital media content.

Statement 3 is INCORRECT: While the recent amendment to Section 69A's implementation is necessitated by AI-generated misleading content, the IT Rules, 2021, were primarily focused on intermediary liability, grievance redressal, and digital media ethics at the time of their introduction. They did not specifically or extensively address AI-generated content as a primary challenge, though their general provisions might apply. The specific mention of AI-generated content as a necessitating factor for the *amendment* to Section 69A's scope is a more recent development.


About the Author

Anshul Mann

Public Policy Enthusiast & UPSC Analyst

Anshul Mann writes about Polity & Governance at GKSolver, breaking down complex developments into clear, exam-relevant analysis.
