Digital Governance in Hybrid Regimes: How Global Social Media Platforms Navigate the Space Between Democracy and Authoritarianism
Jason Miklian a*, Sarah Cechvala a, John Katsos b
a Centre for Global Sustainability, University of Oslo, Norway, b American University of Sharjah, UAE
Funding Details: This work was supported by the University of Oslo under the UiO:Dem scheme.
Disclosure Statement: N/A
Abstract
Global social media platforms can both enable and undermine public spheres of deliberation. These dynamics are acutely felt in hybrid regimes: governance settings that maintain formally democratic legal frameworks while systematically undermining them through executive overreach and selective enforcement. This paper investigates how global social media platforms operate within hybrid regimes, developing a “worst of both worlds” framework to argue that these contexts produce a distinctive three-way mismatch between citizen expectations of democratic protection, platform design assumptions rooted in liberal democratic norms, and regime exploitation of both. Drawing on a four-part taxonomy of government platform measures and caselet studies of Turkey (2016-2024), India (2019-2025), and Myanmar (2011-2024), we show how hybrid regimes deploy content control, surveillance, propaganda dissemination, and democratic façade maintenance through formally legal mechanisms that set them apart from fully authoritarian digital governance. We conclude with implications for platform governance, cautioning against the assumption that hybrid regimes are “almost democratic” and highlighting the structural risks for users, firms, and civic trust.
Keywords: Hybrid regimes; social media platforms; digital governance; competitive authoritarianism; democratic backsliding; platform regulation
Introduction
Global social media platforms operate today in political environments that their designers never anticipated. While these platforms emerged from liberal democratic contexts and publicly trade on norms of openness, free expression, and inclusive participation, they increasingly operate in governance settings where such norms exist primarily on paper. Social media platforms can trigger changes that strengthen democratic practice, but their deployment can also undermine trust in institutions, facilitate misinformation, and accelerate democratic erosion (Diamond, 2015; Lorenz-Spreen et al., 2023). In authoritarian contexts, platforms are frequently instrumentalized for surveillance, propaganda, and the suppression of dissent (Polyakova & Meserole, 2019; Roberts & Oosterom, 2024). Between these poles lies a large and growing category of governance that remains under-examined: hybrid regimes.
Hybrid regimes, understood here through the lens of competitive authoritarianism (Levitsky & Way, 2010), are political orders that maintain formally democratic institutions, including elections, constitutions, and legal frameworks, while systematically subverting them through executive overreach, selective enforcement, and the manipulation of nominally independent bodies. They are not simply transitional states on a path toward either democracy or dictatorship; contemporary hybrid regimes demonstrate a notable capacity to sustain this configuration as a stable governance form (Karl, 1995; Schedler, 2015). The Economist Intelligence Unit classified 36 countries as hybrid regimes in 2024, and an additional 59 as authoritarian (EIU, 2025). Freedom House reported that global internet freedom declined for the fourteenth consecutive year (Freedom House, 2024). The intersection of these two trends constitutes the central puzzle of this paper.
What distinguishes hybrid regimes from fully authoritarian systems is the strategic maintenance of democratic legal architecture: elections, constitutional protections, and regulatory frameworks that formally resemble those of liberal democracies. This legal architecture creates a distinctive operating environment for global social media platforms, one that differs qualitatively from authoritarian settings where platforms either comply with overt state directives or face outright exclusion. In hybrid regimes, governments use the language and legal instruments of democracy to compel platform compliance, justifying censorship as content moderation, surveillance as public safety, and data localization as privacy protection. This produces what we term a “worst of both worlds” dynamic for platform users: the formal protections of democracy encourage citizens to use platforms for genuine political engagement, while the authoritarian practices concealed within those formal structures expose users to surveillance, prosecution, and suppression.
We develop this argument through two analytical contributions. First, we propose a four-part taxonomy of government platform measures: content control, surveillance of opposition, propaganda dissemination, and democratic façade maintenance. While individual measures appear across regime types, we argue that hybrid regimes deploy a distinctive combination of all four, mediated through formally democratic legal mechanisms, that produces qualitatively different outcomes for platforms and users. Second, we apply this taxonomy to caselet studies of three hybrid regimes: Turkey (2016-2024), India (2019-2025), and Myanmar (2011-2024). These cases represent different positions within the hybrid category and different trajectories of democratic backsliding, allowing us to examine variation in how the four-part taxonomy manifests across contexts.
We first examine scholarship on social media platform governance across regime types, then present caselet studies of Turkey, India, and Myanmar. We develop our analysis by showing how these cases generate the “worst of both worlds” framework and conclude with limitations and implications. Three questions guide the inquiry: How do global social media platforms navigate governance in hybrid regimes? How do different hybrid configurations produce divergent outcomes for platforms and users? And what does this reveal about the structural risks of digital engagement where democratic institutions exist but do not function as intended?
Literature Review
Social Media Platforms in Liberal Democracies
While the definitional parameters of liberal democracy have long been debated (Rhoden, 2015; Wolff, 2022), for this article we consider liberal democracies to be representative governments characterized by free and fair elections, the protection of individual rights and civil liberties, the rule of law, and institutional checks and balances (Mukand & Rodrik, 2020; van der Brug et al., 2021). These contexts combine electoral democracy with constitutionalism, safeguarding against the arbitrary use of authority.
Most major social media platforms emerged from liberal democratic environments, and their operational features reflect these origins (Golumbia, 2024). Their terms of service, content policies, and public rhetoric invoke principles of free expression and inclusive participation (Rattanasevee et al., 2024). These platforms are, however, fundamentally profit-driven enterprises, not democratic projects. They were designed for social connection and entertainment; any democratic role was attributed to them largely after the 2011 Arab Spring (Morozov, 2013). Over the past decade, the gap between platform rhetoric about openness and actual governance practices has widened considerably. Social media appears democracy-supportive in some arenas, such as political participation, while detrimental in others, such as trust and polarization (Lorenz-Spreen et al., 2023).
In liberal democracies, social media platforms are regulated through rules-based procedures intended to safeguard open and inclusive deliberation. Germany’s Network Enforcement Act (NetzDG) exemplifies this approach: it requires platforms to remove illegal content swiftly, implement compliance and reporting mechanisms, and imposes significant fines for non-compliance (Breindl & Kuellmer, 2013). In response, some platforms have assumed quasi-regulatory responsibilities, including internal compliance systems and rapid takedown procedures (Heldt, 2019). “The regulation of the Internet is the result of complex relationships between both public and private actors who act both in coordination and competition with each other” (Farrand & Carrapico, 2013, p. 358). Public actors set standards while platforms respond with technical solutions (Lessig, 2006).
The liberal democratic model carries its own risks. The Snowden revelations of 2013 demonstrated that even in liberal democracies, intelligence agencies conducted mass surveillance through platforms including Facebook, Google, and Apple via programs such as PRISM, often with varying degrees of corporate cooperation (Lyon, 2014; Hintz & Brown, 2017). This exposed a significant gap between professed democratic norms and actual surveillance practices, a point we return to when considering what distinguishes hybrid regimes. The critical difference is that in liberal democracies, such practices provoked substantial public controversy, legal challenges, and policy reform precisely because they violated the norms these polities formally uphold.
Social Media Platforms in Authoritarian Regimes
Authoritarian regimes are political systems in which power is concentrated in the hands of a single ruler or a narrow elite, with limited or no institutional constraints on their authority. Such regimes lack meaningful political pluralism, restrict or suppress opposition, and curtail civil and political liberties, relying on coercion, patronage, and controlled participation to maintain power (Howard & Roessler, 2006; Linz, 2000).
Social media platforms can pose risks for authoritarian regime survival. However, authoritarian orders have shown considerable capacity to adapt digital platforms for political advantage (Yoo & Moon, 2025), using them to stymie democratic mobilization (Diamond & Plattner, 2012; Ruijgrok, 2017) and spread propaganda to control political narratives (Schlumberger, Edel, Maati & Saglam, 2024). Such behavior constitutes digital authoritarianism: “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations” (Polyakova & Meserole, 2019, p. 2). Regimes can shut down the internet during critical political moments such as elections (Gohdes, 2020) or conduct surveillance to extract critical political discourse from media platforms (Deibert et al., 2008). China’s “Great Firewall” serves as both a cybersecurity mechanism and a tool for restricting citizens’ access to foreign information (Creemers, 2020), with the tools of online deliberation, such as messaging apps, often restricted more heavily than individual pieces of content (Clark et al., 2017).
Schlumberger et al. (2024) conceptualize this as “authoritarian informationalism”: a distinctive mode of domination in which regimes systematically escalate surveillance and information control as central mechanisms of governance, moving beyond crude censorship to sophisticated management of informational environments.
Authoritarian regimes tend to restrict platforms more heavily than content itself. Some require backend access in exchange for market access; others block platforms entirely and back state-funded alternatives (Hall & Ambrosio, 2017; Ortmann & Thompson, 2014). The empowered space, where citizens might engage in activism or decentralized political action, is the most heavily restricted. Authoritarian regimes deploy “networked authoritarianism” (MacKinnon, 2011) to ensure that empowerment is selective and state-mediated. Monitoring and restriction of platforms benefit authoritarian regimes by shifting enforcement into the background, making restrictions less visible. Digital authoritarianism has become increasingly institutionalized (Feldstein, 2022), fostering greater “participatory censorship” (Luo & Li, 2024) and rising peer surveillance and self-censorship (Hintz & Milan, 2018).
When operating in authoritarian conditions, global social media companies must choose between rejecting government demands and facing exclusion, or complying with state directives and becoming instruments of the regime (Maréchal, 2017). Authoritarian regimes have accumulated knowledge of how to structure legislation that “force[s] the private sector to do the state’s bidding by policing privately owned and operated networks according to the state’s demands” (Deibert, 2015, p. 66). Social media platforms often acquiesce to authoritarian conditions to preserve their ability to operate in those markets (Gunitsky, 2015; Yilmaz et al., 2024).
Social Media Platforms in Hybrid Regimes
While extensive literature exists on social media platforms in liberal democracies and authoritarian regimes, hybrid regimes, which constitute a substantial proportion of contemporary states, remain under-examined. Variously labeled “illiberal democracies,” “delegative democracies,” or “competitive authoritarian regimes,” they share a defining feature: formally democratic institutions are maintained while power is concentrated through executive overreach and selective enforcement (Karl, 1995; Levitsky & Way, 2010). The EIU classified 36 countries as hybrid regimes in 2024, with a combined population of over two billion people (EIU, 2025).
We adopt the competitive authoritarianism framework of Levitsky and Way (2010) as our primary conceptual lens. Competitive authoritarian regimes are civilian regimes in which formal democratic institutions are regarded as the principal means of obtaining and exercising political authority, but in which incumbents violate those rules so severely that the regime fails to meet conventional minimum standards for democracy. This definition specifies what we mean by hybridity and distinguishes these regimes from both liberal democracies (where democratic rules are broadly upheld) and closed autocracies (where democratic institutions do not exist or function merely as rubber stamps). The existence of formal legal frameworks, combined with their systematic subversion, is precisely what creates the unique operating environment for social media firms in hybrid settings.
Levitsky and Way further identify two variables that shape the trajectory of competitive authoritarian regimes: linkage (the density of ties to Western democracies across economic, geopolitical, social, communication, and civil society dimensions) and leverage (the vulnerability of incumbent governments to external pressure for democratization). These variables help explain why hybrid regimes deploy different mixes of platform governance measures, a point we develop through our case studies.
Emergent studies offer insights into how social media platforms function in hybrid contexts. Digital networks may create channels for accountability and mobilization, enabling local actors to narrow information gaps and lower coordination costs (Mattoni et al., 2025), but social media activism in hybrid regimes often provokes heightened surveillance and censorship as a countervailing response (Earl et al., 2022). Firms operating in these settings encounter regulatory dilemmas as they balance global policies with local laws (Reuber & Fischer, 2021). Hybrid regimes frequently impose complex compliance rules and self-censorship as conditions for market access (Kneuer et al., 2024), or treat global social media firms as utilities with special ownership and control rules, viewing networks as strategic assets justifying legal controls (Sitaraman, 2022).
This adaptive capacity is also enabled by platform business models that prioritize market retention over governance consistency. Meta’s revenue grew from $134.9 billion in 2023 to $164.5 billion in 2024, with projections exceeding $200 billion in 2025 (Meta, 2025). These revenue incentives create powerful motivations for platforms to comply with regime demands to maintain market access in countries with large user bases, even when such compliance undercuts democratic norms and user protections. What existing scholarship has not yet specified is how the institutional architecture of hybrid regimes produces qualitatively different outcomes for platform governance compared to authoritarian regimes where many of the same tools (surveillance, propaganda, censorship) are also deployed. We address this gap by developing a four-part taxonomy of government platform measures and applying it to three hybrid cases. Specifically, hybrid regimes deploy “authoritarian informationalism” through formally legal mechanisms that disguise surveillance and control as regulatory compliance.
A Taxonomy of Government Platform Measures
We identify four categories of government measures directed at social media platforms. None is unique to any single regime type, but their combination and the legal mechanisms through which they are deployed differ systematically across hybrid, authoritarian, and democratic contexts.
First, content control encompasses measures to limit or shape information distribution on social media platforms, ranging from targeted takedown requests to outright platform blocking. In authoritarian regimes, content control tends to be comprehensive and overt. In hybrid regimes, it is selective and legally mediated, targeting specific opposition voices, critical media, or politically sensitive topics while maintaining the appearance of an open information environment.
Second, surveillance of opposition involves using social media platforms to monitor, identify, and track dissenting voices, from requiring platforms to store data locally to exploiting the visibility social media provides for identifying opposition actors. In authoritarian regimes, surveillance is systematic and broadly applied. In hybrid regimes, it tends to be targeted and plausibly deniable, focused on specific opposition figures, journalists, or activists rather than the population at large.
The “transmit-trap” dynamic (Parks et al., 2017) captures a particularly insidious aspect of this surveillance: users feel empowered to transmit dissenting views, believing themselves protected by formal democratic legal frameworks, only to discover that their dissent has been captured in real time through data-sharing agreements and monitoring mechanisms. Users become trapped by the very act of speaking, experiencing what feels like democratic voice but facing authoritarian consequences.
Third, propaganda dissemination involves using social media platforms to promote regime-aligned narratives through bot networks, troll armies, and state-aligned influencers. Hybrid regimes rely more heavily on this measure precisely because they cannot shut down opposition speech without undermining their democratic credentials.
Fourth, democratic façade maintenance involves tolerating social media platforms to project openness, both internationally and to domestic audiences. Hybrid regimes permit social media use at levels sufficient to create the appearance of a functioning public sphere, providing citizens with what feels like genuine political engagement while constraining its boundaries. This measure has no meaningful equivalent in authoritarian regimes, where governments face less pressure to project democratic legitimacy, or in liberal democracies, where the public sphere is genuinely open.
We argue that the distinctive feature of hybrid regimes is the deployment of all four measures simultaneously through formally democratic legal mechanisms. Citizens experience what appears to be an open digital public sphere, but content control, surveillance, and propaganda operate within and beneath that appearance. This produces the “worst of both worlds” for users: the formal protections of democracy encourage genuine political engagement, while the authoritarian practices concealed within those protections expose users to real consequences for that engagement.
Caselet Studies of Three Hybrid Regimes
To examine how global social media platforms navigate hybrid regimes and how such regimes shape societal dynamics, we analyze brief cases from Turkey, India, and Myanmar. We selected these cases for three reasons. First, each represents a distinct trajectory within the hybrid category: Turkey exemplifies democratic backsliding from a relatively consolidated democracy following the 2016 coup attempt; India represents democratic erosion from within the world’s largest democracy under an increasingly authoritarian executive; and Myanmar experienced a post-authoritarian political opening (2011-2021) that collapsed into civil war following the 2021 military coup. Second, this variation enables us to examine how different hybrid configurations produce different mixes of our four platform governance measures. Third, all three countries are covered by Freedom House’s “Freedom on the Net” index, allowing us to ground our analysis in standardized comparative data. Turkey scored 31/100 (“Not Free”), India 50/100 (“Partly Free”), and Myanmar 9/100 (“Not Free”) in the 2024 assessment (Freedom House, 2024). The EIU Democracy Index classified Turkey as a “Hybrid Regime” (score 4.26), India as a “Flawed Democracy” (score 7.29), and Myanmar as “Authoritarian” (score 0.85) in 2024 (EIU, 2025). V-Dem classified Turkey and India as “Electoral Autocracies” and Myanmar as a “Closed Autocracy” (V-Dem, 2024).
Two observations follow from these classifications. First, they underscore that hybrid regimes are not a homogeneous category; our three cases occupy different positions within the hybrid space. Second, Myanmar’s classification as authoritarian by some indices and hybrid by others reflects the country’s trajectory over our period of analysis (2011-2024), during which it moved from military rule through a hybrid opening to contested authoritarian control. We include Myanmar because the 2011-2021 hybrid period, during which social media platforms became deeply embedded in the country’s political and information ecosystem, generated dynamics that continue to shape the current conflict.
Turkey (2016-2024): Authoritarian Digital Drift
Turkey entered hybrid regime territory following President Recep Tayyip Erdoğan’s consolidation of power after the 2016 coup attempt, yet it retains a highly engaged online citizenry with deep internet penetration (Akyuz & Hess, 2018; V-Dem, 2024). Internet censorship in Turkey formally began in 2007 with Law No. 5651, enacted to regulate content and protect minors (Akdeniz & Güven, 2020). The Information and Communication Technologies Authority (BTK) is responsible for regulating internet communications. Law No. 5651 remains the main legal framework governing Turkey’s internet; its 2020 social media amendments were justified by reference to Germany’s NetzDG (TBMM(a), 2020).
In practice, the regulation requires any platform with over one million daily users to maintain data inside Turkey, which opponents characterize as a mechanism for the BTK to monitor dissidents and opposition figures (TBMM(b), 2020). Human rights organizations have expressed concern that Law No. 5651 undermines freedom of expression and the right to access information, calling the social media amendments “dystopian” (HRW, 2022). Increased oversight of digital spaces was coupled with a near-complete takeover of traditional media by Erdoğan-aligned backers, who now control over 70 percent of conventional media outlets (RSF, 2020; Toros & Toros, 2020).
Turkey’s Freedom on the Net score declined from 53/100 in 2014 to 31/100 in 2024, a 22-point erosion of internet freedom over a decade (Freedom House, 2024). By the end of 2024, Turkey had blocked a cumulative total of 1,264,506 domain names, with 311,000 blocks issued in 2024 alone (EngelliWeb, 2024). Of these, 82% were issued by the BTK rather than through court orders, indicating executive dominance over judicial authorization. In the first half of 2024 alone, 515 journalists faced prosecution, with 36 receiving prison sentences or fines (RSF, 2025; Stockholm Center for Freedom, 2025).
Law No. 7418, passed in 2022, criminalized “false information” under Article 217/A with prison sentences ranging from 1 to 3 years. Within two years of the law’s passage, approximately 1,500 legal proceedings had been initiated under its provisions (Amnesty International, 2022). This represents a significant expansion of the state’s capacity to prosecute digital speech ex post facto.
Social media platforms constitute critical spaces for political discourse in Turkey. The state has become a global leader in requests to remove content from X and Facebook (Akdeniz & Güven, 2020), with over 20,000 ban requests on X alone in 2024 (X, 2025). The state has deployed bot networks to support the President and disparage opposition (Grossman et al., 2020). In 2020, Twitter removed over 7,000 accounts linked to the AKP youth wing, which had generated 37 million tweets across four distinct political identities (Saka, 2018; Stanford Internet Observatory, 2020). These developments reinforce the “transmit-trap” (Parks, Goodwin & Han, 2017): the government compels opposition voices to communicate online, monitors their engagement through platform data access, and prosecutes them using the regulatory provisions of Law No. 5651. In 2024, the government introduced an “Agent of Influence” bill expanding restrictions on foreign-aligned actors and digital platforms, but withdrew it in November 2024 under domestic and international pressure (Amnesty International, 2024; CPJ, 2024).
In August 2024, Turkey blocked access to Instagram after accusing the platform of censoring condolence posts following the assassination of Hamas leader Ismail Haniyeh. The ban was lifted following negotiations with Meta that granted the government access to user data in exchange for Meta’s continued ability to operate and extract advertising revenue (Financial Times, 2024). Such measures reflect “cyber-authoritarianism” (Esen, 2022) and the capacity of hybrid regimes to work across the democracy-authoritarianism spectrum simultaneously.
Turkey’s case illustrates the deployment of all four platform measures. Content control operates through targeted takedown requests and platform blocking. Surveillance is enabled by mandatory local data storage and direct data-sharing agreements with platforms. Propaganda is disseminated through state-aligned bot networks. The democratic façade is maintained through formal reference to European regulatory models and the rhetoric of protecting democracy from social media threats. The constriction of information spaces also affects offline participation: Turkish social media users who experience conflicting political views online are more inclined to participate in offline politics (Toros & Toros, 2022), while citizens who favor stronger state digital intervention carry stronger affinity toward the government (Çarkoğlu & Andı, 2020).
India (2019-2025): Weaponizing Rights through Social Media
India is recognized as the world’s largest democracy, yet is experiencing a marked shift toward authoritarianism, particularly in the erosion of freedoms related to expression, speech, and political dissent (Basu & Sen, 2023). Article 19 of the Indian Constitution guarantees the right to free speech and expression, which courts have interpreted to include press freedom, access to information, and the right to communicate opinions in diverse forms (Upadhyay, 2024). The state has weaponized this provision. Amendments to Article 19 expanded the scope of permissible restrictions to cover “libel, slander, defamation, contempt of court, or any matter which offends against decency or morality or which undermines the security of, or tends to overthrow, the State” (Wasiq, 2022, p. 7). The government has used these changes to suppress digital dissent, as when it blocked access to a 2023 BBC documentary examining the 2002 Gujarat riots and Prime Minister Narendra Modi’s role in them, framing the film as anti-government propaganda (Peterson, 2023). The state has also invoked Article 19 to pressure platforms into removing or blocking content critical of the government, narrowing the space for open political debate (Basu & Sen, 2023).
The state has imposed internet shutdowns justified under Article 19 on grounds of “public order” or “sovereignty and integrity” (Kathuria et al., 2018). In Jammu and Kashmir, the government imposed the world’s longest internet shutdown (213 days) following the abrogation of Article 370, which revoked the region’s special autonomous status (Maqbool, 2020). Authorities have ordered mobile shutdowns preemptively, as when Kashmiri separatists planned to stream a video address to the United Nations in 2019, or in Nagaland in 2022 to prevent the sharing of lynching images. Two hundred million people have had mobile data connections cut off for days or weeks, in both punitive and preemptive fashion. India has recorded approximately 920 internet shutdowns since 2016, the highest tally of any country (Access Now, 2026; SFLC.in, 2025). In 2025 alone, India recorded 65 shutdowns across 12 states, continuing the trend of widespread digital disconnection.
Section 66A of the IT Act (2000) criminalized “offensive” online speech until the Supreme Court struck it down in 2015; the 2021 Intermediary Guidelines subsequently expanded the government’s ability to ban content deemed against “public order” or “decency” (Basu & Sen, 2023). These rules require platforms to remove government-flagged content within tight deadlines, appoint local compliance officers with legal liability, and enable message traceability that undermines encryption (Wilson, 2019).
During the 2020-2021 farmers’ protests, the government required X to block critical accounts, including those of activists and journalists (Happy & Mogha, 2024). After explicitly threatening to prosecute local X employees under criminal provisions, the state secured the blocking of approximately 250 accounts supporting the protests or critical of the government. The government has deployed a Central Monitoring System for real-time digital interception (Singh et al., 2024), alongside mandatory SIM card registration, digital identity systems, and facial recognition technologies (Sachan, 2018).
The Telecom Act 2023 introduced Sections 19(f) and 20, creating backdoor access to telecommunications infrastructure through which the state can intercept and monitor user communications at scale (Oxford OHRH, 2024). The Supreme Court’s Anuradha Bhasin v. Union of India decision established a proportionality standard for internet shutdowns (Columbia Global Freedom of Expression, 2020), yet the government has repeatedly ignored it. These shutdowns disrupt Aadhaar-linked food security, DBT (Direct Benefit Transfer) payments, and government service delivery, creating downstream humanitarian impacts on vulnerable populations (HRW, 2023).
India’s creator economy has reached $12.28 billion in annual value, with Meta investing $100 million in a market that hosts 362 million Instagram users (BCG, 2025). These platform investments create economic incentives for compliance with government demands to maintain market access and growth in a strategically important country.
Even as dissenting voices are silenced, hate speech against religious minorities, Dalits, and other vulnerable communities has increased. Platforms enforce moderation unevenly: content aligned with majoritarian narratives frequently remains online, while dissenting voices and minority activists face takedowns and legal action (Basu & Sen, 2023; Dey, 2024). This selective enforcement normalizes intolerance and increases the vulnerability of minority populations to online harassment and offline violence.
India’s case is notable for the sophistication of its content control: the state uses formally democratic legal mechanisms (constitutional amendments, intermediary guidelines, court orders) to achieve authoritarian outcomes. Surveillance operates through both platform compliance and state-built monitoring systems. Propaganda is disseminated less through state-run bot networks than through amplification of majoritarian content via selective enforcement. The democratic façade is maintained through constitutional protections that are systematically hollowed out. India’s high degree of linkage to Western democracies (Levitsky & Way, 2010), combined with its large economy that limits Western leverage, explains why the state can pursue aggressive digital governance while maintaining its democratic international standing.
Myanmar (2011-2024): Social Media as Tool for Oppression and Resistance
Following Myanmar’s political opening in 2011, the country transitioned from authoritarian control toward what appeared to be a functioning hybrid regime. The continuing dominance of wealthy military leaders with extensive patronage networks positioned Myanmar in a contested space between democratic aspiration and military control. This period witnessed a dramatic escalation in internet use from 1.4 percent in 2012 to 58.5 percent in 2023 (World Bank, 2025), driven by the political opening and mobile infrastructure expansion. Facebook benefited significantly through its positioning as a zero-rated platform, becoming synonymous with the internet for many users (Nothias, 2020). By 2018, 40 percent of Myanmar’s internet users listed Facebook as their primary news source (Whitten-Woodring et al., 2020).
In mid-2014, Facebook had only one Burmese-language moderator for a user base of 1.2 million people. By 2015, the company had hired four Burmese moderators. During the 2017 genocide crisis, this number remained inadequate at five moderators (BSR, 2018). It was not until late 2018, months after the violence had escalated, that Meta substantially increased Burmese moderation capacity to 99 staff members (BSR, 2018). This chronic underinvestment in safety capacity during Myanmar’s critical political period enabled large-scale hate speech dissemination.
A critical technical challenge further hampered platform safety: 75% of Burmese-language internet users employed Zawgyi encoding rather than Unicode (Meta Engineering, 2019). Facebook’s content moderation tools were designed and trained exclusively on Unicode, creating serious mistranslations when applied to Zawgyi-encoded text. This technical infrastructure gap meant that hate speech targeting the Rohingya was systematically undetectable through automated moderation systems (Kissane, 2023).
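The mismatch is concrete enough to demonstrate. The following minimal Python sketch is a hypothetical illustration, not Meta’s actual moderation system: the blocklist term and filter function are invented for exposition. What it relies on is the documented root of the problem: Zawgyi and Unicode use the same Myanmar code points, but Zawgyi stores the vowel sign ေ (U+1031) before its consonant while Unicode stores it after, so a filter matching Unicode-ordered strings misses visually identical Zawgyi text.

```python
# Hypothetical keyword filter illustrating the Zawgyi/Unicode gap.
# Both encodings reuse the same Myanmar code points, but Zawgyi stores
# the vowel sign U+1031 *before* its consonant and Unicode *after* it,
# so the same visible syllable is stored in two different orders.

BLOCKLIST = ["\u1019\u1031"]  # the syllable "မေ" in Unicode order

def naive_filter(text: str) -> bool:
    """Flag text containing any blocklisted (Unicode-ordered) term."""
    return any(term in text for term in BLOCKLIST)

unicode_text = "\u1019\u1031"  # consonant, then vowel: Unicode order
zawgyi_text = "\u1031\u1019"   # vowel, then consonant: Zawgyi order

print(naive_filter(unicode_text))  # True: flagged
print(naive_filter(zawgyi_text))   # False: identical-looking text slips through
```

A detection-and-conversion layer bridging the two encodings is precisely what was absent from automated moderation during the period at issue (Meta Engineering, 2019; Kissane, 2023).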
Facebook’s rapid uptake was accompanied by government efforts to exploit Meta’s platforms for political and ideological gain. Groups such as Ma Ba Tha and the 969 Movement organized Facebook trainings to promote anti-Islamic hate speech targeting the Rohingya, portraying them as terrorists and existential threats to Buddhism (Bakali, 2021). These discourses built popular support for the military’s operations in Rakhine State, which escalated in August 2017 and forced more than 700,000 Rohingya to flee to Bangladesh. UN investigators concluded that Ma Ba Tha-affiliated hate speech on Facebook was a significant contributing factor in enabling the violence (Miles, 2018).
Myanmar’s hybrid configuration left the country without institutional capacity to counter disinformation or safeguard marginalized populations. Meta has been criticized for prioritizing “the expansion of its network without appearing to consider the consequences of doing so” (Whitten-Woodring et al., 2020, p. 421). Following the 2017 violence, Meta acknowledged that it had been “too slow” to respond and began hiring Burmese-speaking moderators (McPherson, 2018; Warofka, 2018). Accountability remains contested, with lawsuits in the United States, United Kingdom, and Ireland seeking more than $150 billion in reparations (Schissler, 2024).
In August 2018, Meta banned 18 accounts and 52 pages, including those of Min Aung Hlaing, the armed forces commander who later orchestrated the 2021 coup; it extended these bans to four armed groups in 2019. The military has since reconstituted its presence through closed groups, proxy pages, and covert networks (Khine, 2023). The 2021 military coup shifted the country into a contested authoritarian order. Myanmar remains embroiled in civil war between the military junta, civilian resistance forces, and a constellation of ethnic armed organizations. During the coup, social media platforms became indispensable for the pro-democracy movement, enabling coordination of demonstrations, civil disobedience, and real-time documentation of military crackdowns (Jordt et al., 2021; Tønnesson et al., 2022). Simultaneously, military-linked actors continued spreading propaganda through proxy pages despite the 2018 ban on Tatmadaw accounts (Khine, 2023). The junta deployed internet shutdowns to disrupt protest coordination while expanding surveillance to track activists.
Myanmar illustrates how propaganda dissemination and democratic façade maintenance operate differently in contexts with low linkage and leverage. The military’s limited integration with Western institutions meant international pressure exerted minimal constraining effect, allowing overt propaganda with fewer reputational costs. Content control took the form of blunt internet shutdowns rather than the legalistic targeting seen in Turkey and India. Surveillance relied less on compelling platform cooperation and more on exploiting the public visibility of social media activity. The democratic façade collapsed with the 2021 coup, but the preceding hybrid period had embedded social media so deeply into Myanmar’s political ecosystem that both the junta and the resistance now depend on it.
Discussion
Turkey, India, and Myanmar reveal how hybrid regimes interact with global social media platforms in ways that differ qualitatively from both liberal democratic and authoritarian contexts. In liberal democracies, platform governance centers on content regulation (primarily hate speech and disinformation) through transparent, rules-based mechanisms. In authoritarian regimes, the state turns platforms into instruments of networked authoritarianism. Hybrid regimes combine elements of both: adopting the rhetoric and legal frameworks of liberal democracies while employing authoritarian tactics to prosecute dissent, criminalize “false information,” and turn platforms against opponents.
Table 1 summarizes the relationship between regime types and our four-part taxonomy of government platform measures.
Table 1: Government Platform Measures Across Regime Types
Table 2 maps these measures against our three cases with specific examples drawn from the caselet analysis.
Table 2: Government Platform Measures in Turkey, India, and Myanmar
Political Economy of Platform Compliance
The financial incentives driving platform compliance with hybrid regime demands deserve explicit scrutiny. Meta’s revenue trajectory reveals the scale of market pressures shaping platform governance decisions: from $134.9 billion in 2023 to $164.5 billion in 2024, with projections exceeding $200 billion in 2025 (Meta, 2025). Within this context, major hybrid regimes represent critical growth markets.
India alone hosts 362 million Instagram users (BCG, 2025), making it one of Meta’s largest markets. Turkey has among the highest engagement rates globally; Myanmar’s growth from 1.4% to 58.5% internet penetration represented significant expansion potential. Compliance with regime demands becomes a rational business decision when market access in high-user-population countries generates enormous revenue flows.
We term this dynamic “digital obedience taxation”: the implicit tax imposed by hybrid regimes on platforms in exchange for market access. Platforms pay this tax through data-sharing agreements, content removal, account suppression, and moderation policies that serve regime interests. The cost is borne by users through compromised privacy, surveillance, and restricted speech. We argue that platform business models create structural dependence on market access in hybrid regimes, undermining the normative commitments to free expression that platforms publicly espouse.
Platforms, Informational Resilience, and Learned Helplessness
Beyond censorship and surveillance, hybrid regime control of social media generates an “informational resilience” problem distinct from crude internet freedom metrics. Users face epistemic confusion as platforms become vectors for state-managed disinformation while restricting counter-narratives, producing a distinctive “learned helplessness” in digital citizenship.
When platforms algorithmically amplify regime-aligned content while suppressing dissent, users experience a degraded information environment that appears natural rather than state-engineered. Users gradually internalize the boundaries of permissible speech and self-censor proactively. The platform-state collaboration becomes invisible because it operates through ordinary content moderation and algorithmic ranking. Over time, citizens develop “state-manufactured consent,” achieved through systematic degradation of their epistemic environment.
This dynamic has downstream consequences for political trust and civic engagement. When citizens cannot distinguish between platform algorithms, user preferences, and state censorship shaping their information environment, they lose faith in unmediated political communication. Platforms themselves lose legitimacy as trusted intermediaries. Users either retreat into private messaging platforms or abandon digital activism entirely, accepting the regime’s control of public information space.
Authoritarian Informationalism as a Distinctive Regime Form
Building on Schlumberger et al. (2024), we argue that hybrid regimes increasingly deploy “authoritarian informationalism” as a distinctive governance form. Rather than crude censorship, authoritarian informationalism involves sophisticated management of information flows through formally democratic institutions and legal frameworks.
Authoritarian informationalism operates through several mechanisms evident in our cases. First, it escalates surveillance as a central mode of domination, treating information control as essential state infrastructure. Second, it maintains the formal legal architecture of democracy while inverting its substance: constitutions protect “public order” at the expense of dissent, laws protect “privacy” while enabling state access, and regulations “protect minors” while surveilling political actors. Third, it is enabled by platform cooperation, which becomes profitable when market access is conditional on compliance.
Authoritarian informationalism explains why hybrid regimes are particularly effective at co-opting platform governance. Fully authoritarian regimes must either compel compliance through explicit threats or accept platform exclusion. Hybrid regimes deploy the language of liberal democracy to achieve authoritarian outcomes, making platform compliance appear technically necessary rather than politically coercive. Platforms adopt these frameworks without perceiving themselves as instruments of repression because the legal language mirrors democratic regulation.
This study offers a conceptual framework for understanding global social media platform governance in hybrid regimes. Its analysis is constrained by several limitations that merit explicit acknowledgment. First, hybrid regimes are not a uniform category. They encompass varied combinations of institutional weakness, elite capture, electoral manipulation, and normative ambiguity, and grouping them may obscure contextual differences that shape how platforms function in each setting. While our three cases reflect distinct patterns of platform-state relations, findings may not generalize to the broader universe of hybrid regimes. Countries such as Hungary, Tunisia, Nigeria, the Philippines, or the United States also exhibit trajectories of democratic backsliding and platform governance that could complicate or contradict the model presented. Our case selection, while justified by the variation it captures within the hybrid category, necessarily excludes regimes where different configurations of state capacity, digital penetration, and civil society resilience may produce dynamics our framework does not anticipate.
Second, our analytical focus on social media platforms may obscure other relevant digital infrastructures, including mobile telecom providers, encrypted messaging apps, and state-developed digital identity systems, that also mediate state-citizen relations and shape the information environment in hybrid contexts. India’s Aadhaar system and Turkey’s centralized telecom monitoring illustrate how state control extends well beyond social media platforms into adjacent digital infrastructures. Our analysis also draws on scholarship and cases from a period of significant geopolitical flux, platform rebranding, and policy volatility. Some of the dynamics we document, including content moderation trends, data-localization enforcement, and user behavior, may shift rapidly. We therefore stress that this study is exploratory and theory-building in nature, and that the framework we propose should be tested against a wider range of cases and updated as the digital governance landscape evolves.
The “worst of both worlds” framework specifies why hybrid contexts produce different outcomes than authoritarian ones: a three-way mismatch between citizen expectations (grounded in formal democratic protections), platform design assumptions (rooted in liberal democratic norms), and regime exploitation (enabled by the legal architecture of democracy). This mismatch does not arise in fully authoritarian contexts, where all parties have adjusted to the absence of democratic protections, nor does it fully characterize liberal democracies, where democratic protections, though imperfect (as the Snowden revelations demonstrated), are subject to genuine institutional accountability.
Digital space in hybrid regimes is a limited asset deeply cherished by citizens, as it is often the only means for free expression, however fleeting. When firms share data or restrict digital space in exchange for market access, they corrupt the product for users who take on the risks of political engagement but receive none of the participatory rewards. Users then abandon platforms in two directions: toward overtly pro-government sources (they give up resistance), or toward products that can better deliver security and democratic space (they resist with better tools). Both outcomes represent failures of the platform governance model that assumes democratic operating conditions.
Future Research
Future research should build on this study in several directions, each of which addresses gaps that our framework identifies but cannot resolve with the present data.
Material Diffusion of Digital Authoritarian Practices: How do surveillance technologies and information control strategies transfer between regimes? Our cases suggest that hybrid regimes do not simply import authoritarian tools wholesale. Turkey’s justification of Law No. 5651 by reference to Germany’s NetzDG, and India’s framing of internet shutdowns through constitutional “public order” provisions, indicate that hybrid regimes develop distinctive adaptations: authoritarian techniques laundered through democratic legal language. Comparative studies tracking the spread of facial recognition, deepfake detection systems, and bot network architectures across regime types could illuminate this diffusion process. Hall and Ambrosio (2017) have documented authoritarian learning across contexts, but the specific mechanisms by which hybrid regimes adapt these lessons to operate within formally democratic legal structures remain underspecified. Research should examine whether the “authoritarian informationalism” we identify (building on Schlumberger et al., 2024) constitutes a coherent governance model that travels across contexts or whether it emerges independently from the structural incentives that hybrid regimes face.
AI-Driven Content Moderation in Authoritarian Contexts: As platforms deploy machine learning for content moderation, how do these systems perform in underrepresented languages and cultural contexts? Myanmar’s Zawgyi encoding problem, where 75% of users employed an encoding system that rendered Facebook’s automated moderation tools ineffective (Meta Engineering, 2019; Kissane, 2023), provides a striking illustration of how technical infrastructure gaps can have catastrophic human consequences. Research should examine whether AI moderation systems amplify biases against marginalized populations and whether regimes can exploit algorithmic vulnerabilities to suppress dissent more effectively. As content moderation scales through automation, the risks identified in our “worst of both worlds” framework intensify: platforms may deploy AI-driven systems designed for liberal democratic content norms in contexts where the same tools become instruments of authoritarian control. The intersection of AI moderation with the selective enforcement patterns documented in India (Dey, 2024) deserves particular scrutiny.
Grassroots Perspectives and User Agency: Most research on platform governance, including this study, examines the phenomenon from the state or firm perspective. Future work should center user experiences: how citizens in hybrid regimes understand platform governance, resist surveillance, and adapt their digital practices over time. Our concept of “informational learned helplessness” remains theoretically specified but empirically untested. Ethnographic and interview-based research with activists, journalists, and ordinary users could illuminate strategies of digital resilience and the lived experience of the “transmit-trap” (Parks, Goodwin & Han, 2017). The two-directional user exodus we identify in our conclusions, toward either pro-government sources or more secure alternatives, needs empirical validation through research that captures the micro-level decision-making of users navigating compromised information environments.
Alternative Governance Models: Can decentralized platforms, mesh networks, and user-governed social media communities offer genuine alternatives to corporate platforms in hybrid regime contexts? Our analysis demonstrates that the “digital obedience taxation” dynamic arises from platforms’ structural dependence on market access. This suggests that governance models decoupling revenue from national market access could fundamentally alter the incentive structures we document. What are the tradeoffs between decentralization and usability, privacy and network effects? The migration patterns we observe, where users retreat to encrypted messaging platforms or abandon digital political engagement entirely, indicate latent demand for alternatives. Research should examine whether decentralized architectures can achieve the network effects necessary for political mobilization while resisting the co-optation mechanisms that hybrid regimes deploy against centralized platforms.