
When procedural guarantees exist on paper but remain invisible in practice, they offer little meaningful protection. The gap between formal institutional design and lived user experience is nowhere more apparent than in how social media platforms communicate content moderation decisions. The guarantees at stake are not merely aspirational. They derive from an emerging corpus of international human rights standards—the U.N. Guiding Principles on Business and Human Rights—which, though non-binding under international law, have been formally adopted by Meta as the framework governing its own human rights commitments. Under these principles, corporations bear not only a negative duty to avoid harm, but a proactive one to ensure that affected users can meaningfully identify, access, and exercise available remedies.
Consider this scenario. Vipashyana is a (fictional) practicing lawyer in India accustomed to working in layered procedural systems—filing complaints, tracking institutional responses, and invoking appellate remedies where necessary. She encounters content on Instagram that appears to constitute blatant hate speech, content she believes violates both Meta’s own community standards and applicable domestic Indian law. Acting in accordance with both professional instinct and civic responsibility, she reports it through the platform’s designated mechanism.
Sometime later, Meta sends Vipashyana a notification. Embedded among routine alerts, it is visually indistinct, modest in prominence, and framed in language that does not immediately disclose that it contains a substantive determination on her report (which it does). There is no clear cue that, if she disagrees with that initial decision, she may request a further review. Nor is it made obvious that an additional, more independent forum exists beyond the first review. In the absence of clarity, Vipashyana assumes the matter has reached its conclusion.
What is striking here is that even a sophisticated actor habituated to institutional review may fail to perceive the availability of further recourse when that recourse is insufficiently signaled. The formal architecture of potential curative action remains intact; the experiential pathway to that action does not.
The Gap Between Design and Discovery in Meta’s System
What is unfolding in India today in the realm of digital governance bears an unsettling resemblance to that scenario. On paper, Meta, which has set the highest industry standards in self-regulation, has an elaborate, layered, and formally structured system that resembles adjudicatory due process. Its architecture formally operates in three stages: 1) users may report harmful content; 2) they may request a review of the platform’s decision; and 3) they may appeal to the independent Meta Oversight Board, which reviews content moderation decisions and issues internally binding determinations on individual cases. This “due-process” model echoes judicial hierarchies familiar to constitutional democracies. In global policy discourse, the Oversight Board has even been described as a kind of “Supreme Court” for content moderation and has been termed a norm-setter for social-media platforms.
In practice, however, a survey conducted in India on user awareness of Meta’s content moderation framework reveals a striking disconnect between institutional design and users’ lived experience. Respondents who had engaged with Meta’s reporting mechanisms had minimal awareness of subsequent procedural options. A substantial proportion of users who reported content were either unable to clearly identify notifications regarding the outcome of their report or were unaware that they could request a further review of Meta’s decision. More than half of all respondents had never heard of the Oversight Board, and among those who had undergone the reporting process, knowledge of the possibility of escalating a rejected review remained minimal.
Meta signals any update on a report’s status through a small red dot on Instagram’s heart-shaped notification icon—a design choice that renders such updates visually indistinguishable from routine alerts such as “likes” and “follows.” The choice is not neutral. In human-computer interaction scholarship, interface decisions that obscure or de-prioritize consequential information while amplifying engagement-driven signals are increasingly characterized as “dark patterns”—design architectures that manipulate user attention in ways that subvert informed choice. By rendering these adjudicatory updates less visible, the platform dilutes the procedural significance of moderation outcomes, leaving users not “adequately” informed that a decision has even been made on their report.
If one does click on the small red dot, the decision thereafter appears under the generic label “Support Request.” It does not explicitly indicate that it contains the outcome of a report made under Meta’s community standards, nor that further review or appeal options may be exercised. Users are therefore unlikely to recognize the notification as a formal adjudicatory outcome requiring attention. What exists formally as a three-step process becomes experientially flattened into a routine notification stream. In consumer protection doctrine, opacity that impairs informed decision-making may itself constitute unfair design. The interface here operates not merely as a neutral conduit, but as a structuring condition of access to remedy—a point that has received attention in social-media scholarship. The result is an awkward paradox: a sophisticated system of relief exists in form, but remains functionally under-accessible to many of those attempting to use it.
To understand why this design matters, one must begin from first principles. Scholarship has long distinguished between legal legitimacy—the mere conformity of a decision to established rules and processes—and sociological or moral legitimacy, which depends upon whether affected actors can meaningfully perceive, engage with, and accept the processes governing them. Any institution that is (or claims to be) rights-serving should not be content only with the formal existence of procedures; legitimacy turns on whether decision-making processes are transparent, participatory, and experientially accessible. A right that exists only as a matter of internal architecture, but cannot be discovered, understood, or exercised in practice, may satisfy legal form while failing sociological and moral legitimacy. In such circumstances, institutional authority risks appearing insulated rather than accountable.
The Corporate Duty to Make Rights Visible
Social media platforms have become sites where a range of internationally recognized rights may be both exercised and infringed. The circulation of harmful content online may implicate rights to dignity, equality, and non-discrimination, particularly where speech targets individuals or communities on the basis of religion, caste, gender, or other characteristics protected under the applicable legal framework, including certain domestic ones. At the same time, decisions by platforms to remove, restrict, or deprioritize content can engage the countervailing guarantee of freedom of expression, recognized under Article 19 of the International Covenant on Civil and Political Rights. Content moderation, therefore, can unfold within a domain of competing rights claims. And in these complicated circumstances, the legitimacy of platform governance cannot depend solely upon the substantive correctness of moderation outcomes. It must also rest upon the availability of procedures through which affected users can identify, contest, and seek review of those outcomes. It is within this framework that the question of access to remedy—and the conditions under which such a remedy becomes meaningfully available—assumes particular importance.
Guiding Principle 11 of the U.N. Guiding Principles on Business and Human Rights concerns the responsibility of corporations to “respect” human rights. In the early stages of its application, “respect” was understood as a negative obligation: companies should avoid causing harm. Over time, however, the term was reconceptualized to include proactive “due diligence”: the obligation to identify, prevent, mitigate, and account for human rights impacts. In other words, respect and responsibility came to be exercised not only through corporate abstention from actively causing harm, but through structured processes of anticipation, internal monitoring, transparency, and demonstrable corrective action. This transition from a passive, negative obligation to a proactive process is codified in the OHCHR Interpretive Guide (see Guiding Principle 17) and the 2008 Report of the Special Representative (A/HRC/8/5), which clarify that “respecting” human rights requires companies to actively identify and address impacts. I argue that for this due diligence to be effective in digital spaces, it must include “interface-level saliency,” whereby the visibility of grievance mechanisms is treated as a core requirement of the corporate duty to communicate how impacts are being addressed.
Apps and digital interfaces are, in a literal and institutional sense, products of code. Their job is not simply to display options, but to structure them. Every menu hierarchy, notification design, and pathway to redress reflects deliberate programming choices that determine what a user can see, when they can see it, and how easily they can act upon it. Interface saliency thus becomes a jurisprudential concern because, in digital spaces, design decisions perform regulatory work. They allocate attention and determine the practical accessibility of institutional safeguards.
Scholars have described code as imposing “behavioral constraints,” meaning that software architecture shapes the range of possible user actions in much the same way that law structures (or should structure) permissible conduct offline. A speed bump slows vehicles; a login wall blocks access; a buried appeal button discourages contestation. These are functional constraints. In the present context, this matters because procedural options, such as the ability to challenge a moderation decision, depend not only on formal availability, but also on the ease and clarity with which they can be exercised. If architecture narrows the pathway, it narrows the ability to enforce a potentially infringed right. The normative implication, therefore, is that platform infrastructure should make the options available to users genuinely exercisable, in keeping with the commitments of the U.N. Guiding Principles. The most basic of these commitments is access to remedy, which cannot be made available without awareness.
India at the Center
India hosts hundreds of millions of Meta users. It is not a peripheral jurisdiction, but central to the platform’s global ecosystem. Harmful content in India often intersects with deeply sensitive social fault lines: religion, caste, gender, language, and regional identity. The stakes of content moderation are therefore particularly high in this context. Yet awareness of appeal mechanisms within Meta’s framework remains disproportionately low: Central and South Asia combined account for less than two percent of appeals, compared with a striking 49 percent from the United States and Canada. This low awareness, in all likelihood, explains why user appeals from India remain so scarce—a concern that Meta has itself flagged—and may reflect structural barriers to engagement.
The low volume of appeals from India could be interpreted as a reflection of user satisfaction, cultural indifference toward formal adjudication, or a “healthy friction” that deters frivolous claims. However, such explanations mistake the behavioral outcome for the institutional obligation. Within the framework of the U.N. Guiding Principles, which Meta has adopted as the normative basis of its human rights commitments, access to remedy functions as an affirmative expectation of corporate responsibility. The availability of grievance and review mechanisms must therefore be visible ex ante, independent of whether users ultimately choose to exercise them. Even if a user base is largely indifferent, due process requires that the architecture of recourse remains salient and discoverable to ensure that “access” is not a theoretical abstraction. To treat low engagement as a justification for muted visibility creates a circular logic: it allows platforms to cite a lack of user interest as a reason to maintain the very “procedural insulation” that prevents users from discovering their rights in the first place.
* * *
In a jurisdiction as significant as India within Meta’s global architecture, this subtle retreat of visibility is not trivial. It reminds us that in digital governance, rights are not only written into policy, but must be written into design.
– Tanmay Durani, published courtesy of Just Security.
