Editor’s note (April 2026): This article is part of Blog Herald’s editorial archive. Originally published in August 2005, it has been reviewed and updated to ensure accuracy and relevance for today’s readers.
In August 2005, Google added a quiet feature to the Blogger navigation bar: a “Flag as objectionable” button. The idea was straightforward enough. Readers who encountered spam blogs or genuinely harmful content could signal their concern, and Google would use that collective data to decide what to do next. Community moderation. Crowdsourced trust.
Within weeks, there were reports that the system was being used differently — not to flag spam, but to silence opinion. It raised a question that has only grown more urgent in the twenty years since: what happens when you give a crowd the power to remove a voice?
The flaw built into “community standards”
Blogger’s own documentation at the time was admirably clear about intent — and quietly evasive about risk. The flag button, the platform said, “is not censorship and cannot be manipulated by angry mobs.” The reassurance was in the first sentence of the help page. That a platform felt the need to say this so explicitly suggests the designers understood, at some level, what they had built.
The mechanics created a structural problem. Volume of flags determined consequences. Human review appeared to come after the fact, if at all. That meant a coordinated group — people who simply disagreed with a writer’s views — could in practice push a blog toward removal without any individual claim being evaluated on its merits.
This was the tension at the heart of the feature, and it was never really resolved. Distinguishing between “this content is illegal” and “this content offends me” is one of the harder problems in platform governance. Leaving that distinction to aggregate crowd behavior doesn’t solve it — it outsources it to whoever can organize the largest response.
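To make the structural point concrete, here is a deliberately naive sketch, in Python, of what volume-driven moderation looks like when no individual claim is reviewed. The threshold, the function name, and the data are all hypothetical; nothing here reflects Blogger's actual implementation, only the logic the help page was reassuring readers about.

```python
# Hypothetical sketch of a volume-based flag system (not Blogger's actual code).
# Removal is triggered purely by flag count, so a coordinated group of readers
# who merely dislike a post can cross the threshold without any single claim
# being evaluated on its merits.

from collections import Counter

REMOVAL_THRESHOLD = 25  # assumed value; real thresholds were never disclosed

def process_flags(flag_events, threshold=REMOVAL_THRESHOLD):
    """Count flags per blog and return the blogs queued for removal.

    flag_events: iterable of (blog_id, flagger_id) pairs.
    Note what is missing: no check of why each flag was filed,
    no per-claim review, no weighting for flagger history.
    """
    flags_per_blog = Counter(blog_id for blog_id, _flagger in flag_events)
    return [blog for blog, count in flags_per_blog.items() if count >= threshold]

# Thirty readers organized against one post are indistinguishable, to this
# logic, from thirty independent reports of genuine spam.
coordinated = [("opinion-blog", f"member-{i}") for i in range(30)]
print(process_flags(coordinated))  # ['opinion-blog']
```

The point of the sketch is what isn't in it: every safeguard a reader might assume exists, from merit review to flagger accountability, has to be added deliberately, and in 2005 there was little sign that it had been.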
The pattern kept repeating
What happened in that early Blogger incident was a preview of a dynamic that would play out across two decades of social platform history. In 2012, a conservative blogger in New York had his Blogger account deleted hours after a politically contentious post attracted mass flagging; the content was restored, but no explanation was ever provided. In 2016, writer Dennis Cooper lost years of work — his entire Blogger account — after a single anonymous complaint triggered its deletion. It took months of public pressure before Google returned his material.
Peer-reviewed research published in 2024 found that flagging tools across major platforms have been successfully used to silence users who are already vulnerable to platform censorship — activists, journalists, sex workers, LGBTQIA+ creators — often through coordinated reporting campaigns designed not to identify harm but to trigger automated removal systems. Meta’s own Oversight Board found that as few as three reports were sufficient to remove non-violating content from Instagram in some cases. The gap between “flagged” and “harmful” remains as wide as it was in 2005.
The problem isn’t unique to any one platform or political persuasion. What the Blogger episode identified early was a design vulnerability that scales: the easier it is to trigger review, and the more automated the response, the more useful the system becomes as a tool of harassment or suppression — regardless of the platform’s stated values.
Where the moderation conversation stands now
The scale of the problem has changed enormously since 2005. Platforms now process content at volumes no human team could manually review. TikTok reported that in 2024, over 96% of content removed for policy violations was taken down by automated systems before it received a single view. Meta’s automated systems account for roughly 90% of removals for violent and graphic content.
The efficiency case for automation is real. The problem is that automation inherits the same structural flaw as the original flag button, only faster and at greater scale. Systems trained to respond to volume are still vulnerable to coordinated misuse. An account targeted by an organized campaign today faces the same functional outcome that author Ashok Banker's blog faced in 2005: removal first, questions later, with limited recourse and no transparent explanation.
The Electronic Frontier Foundation has documented this dynamic repeatedly: appeals processes that are slow, opaque, or effectively inaccessible; automated enforcement decisions that affect livelihoods without any meaningful human review; and no obligation on platforms to explain why a specific piece of content was removed or an account was suspended.
The European Union’s Digital Services Act, which became fully applicable in 2024, attempts to address some of this by requiring large platforms to provide accessible appeals processes and transparency about moderation decisions for EU-based users. It’s a meaningful step, though its reach is geographically limited and enforcement remains uneven.
What bloggers and creators should take from this
The clearest lesson from two decades of this history is one that independent creators often resist until it’s too late: platform dependency is structural risk.
If your writing, your audience, and your archive all live on someone else’s infrastructure, the rules can change around you without warning. A flag campaign, a policy update, an algorithmic shift — any of these can effectively end your public presence on a given platform. Bloggers who dismissed concerns about the 2005 flag button as theoretical were making the same calculation that writers later made about Twitter, Facebook, and Tumblr in the years before each of those platforms changed what they were willing to host.
This isn’t an argument against using platforms — it’s an argument for understanding the relationship clearly. Publishing on a free hosting service means accepting the platform’s moderation logic as a condition of access. That was true of Blogger in 2005. It’s true of Substack, Medium, and every other intermediary today.
Building some independence into a content operation — an owned domain, a direct email relationship with readers, an archive you control — doesn’t eliminate platform risk. But it changes the terms of the exposure. A writer whose primary relationship with their audience runs through something they own is in a fundamentally different position than one who has only ever published at someone else’s address.
The “Flag as objectionable” button is long gone from Blogger’s interface, and Blogger itself is a much diminished platform. But the question it raised hasn’t gone anywhere: when a crowd can remove a voice, whose standards actually govern what gets heard? Twenty years on, we’re still working out the answer.
