Editor’s note (April 2026): This article is part of Blog Herald’s editorial archive. Originally published in 2007, it has been reviewed and updated to ensure accuracy and relevance for today’s readers.
There was a moment in the mid-2000s when it genuinely felt like the internet might democratise the news. Platforms like Digg, Reddit, Netscape, and Newsvine handed the power of editorial selection to ordinary users. You voted on what mattered. The most-voted stories rose to the top. No editor, no gatekeeper — just collective judgment.
It was an idealistic model. And in practice, it immediately ran into one of the most enduring tensions in online communities: what happens when you give people power over each other’s visibility, but not accountability for how they use it?
That question, which animated early debates about social news platforms, remains unresolved, and it has become far more consequential.
The original transparency problem
The early social news sites each took a different approach to vote transparency. Digg let you see who had voted positively on a story, but buried negative votes behind a veil of anonymity. Netscape showed everything — every upvote and downvote on every submission, completely visible to all users. Reddit hid all individual vote data, showing only aggregate totals.
Each approach had predictable side effects. Digg’s semi-transparency let attentive users catch coordinated voting rings, but the anonymity of negative votes gave rise to what became known as the “Bury Brigade”: organised groups that would systematically bury stories or users they disliked, with no accountability whatsoever. Netscape’s full transparency enabled a degree of community self-regulation, but it also produced social friction: users took it personally when their posts were downvoted, and ill will spread through the community. Reddit’s complete opacity left it vulnerable to manipulation by trolls and bots, with no mechanism for communities even to identify the problem.
What this revealed, even then, was a structural truth: transparency creates accountability, but accountability creates friction. Designers had to choose between communities that could self-police and communities that felt safe.
The same dilemma, at a vastly different scale
The stakes of that design choice have grown enormously. The social news platforms of the 2000s had modest audiences and limited cultural influence. Today’s successors — Reddit, X, YouTube, Facebook, TikTok — reach billions of people, and the opacity of their ranking systems has become a matter of serious public concern.
Reddit, which survived the social news era to become one of the most-visited sites on the internet, still battles the same vote-manipulation dynamics that plagued Digg. Brigading, in which coordinated groups manipulate upvotes or downvotes to influence discussions, continues to distort the natural flow of conversation, undermine trust, and drive members away from communities. Reddit’s tools for addressing it have grown more sophisticated, but the fundamental problem is unchanged: the platform doesn’t personalise feeds per user, which keeps the system relatively transparent but also leaves it more exposed to coordinated vote manipulation.
At the same time, the larger algorithmic platforms have moved far beyond simple voting mechanics. What determines what you see on Facebook, TikTok, or YouTube is not a community vote but a proprietary ranking model trained on engagement signals, shaped by commercial incentives, and almost entirely opaque to the people it affects. When algorithms make decisions of societal consequence, it matters who makes those decisions and how, and ensuring they benefit society requires transparency about how those systems are designed.
That’s not a fringe academic concern. A 2024–2025 field experiment involving 1,256 participants on X during the US presidential campaign found that reranking feeds to reduce content expressing partisan hostility measurably shifted users’ feelings toward the opposing party — providing direct causal evidence that feed algorithms alter political attitudes. The ranking choices platforms make aren’t neutral. They shape how polarised or cooperative the information environment becomes.
When accountability structures break down
The Bury Brigade problem of 2007 looks almost quaint compared to what followed. Vote manipulation has evolved from small community-level disputes into a structured tactic deployed at political scale. The persistence of these dynamics reflects something the original social news designers understood but underestimated: anonymous negative power without accountability doesn’t just enable abuse — it invites it.
This is where the transparency question connects to something deeper than platform design. It’s about whether the systems that shape public attention carry any obligation to those they affect. For years, the dominant answer from platforms was: no. They self-regulated, issued transparency reports that were difficult to scrutinise, and treated their ranking systems as proprietary assets to be protected.
That era is ending, at least in some jurisdictions. The EU’s Digital Services Act, fully effective since February 2024, was designed to end an era in which tech companies essentially regulated themselves, by forcing platforms to be more transparent about how their algorithmic systems work and holding them accountable for the societal risks stemming from their services. Under the DSA, users have the right to understand on what basis platforms rank content in their feeds and to opt out of personalised recommendations, requirements that have already prompted TikTok, Facebook, and Instagram to offer options to disable personalised feeds.
Between 2023 and 2024, legislators in 35 US states introduced bills addressing social media algorithms; more than a dozen have been signed into law, though many face ongoing legal challenges. The direction of travel, globally, is toward greater mandated transparency, even if the implementation remains contested.
What Digg’s revival tells us
There’s a coda to this story worth noting. Digg, one of the original platforms that sparked this debate, relaunched in 2026. Its return comes with explicit commitments to transparency, community trust, and search discoverability, and it is positioned as a direct response to growing frustration with Reddit’s opaque decision-making and moderator burnout.
Whether Digg 2.0 succeeds or not is beside the point. What the revival signals is appetite. Users, creators, and publishers are increasingly dissatisfied with platforms that exercise enormous influence over their visibility while explaining almost nothing about how that influence operates. The nostalgia driving Digg’s relaunch isn’t really nostalgia for the old site — it’s nostalgia for the idea that you could understand the system you were participating in.
A lasting lesson for content creators
If you publish online, the transparency question isn’t abstract. The degree to which any platform makes its ranking logic legible to you directly affects your ability to build sustainable reach. Opaque systems create dependency: you optimise for signals you can observe without understanding the mechanism, and you remain vulnerable to invisible changes.
The designers of the original social news platforms were wrestling with a genuine trade-off — accountability versus friction, visibility versus privacy. They didn’t resolve it cleanly, and neither has anyone since. But the conversation has matured. We now have better evidence about what algorithmic opacity costs in terms of polarisation, manipulation, and user trust. We have regulatory frameworks beginning to demand answers.
What we still lack, largely, is the cultural expectation that platforms owe their users a legible account of how their choices are made. That expectation — which the Bury Brigade debates were groping toward, imperfectly, twenty years ago — remains the unfinished work.
