I think the architecture of “censorship resistance” must appeal to developers, which is why so many of these platforms exist. Unfortunately, they’re solving the wrong problem. Censorship isn’t what makes social media and blogging so toxic to our culture. It’s the lack of reputational cost to bad actors and the undeterred gathering of information.
But sure. Let’s have another platform that is most likely to become popular purely for illegal uses.
"Unfortunately they're solving the wrong problem."
Here's the description of Rumble:
"Rumble enables the spread of messages in an epidemic fashion using automatically formed, opportunistic local ad-hoc networks. Every message sent or received is stored in the local database and pushed to every other device it meets. By doing so, messages naturally propagate throughout the network using social links as the underlying infrastructure. Because it doesn't rely on any fixed infrastructure like the Internet, it is naturally resistant to censorship."
"About
Rumble allows the sharing of messages and pictures without relying on the Internet, in a Delay Tolerant Fashion following the Store-Carry and Forward paradigm"
Question: What problem is Rumble solving?
Option #1
Answer: Requirement for fixed infrastructure in order to exchange messages. Censorship resistance is just an incidental benefit.
Option #2
Answer: Censorship. Formation of ad hoc network infrastructure is just implementation detail.
The problem with #2, the parent's interpretation, is that the formation of ad hoc infrastructure, i.e., non-reliance on fixed infrastructure, has multiple benefits besides censorship resistance. It's quite possible someone could use Rumble to solve a problem other than censorship.
If the network only forwarded/spread information linked with [friends], [near-friends], and [distant-friends] as selected by the user, this would create a social network that more closely mirrors the user's actual social network.
If the chain of in-person interactions for a [distant-friend] becomes broken (for instance, because a friend connection link drops out), that information is trimmed from the user's network.
Perhaps this incentivizes direct social interaction rather than the false social interactions of legacy networks.
> Censorship isn’t what makes social media and blogging so toxic to our culture.
Correct so far. Censorship doesn't encourage toxicity except by discouraging good actors disproportionately. I'll expand on this.
> It’s the lack of reputational cost to bad actors and the undeterred gathering of information.
Adding reputational cost has been tried, see "full name policies", Facebook comments on online newspapers etc.
It discourages the careful and does nothing to stop:
- those who think they are right and feel backed
- those who are too engaged and don't care about their reputation
- those who don't understand that it is dumb to write dumb things under their full name
- trolls with fake but real-looking accounts
The result is an explosive mix: full name policies affect good, reasonable, level-headed actors far more than they affect trolls and zealots.
Don't you wonder why HN is such a nice place despite no full name policy? It is in part because everyone is exposed again and again to different perspectives from people wildly different from themselves, who dare to speak up here.
This, together with good moderation, results in compound interest (and then there is attrition, so nothing grows into the sky).
Your take is right for real-name accounts, but I think the reputation system does not have to be linked to a real name; it could work on a pseudonym-based system, maybe even an anonymous one.
Imagine a reddit clone, but where your karma is not useless but actual reputation points: the better your reputation, the higher your post/comment is ranked.
Your reputation could be subreddit-scoped, and each subreddit could decide how to value a user's existing reputation on another subreddit: e.g., if you have a high reputation on /askhistorians, you automatically get a good one on /history_lovers; conversely, if you get a good reputation on /woman_beater, you get a terrible reputation everywhere.
A specific system for new users similar to the one on StackOverflow could allow for a deeper check of new users/alt-accounts.
And in the end, people could vote on posts to provide reputation without actually needing to see the name of the author, providing better anonymity: you're not anonymous to the website, but you're not doxxable by random users either.
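The subreddit-scoped scheme above could be sketched roughly as follows. The `ReputationSystem` class and its multiplier-based import rule are invented for illustration; no existing platform works this way:

```python
# Rough sketch of subreddit-scoped reputation with per-community
# import rules, as described in the comment above. All names invented.

class ReputationSystem:
    def __init__(self):
        self.rep = {}      # (user, community) -> reputation points
        # community -> {source_community: multiplier}; a negative
        # multiplier would let a bad reputation elsewhere hurt you here
        self.imports = {}

    def add(self, user, community, points):
        key = (user, community)
        self.rep[key] = self.rep.get(key, 0) + points

    def score(self, user, community):
        # Local reputation plus whatever this community chooses to
        # import from other communities, scaled by its own multipliers.
        total = self.rep.get((user, community), 0)
        for src, mult in self.imports.get(community, {}).items():
            total += mult * self.rep.get((user, src), 0)
        return total

rs = ReputationSystem()
rs.imports["history_lovers"] = {"askhistorians": 0.5}
rs.add("alice", "askhistorians", 100)
print(rs.score("alice", "history_lovers"))  # 50.0 imported reputation
```

Ranking would then sort comments by `score(author, community)`, so the same user can stand tall in one community and carry no weight in another.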
Such a system could be easily gamed, especially since you'd also get into a positive feedback spiral, where your good reputation puts you at the top and you receive more upvotes just because you're at the top. All you need are some bots to upvote your posts.
Yup. I agree with the GP’s general premise, but I wouldn’t make high-karma posters have their comments appear higher. Instead, I’d weight their up/downvotes on other people’s comments proportional to their karma, but only in subreddits where they accrued karma. Thus, someone who accrued 10000 karma in history subreddits might get e.g. 10x the voting power in history subreddits. However, if they only had 10 karma in science subreddits, their votes would not be worth a lot there.
Of course, this is not perfect (what if someone posts about history in a science subreddit, or vice versa), but it’s a start. AFAIK no platform has tried weighted voting yet.
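A rough sketch of this weighted-voting idea follows. The proportional scaling with a floor of 1 is my assumption, chosen to match the "10000 karma gives ~10x" example above; it is not a spec:

```python
# Sketch of karma-weighted voting: a vote's weight scales with the
# voter's karma in *that subreddit only*. The exact scaling rule
# (linear, floored at 1x) is an illustrative assumption.

def vote_weight(karma_in_sub):
    # 10000 karma in a subreddit -> 10x voting power there,
    # while 10 karma stays at the 1x floor.
    return max(1.0, karma_in_sub / 1000)

def comment_score(votes):
    # votes: list of (direction, voter_karma_in_this_subreddit),
    # where direction is +1 for an upvote, -1 for a downvote.
    return sum(d * vote_weight(k) for d, k in votes)

print(vote_weight(10000))  # 10.0
print(vote_weight(10))     # 1.0
# One expert upvote outweighs two low-karma downvotes:
print(comment_score([(+1, 10000), (-1, 10), (-1, 10)]))  # 8.0
```

A sublinear curve (e.g. logarithmic) might be safer in practice, since a strictly linear weight hands enormous power to the very top accounts.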
> Imagine a reddit clone, but where your karma is not useless but actually reputation points : the better your reputation, the higher your post/comment is ranked.
So, how would a new user gain reputation, if their every comment is buried under all the karma farmers in every post?
New users could gain artificial karma that slowly decays over time, pushing them into the top 10% of comments for a while so they can collect karma. Local votes in a comment thread should also weigh more than a user's accumulated karma; only negative karma would carry significant weight.
Additionally, servers could give users some bonus starting karma for free if they have, for example, a verified phone number and email tied to the account. This can be done in a privacy-sensitive way by simply having the home server report a boolean for whether that is the case. Just like on Mastodon, this would be an honor-based system: if you lie, you might get kicked out of the network.
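The decaying starter karma could look something like this. The boost size, half-life, and verification bonus are made-up parameters, and exponential decay is one choice among many:

```python
# Sketch of the decaying starter-karma idea above. All constants
# (boost size, half-life, verification bonus) are illustrative.

STARTER_BOOST = 50       # artificial karma granted at signup
HALF_LIFE_DAYS = 14      # boost halves every two weeks

def effective_karma(earned, days_since_signup, verified=False):
    # Exponential decay: fresh accounts rank decently for a while,
    # then must stand on karma they actually earned.
    boost = STARTER_BOOST * 0.5 ** (days_since_signup / HALF_LIFE_DAYS)
    if verified:
        # Home server attests phone/email with a single boolean,
        # so no personal data leaves the server.
        boost += 10
    return earned + boost

print(effective_karma(0, 0))    # 50.0 - fresh account gets visibility
print(effective_karma(0, 28))   # 12.5 - boost has mostly decayed
```

An alt-account farm still gets the boost per account, which is why the comment pairs this with verification and honor-based federation rather than relying on decay alone.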
And it works, sometimes: there's a non-trivial number of people who have posted terrible things under their real names and suffered real-world consequences for it. It doesn't work often enough, however, to be a real deterrent.
> Don't you wonder why HN is such a nice place despite no full name policy?
I dunno, it has its share of toxicity (no more than any other forum, really, but it's not a bastion of goodness either).
> Censorship doesn't encourage toxicity except by discouraging good actors disproportionately
Somewhat ironically, this is the same problem with tangentially related anti-piracy efforts: they punish the honest users but don't do anything to deter the dishonest ones.
Reputational cost to whom? The platforms? Due to network effects they are quasi-monopolies. The FAANGs have moved in recent years to cement their monopolies via government influence. What did they have to offer in return? Political donations and censorship-on-demand.
Or do you mean the bad actors are the ones initiating censoring on behalf of the platforms (via scurrilous flagging of wrongthink, or by mobbing down heterodox opinion)? The platforms are taking their cues from politics (government and tech culture) and don't need the reporting system to censor people.
I assume you don't mean there's a lack of reputational cost to bad actors and the bad actors are those who are being censored, as that wouldn't make sense.
That's quite contemptuous of half the U.S. voting population. The tens of millions of people who voted for Trump in 2016 might beg to differ about who/what put Trump in office. Are you equally sure no misinformation was spread to elect Biden in 2020?
Arguably all American (assuming you mean to refer to the US here) history is recent. In that sense, I would say their history of slavery (and more recently blatant and publicly endorsed racism) is probably a little darker than their trend of putting increasingly confused old men in power.
Without minimizing the cultural importance of Gamergate, you are vastly overstating its influence on the 2016 election. If you want to understand Trump in 2016, you'd be better off looking at the Tea Party and the rise of populism contra the ossified Republican establishment.
In America, the first amendment protects you from government censorship. It doesn’t protect you from censorship that occurs naturally in society or the marketplace.
Freedom of speech means you have freedom from the government. It does not mean you have freedom from societal consequences or that you’re entitled to a platform or audience.
There are many forms of speech which society has decided to censor. We have decided to outlaw libel, slander, perjury, dissemination of child porn or classified info, etc., and we are familiar with many which cause immediate social or business repercussions (generally being an asshole, berating waitstaff, yelling at the top of your lungs in a quiet place, any kind of unwanted speech on private property, etc.).
What is your opinion of public/private partnerships? What would you say if the government pressures private organizations to censor individuals? Is that still 'private' censorship? What if partisan government officials take board seats or top management positions at these platforms, and vice versa? Your depiction of First Amendment protections seems incomplete if you don't account for the reality of government by proxy. It's no surprise that the state would externalize its censorship to circumvent civil liberties protections.
There’s an infinite number of non-ideal permutations you could bring up. Those can be hammered out by the court system and legal precedent, or any issues can be worked by legislature or put back to the voters.
Except the courts have failed to hammer them out due to apparently no one having standing. Even the government itself is apparently no longer allowed to bring antitrust complaints. Legislatures have been ignored recently, with executive departments overriding even the federal Supreme Court in their attempts to exert control.
And we all know how large swaths of GOP voters and a surprisingly high number of Democratic voters feel about the integrity of elections.
Regardless of your views, this is a slow-motion train wreck.
> In America,the first amendment protects you from government censorship. It doesn’t protect you from censorship that occurs naturally in society or the marketplace.
Correct. But even in the marketplace there are limits for things like discrimination.
> We have decided to outlaw libel, slander, perjury, dissemination of child porn or classified info
Actually, those are not constitutionally protected speech. This "censorship" does not come from social norms, as you seem to imply, but from the government itself.
> and we are familiar with many which cause immediate social or business repercussions
The White House said in a press conference that they were flagging posts to Facebook. This can make Facebook a government actor if they were forced, paid, or coerced to do so (which they probably were).
> (generally being an asshole, berating waitstaff, yelling at the top of your lungs in a quiet place, any kind of unwanted speech on private property, etc.).
You can't mute or block or unfollow someone in most normal social situations. This comparison is dubious at best.
> It doesn’t protect you from censorship that occurs naturally in society or the marketplace.
I prefer to associate with those that don't have to be compelled by law to hold up the principle of free speech. Because following the others would only ensure that such rights would have never been implemented.
That is certainly the pro-censorship argument. The common rebuttal is that censorship creates a more homogeneous marketplace of discussable ideas, to which de-anonymised users have their identities (and, at worst, personhood) reduced.
I think there's a happy medium somewhere, reached by treading very lightly around this subject. We are deeply ignorant of our own biases in almost every aspect of decision-making, which is an argument both for and against censorship.
Until there are no bad actors, anonymous platforms will be both (A) needed (if you have a reason to fear retribution for expressing an opinion—from the state, another citizen or some entity), and (B) abused (if you want to dishonestly manipulate public opinion, you can do so without risking your reputation, create masses of duplicate accounts, etc.)—it really looks like a catch-22. Platforms will keep trying to navigate A vs. B, but it’s a losing game.
In a perfect world without mentally unhealthy/malicious people and corrupt states, anonymous platforms would be useless.
Your reply (and many others) assumed that I was advocating censorship, but I did not and I am not. All I am advocating is that whatever your idea is should be reflected on your reputation.
I do find it interesting that there is such a strong revulsion to this concept (“ideas I utter are tied to my reputation”), but anonymous speech is an evolutionarily recent development. Humans have done quite well for tens of thousands of years with a complete absence of it. Somehow our current culture can look at the wasteland of human lives and endeavour brought about by unfettered anonymous speech and still see it as an unalloyed good. Curious.
> All I am advocating is that whatever your idea is should be reflected on your reputation.
I think the difference is when society figured out the internet enabled mobs, and mobs can snap reputations like twigs on a whim. There are folks who deny this occurs, but there are high profile cases that exemplify it, and if you keep your ears open you'll see it happen to laypeople too. Brendan Eich is one such person, as frustrating as he can be at times, and as much as he and I may disagree at times; being cancelled from being Mozilla's CEO was a direct harm to both Mozilla and himself. A harm which was led by a mob that had fuel poured on it by OKCupid.
People didn't just invent a need for anti-censorship, it was always prevalent within our society, but mostly locked to a local level until the internet gave it a global voice.
That's the american-centric view. The global populace is still dealing with authoritarian regimes that want to control all speech, which these tools would aid in fighting against. We had political forces in the form of "think of the children" and "terrorism" which could rise to the level of an authoritarian state in the US, but we're probably a ways off from that. It would be nice if these tools existed (and were updated) by the time that arrives, if it does.
Thanks for your reply. I think you're touching on a different issue as well: "Everything I say is knowable by everyone, forever." For most of human history the words and ideas that came out of most people's mouths were not recorded and were almost immediately forgotten. This tends to winnow out many bad ideas, as only something truly remarkable would bear the effort of writing it down or of remembering it to speak to someone else later.
Our current social media world does not allow people to make mistakes or to learn, explore and develop. That is one of the many reasons I stay off it completely (forestalling any 'HN is social media' comments, I post only here (and that carefully) and from my Reddit porn account (whose reputation has been so thoroughly destroyed, I do not even try to bring good from it.)) It is very liberating. And later, when I take over the world, there will not be 20 years of public posts where I learned who and what I was about to drag me down.
Sorry if I frustrated you. If you frustrate me, I'll try to overlook it. Nevertheless, we are now in a different era, where enough people (a majority or a plurality) can move to cancel someone, and mostly get their way for a time. What to do about it? Pick user-first software that isn't beholden to coordinated cancel mobs. I'm not touting just Brave here, there are many such options now.
The world is complex and nuanced, and any "best possible world" will therefore be complex and nuanced. I did not advocate for the complete removal of all anonymity, which might have prevented important historical events. I advocated for "whatever your idea is should be reflected on your reputation".
In a "best possible world" there is a time and place for anonymous speech. But "I can create troll accounts faster than you can ban them" (which is what we have now) is not on the road to that world.
> Humans have done quite well for tens of thousands of years with a complete absence of it. Somehow our current culture can look at the wasteland of human lives and endeavour brought about by unfettered anonymous speech and still see it as an unalloyed good.
By what standard are you calling the tens of thousands of years of 'a complete absence of anonymous speech' "quite well" and the present "a wasteland of human lives"?
> Censorship isn’t what makes social media and blogging so toxic to our culture.
Facebook, Google, Amazon and Twitter have more institutional power than all elected officials combined. At this point this is beyond toxic, they have become the Robber Barons of culture.
> It’s the lack of reputational cost to bad actors
No, sorry, but cancel culture IS toxic in and of itself. Anxiety among teenagers, especially girls, is at an all-time high, and in adults too.
> But sure. Let’s have another platform that is most likely to become popular purely for illegal uses.
Who will get to decide what's illegal? Most likely not you or me; most likely whoever is in charge will use this power to amass more power. There are no examples where institutional censorship of public discourse wasn't prone to self-serving abuse of power and actually yielded positive results in the end.
> Facebook, Google, Amazon and Twitter have more institutional power than all elected officials combined.
This isn’t even close to true.
> Who will get to decide what's illegal?
The elected officials, as always. I’m so tired of this thought-terminating question. The democratic system is the piece that always closes that feedback loop. If you ever again need to ask “but who watches the watchers?!??!!” the answer is the voters.
Do you have anything better to say than that it's not true? Zuckerberg injected millions into the elections. YouTube has censored public hearings. Twitter has censored a bombshell story and banned a sitting president. Reddit's CEO admitted they have the power to "sway an election". And let's not forget the "fortifying" of the election (https://time.com/5936036/secret-2020-election-campaign/).
> The elected officials, as always.
This is a very naive view of democracy. Politicians can restrict speech, suppress inconvenient information, create and promote false crises, and inject or cut money to media companies so that it helps their re-election, thus creating a positive feedback loop where we see the same sitting president or the same dynasty in power for generations, as in a lot of African "democracies".
In your first response, you defend the idea that tech companies have more power than the US government. In your second response, you decry the power the US government has.
The government can still abuse its power even if corporations have more power in terms of the influence they have, that's not even a contradiction, those two things can be true at the same time.
This also makes sense if you consider revolving-door politics and the merger of corporation and state that has happened over the last 50 years.
If this is such a tiresome question, why is there so much ink spilled about voting rights/integrity? It seems you are one of the few people who think the system ensures oversight. If it isn't corrupt, perhaps. But what if it is? That is the situation many people think we are in now. Our elected officials do not represent the voters. They largely serve the interests of the donor class.
It is a system that is being threatened with irrelevance by institutional corruption. Voting as a feedback mechanism does not seem able to survive the capture of our political and administrative institutions. Other countries demonstrate that elections don't guarantee anything. It matters who makes the rules, and who can buy/count votes. Your response is glib and inadequate to the challenges facing democracy.
I would argue that how these platforms are run, including censorship, is helping to elevate the toxicity present.
Good moderation is a difficult problem. Yet there is no substitute for community standards and eyeballs on text to create a reasonable platform for discussion.
None of the automated-moderation platforms manage it; YouTube, Facebook, and Twitter all have garbage moderation systems.
Censorship resistance has always been a cardinal virtue of online geekdom. Information wants to be free, as the old saying goes. These systems will keep popping up and someone, eventually, may well nail the combination to create an attractive platform for open discourse and draw a major user base away from facebook, reddit, and twitter who are flirting ever more with hushing voices they don't like.
The way it solves censorship is through decentralisation, which solves most of the other problems simply by being free of the issues inherent to abusive centralised social media platforms, e.g. Facebook: ads, optimising for user attention on shit content, and anything else trying to extract value that is not in the user's interest.
However, the problem with decentralised platforms is often more to do with marketing to "normals"; even if the tech is sound, it will not succeed if it is too obscure. If they can make this into an app and get it past Apple's and Google's walled gardens, it might have a chance to compete against Facebook and Twitter.
The one-size-fits-all model is the main problem. It's cultural imperialism.
I think it is mostly solved by relative ranking systems and by giving the user the power to block any terms they don't want. You should be shown content favored by those who have preferences similar to yours.
Yeah sure, the problem isn’t that a small handful of companies get to control what’s on the internet, it’s that we’re not efficient enough at removing dissenting perspectives from society in general.
It's hard to engage with vague claims about social media and blogging being "toxic". But to try to make it concrete: The old media, without anonymity, with the ability to impose reputational costs on actors, lied the US into the Iraq War. They used this ability to impose reputational costs to quash dissent. What's the worst disaster that social media and blogging have caused?
> Censorship isn’t what makes social media and blogging so toxic to our culture. It’s the lack of reputational cost to bad actors and the undeterred gathering of information.
This isn’t an either-or binary; both can contribute to a toxic platform and mistrust. Censorship, cancel culture, bad actors, and venomous users all play a part.
We should be careful about imposing irl laws on the net, because this will only bring the problems of irl to the net. On the net one can have multiple identities, each with varying characteristics, and this is a feature of the net. Reputation is for irl; the net works on merit because of limitless identities... if you mess up, you can start over again.
Nazism isn't illegal everywhere (I concede you may have been referring to other illegal things), so it could be locally legal. It's still wrong and the fact that everyone thinks of these uses first is problematic both because censorship resistance can be benign and because apparently such folks flock to solutions that claim these things.
Sadly that tends to really muddy the waters, and it becomes increasingly difficult to see the distinction between groups that opportunistically use this and the (perhaps) misguided folks who work so hard to give them a platform.
That it relies on bluetooth/wifi rather than on the internet is interesting to me. It means that the network is human-based, and really feels like underground networks in heavily censored states from before the 00s. Think resistance networks in China who would distribute forbidden books, or how American films made their way into the USSR through exchanged VHS tapes.
I'd argue that this is not something to evade censorship from big platforms, but rather to evade surveillance in deeply corrupt countries.
I'd say it's a rather insecure platform if it's meant to evade surveillance. In a heavily censored state, the only types of people who'd use this platform are those who are against the state, and if the state gets hold of this application, it's pretty easy to find out what's being shared just by walking close to one of the users, who are most likely already on a watchlist.
I wanted to try it out, but the code hasn't been updated since 2017 and the last F-Droid release was in 2016. Is this project still useful? Being updated to fix bugs?
Simple. Take a look at the dark net. Anything that exists there but not on the clear net is censored: e.g., pro-Nazi, anti-humanity, hatred, drugs, CSAM, and extreme criminal information are censored from the clear web. Absolute freedom gives you absolute chaos. It is just a matter of degree of "freedom".
facebook is "dark web"[0], everything in my gmail folders is "dark web", anything that john q. public can't see on the internet is literally "dark web"/"dark net" - that is, greater than 90% of all content behind a URI/URL is "dark" - unsearchable.
That the media and politicians use it with a negative connotation speaks volumes. "Dark whatever" != illegal content.
[0] facebook by virtue of their php lineage and gatekeeping have made (and kept) most of the content behind their own walls; unsearchable on any search engine - ironically, this includes their own. contrast this to something like reddit or stackexchange, which will gladly show up in search results.
Evidently so, although one is a subset of the other. I think saying "it requires special software or authentication" merely adds to the confusion, here.
For example, a site may require tor, or it may require a VPN connection to the same network the site lives on - is there a functional difference? And gating content behind authentication would be a good definition for "deep web" too.
However, I can see the appeal of having "dark web" or "dark net" signify illicit things, but we also have "dark fiber", so something will have to give.
All of which existed prior to the Dark Net. What allows unwanted marginals to have a digital presence is also what allows the dissents of today and tomorrow to exist.
Free speech is hardly the only value in a democratic state.
The First Amendment protected Internet intermediaries from obligations to censor, while at the same time rebuffing efforts to impose stricter privacy obligations on Internet enterprises. The First Amendment thus created the business model of new media, permitting it to publish vast amounts of speech but not be held liable for that speech, while at the same time earning income through advertising based on personal profiling. For the first time, individuals could now speak to the nation through YouTube, Twitter, etc. Profiting from lies is now a viable economic model. It’s a threat to democracy and undoubtedly will be its undoing unless we censor.
> Free speech is hardly the only value in a democratic state.
You're making a straw man argument, I never said it was the only value. Still, a democratic state can't function without free discourse.
> The First Amendment protected Internet intermediaries from obligations to censor
Not exactly. While they would not be forced by the State, people could still sue them for the content. This is why Section 230 exists, which allows providers not to be liable for content posted on their platform provided they do not select content, unless they do so under the Good Samaritan clause (offensive content, criminal content, etc.).
> while at the same time rebuffing efforts to impose stricter privacy obligations on Internet enterprises
Unauthorized publishing of private information was never 1st amendment protected speech, what are you talking about?
> The First Amendment thus created the business model of new media, permitting it to publish vast amounts of speech but not be held liable for that speech
Again, see section 230.
> while at the same time earning income through advertising based on personal profiling
Which, again, has nothing to do with the First Amendment.
> For the first time, individuals could now speak to the nation—through YouTube, Twitter.. profiting from lies is now a viable economic model.
Snake oil merchants existed way back, and democracies thrived all the same. Magazines, journals, public discourse, universities, books... all existed way back. Really, you should read old magazines from the 1920s. I personally have one from the '20s where Nikola Tesla made some outrageous claims, complete with ads selling complete BS. Making a profit off BS is nothing new.
> It’s a threat to democracy and undoubtedly will be its undoing unless we censor.
This reads like the beginning of a dystopian movie. At least the cat is out of the bag, I guess.
The problem is, the people with that kind of power will use that same power to stay in power. This always happens, even if they are elected. Censorship has never led to a better society.
At the end of the day, censorship requires sole trust, and sole trust breeds corruption. Every. Single. Time. This hasn't changed, and it will not. But I don't think censorship and big-state advocates will find that a good enough argument, even though it really should be.
TL;DW: The operating system appends a hash every time the file is copied, making it possible to track the original source of a file without requiring another tracking mechanism (i.e., the internet).
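A toy illustration of what such copy-tracking could look like. The provenance-chain scheme and function names below are my guesses at the idea being summarized, not a real OS feature:

```python
# Sketch of copy-tracking via an appended hash chain: each copy
# operation adds one link, so the chain records the route a file
# took from device to device. Purely illustrative.

import hashlib

def copy_with_provenance(content, chain, device_id):
    # Each hop hashes (previous link + device id) onto the chain.
    prev = chain[-1] if chain else hashlib.sha256(content).hexdigest()
    link = hashlib.sha256((prev + device_id).encode()).hexdigest()
    return content, chain + [link]

data = b"leaked.pdf"
_, chain = copy_with_provenance(data, [], "device-A")
_, chain = copy_with_provenance(data, chain, "device-B")
print(len(chain))  # 2 - one link per copy, revealing the route
```

The surveillance implication is that anyone who can read the chain (and knows the device ids) can reconstruct who copied the file from whom, with no network connection required.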
Unfortunately, like most platforms with no censorship, they get overrun by racism and incels. They eventually run out of money and shut down. What a terrible business model.
Indeed, a [genius marketing ploy to/terrible gaffe not to] predict in 2014 that a ~1 year old video startup nobody heard of with the same name would in 2020 become popular due to its censorship position.