Minting Discourse Cred on Likes, not Posts

While I think like minting is a step in the right direction (or at least worth exploring), I share @Beanow’s concerns here. In particular the bandwidth issue. I can barely keep up with Discourse alone these days (taking a break from my Sunday to drop in and write this because I think it’s important, but I’m sure I’m missing other things), and we’re still a relatively small group. We could easily see a situation where people’s efforts just go unnoticed and they give up before getting meaningful engagement.

There is also the problem of collusion, which we’ve acknowledged is the main attack vector for using likes. It’s easy to imagine a situation where a “cred elite” quickly forms and uses the withholding of likes to gatekeep for profit, excluding people. In fact, between researching my article on reputation systems and DAOs and my experience so far in them, I’ve come to the conclusion that all the meatspace problems (gossiping, cliques, coercive power relations, etc.) not only show up in virtual spaces, but are often amplified. When reputation systems based on likes are scaled out to millions of users (Twitter, Facebook, IG, etc.), I think we can all agree some nasty dynamics develop, including winner-take-all markets where the average contributor has little to no real power.

We also want to protect “dissident” voices. If someone raises an unpopular opinion or idea, we don’t want their income censored (the most effective way to silence people). What if a woman in a predominantly male Cred instance started voicing concerns about sexism? Would this system allow the male leadership to effectively reduce her income to zero by not liking her contributions? Even if the chance of that happening is slim, the perception that it might could lead to unnecessary self-censoring and yes-manning/womaning/personing.

That said, I still think likes are an important signal to add. Likes seem like the best way for groups to reach consensus on individual contributions (we can’t, after all, have a community-wide vote on the value of every little contribution). There’s a reason Twitter/Facebook/IG/Google have taken over the world (and, in an indirect way, funded SourceCred via employment, I might add). An example I draw inspiration from is Crypto Twitter (CT). Much as people bitch about it, I’ve come to view it as a revolutionary way to reach consensus on scales never seen before. It’s a consensus-reaching machine. In fact, some argue that many crypto projects lacking formal governance (e.g. Bitcoin) are basically governed on the social layer, largely on Twitter. There’s some truth to that IMO. I even suspect that one reason democracies are cracking is that social media has become the main governance mechanism, and we’re just slowly waking up to that. But I digress…

I also think we can learn from other systems how to mitigate the downsides and toxicity of like-based consensus. For instance, Twitter’s main defense against unhealthy dynamics (that doesn’t use proprietary closed-source AI) is blocking/muting/unfollowing and human moderation. Twitter has also just started research into decentralizing its network, allowing individuals or smaller groups to determine their own feed rules (a strategy discussed for SC in a community call a couple weeks ago, and elsewhere). Smaller SC communities, or teams within larger communities, making their own custom rules could be an effective way to mitigate these issues. We could even end up front-running these larger companies by being able to iterate faster.

To elaborate on this, a longer, less meme-y definition of this idea for me is: “No contribution that the community sees as valuable over time should lack a node in the graph (w/ positive score).” The danger we want to avoid here is that, by rewarding everything, we reward undesirable behavior. For instance, a well-meaning but low-skill contributor comes in and starts generating a high volume of posts, which are not appreciated and are a distraction to other contributors. The community is friendly and welcoming (maybe), but the contributor isn’t willing or able to conform to the community’s norms. They don’t take cues on the social layer, and just keep firehosing the community with mini manifestos, slowing down work. They’re not explicitly breaking any rules that would cause a ban (which should generally be reserved for extreme cases). The community doesn’t want to be mean. In this case, if the contributor is being paid for activity, they will only produce more posts. Another example could be a contributor who is doing valuable work but violating the code of conduct. In many (if not most) situations, it won’t be an extreme case where someone is straight up hateful or trolling. It could be a situation where some view the person as problematic and others don’t.

One helpful distinction IMO is: you can’t use people’s work without flowing them cred. If, in the messy grey areas and politics of a Cred instance, a contributor is pushed out of the graph (which will be necessary at times), the community can’t turn around and use their work without crediting them (which, on principle, should flow a non-zero amount of cred). This serves as a pragmatic signal to distinguish nodes that should legitimately be at 0 from those that have been treated unfairly. You can’t reject the person and then use their work without compensation. This ties back into "Cred Historians" and Curators, which may be something to prioritize if we pursue like-based cred.

I think having some amount of activity minting (or at least having it in the codebase so we have optionality around adding it later) is a good way to mitigate a lot of these issues, and it gives us tools that could be valuable as we iterate and learn. Mixed activity/like cred minting (which is what the PR does now, as I understand it) seems like a good way forward.
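To make the mixed-minting idea concrete, here’s a toy sketch. The weights and function names are hypothetical (illustrative only, not SourceCred’s actual implementation): the point is just that activity mints a small baseline of cred, so likes can’t be withheld to zero someone out entirely, while likes still carry most of the signal.

```python
# Hypothetical sketch of mixed activity/like minting.
# Weights are illustrative, not SourceCred's real parameters.
ACTIVITY_WEIGHT = 0.25  # cred minted per post, regardless of reception
LIKE_WEIGHT = 1.0       # cred minted per like received

def minted_cred(num_posts: int, num_likes: int) -> float:
    """Total cred minted for a contributor under a mixed policy."""
    return ACTIVITY_WEIGHT * num_posts + LIKE_WEIGHT * num_likes

# A contributor with 4 posts and 3 likes still mints a baseline
# from activity even if likes are withheld (mitigating gatekeeping):
print(minted_cred(4, 3))  # 4*0.25 + 3*1.0 = 4.0
print(minted_cred(4, 0))  # 1.0 — nonzero even with zero likes
```

Tuning the two weights is then a community-level knob: pure like minting is `ACTIVITY_WEIGHT = 0`, pure activity minting is `LIKE_WEIGHT = 0`, and the mixed policy sits in between.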
