Reputation vs Anonymity

One of the main features of SourceCred is to credibly reward people’s contributions. A natural consequence of this is that those who contribute more will have more reputation. This is generally considered a natural and good thing.

Throughout history, reputation has created power dynamics within communities. Those with reputation can trade, collaborate, and communicate more effectively than those without. This trust minimizes the due diligence, uncertainty, and paranoia that can come with interacting with others. This is generally considered a natural and good thing.

What’s not natural is being in a 24/7 globally connected world where anyone, anywhere, is just a click away from interacting with anyone else. When our monetary and governance systems are evolving at the speed of software, face-to-face reputations that take years to build will not do. Our village mindset will not scale. We need new systems that establish reputation while also allowing participants to remain pseudonymous.

There are a few reasons for this:

  • Anyone anywhere can interact with us at any time. We don’t want to make ourselves vulnerable to attack.
  • While our world is generally moving towards a fairer and more open society, biases are still rampant. We don’t want people to be discriminated against for any reason. Can’t > won’t.
  • The struggle is real. We live in a physical world with physical constraints, and resource acquisition is still top of mind for most people. Money is our proxy for resources. When we embed money into social networks, open source projects, and communities, it changes the dynamic. Allocating resources (cred) meritocratically to pseudonymous actors minimizes a whole class of political and personal debates that could otherwise arise.

SourceCred has the potential to allow participants to contribute, be recognized, and be rewarded in a pseudonymous and meritocratic way. This is possible because the SourceCred algorithm itself is transparent and publicly auditable, while the participants are not. This design is critical to empowering users and allowing them to choose how and when to disclose information. AFAIK, this is the first protocol to make this possible at scale.

While SourceCred solves a lot of problems, it will probably create new ones too. This thread is to discuss how to design the SourceCred system so that members can participate anonymously, but also meritocratically earn reputation which they can use to influence the systems they are part of (aka governance). This is bold new design space, so please weigh in with your thoughts! :slight_smile:


One tension I see here is sockpuppets. I’m all for allowing people to be anonymous (and potentially to have multiple anonymous identities). However, this starts to break down if people use their multiple identities to launch Sybil attacks or otherwise game the cred.

SourceCred has more “native robustness” to Sybil attacks than “one-identity-one-vote” systems. In fact, I suspect (and hope) that it will be easier to acquire a lot of cred with one identity than across two, because of the increasing returns to reputation that you mention. So I think rational agents will prefer to have one identity (or at least, only a few).
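To make the “splitting doesn’t help” intuition concrete, here’s a toy sketch (not the actual SourceCred algorithm; the graph, weights, and damping factor are made up) of a personalized-PageRank-style cred score with all seed weight on a trusted project node. Splitting one contributor’s inbound flow across two sockpuppets leaves their combined score unchanged:

```python
# Toy cred score via power iteration. All teleport weight goes to a
# trusted "seed" node, so cred only reaches identities through real
# edges from the project.

def cred_scores(edges, seed, d=0.85, iters=200):
    """edges: dict src -> {dst: weight}. Returns node -> score."""
    nodes = set(edges) | {v for out in edges.values() for v in out}
    r = {n: 1.0 if n == seed else 0.0 for n in nodes}
    for _ in range(iters):
        # Teleport mass (1 - d) goes entirely to the seed node.
        nxt = {n: (1.0 - d) if n == seed else 0.0 for n in nodes}
        for src, out in edges.items():
            total = sum(out.values())
            for dst, w in out.items():
                nxt[dst] += d * r[src] * w / total
        r = nxt
    return r

# One honest identity...
one = cred_scores({"project": {"x": 1.0}, "x": {"project": 1.0}}, "project")
# ...vs. the same contributions split across two sockpuppets.
two = cred_scores(
    {"project": {"x1": 0.5, "x2": 0.5},
     "x1": {"project": 1.0}, "x2": {"project": 1.0}},
    "project",
)
# Splitting alone gains nothing: combined sockpuppet cred equals
# the single identity's cred.
assert abs(one["x"] - (two["x1"] + two["x2"])) < 1e-9
```

If reputation then has increasing returns (cred-weighted trust, review rights, etc.), the split is strictly worse, since neither sockpuppet alone reaches the standing the merged identity would have.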

However, it’s also quite possible that someone running multiple identities can use these identities to game cred, e.g. having one “impartial curator” identity that just happens to always create big edges pointing towards the “eager implementer” identity.

However, this problem is not unique to one person sockpuppeting. Even if we had ironclad one-person-one-identity rules, people could still form secret cliques and run the same attack. So the solution is not to try to control people’s identities; it’s to have a system that can detect corrupt dealing and moderate it. I’m thinking of @s_ben’s cred defenders as the agents empowered to search for these cliques. On the technical side, I have ideas for tools that will make it easy to see which identities are “flowing” cred disproportionately to specific other identities, so that cliques are easier to detect.
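As a sketch of what such a tool might look like (a hypothetical helper, not an existing SourceCred feature; the names and the 0.5 threshold are made up), one cheap first pass is to flag any identity that sends a disproportionate share of its outgoing cred to a single other identity:

```python
# Hypothetical forensic helper: given a weighted edge list of cred
# flows, report pairs where one identity sends a suspiciously large
# share of its outgoing cred to a single other identity.
from collections import defaultdict

def concentrated_flows(edges, threshold=0.5):
    """edges: iterable of (src, dst, weight).
    Returns [(src, dst, share)], sorted by share descending, where the
    share of src's outgoing weight going to dst exceeds `threshold`."""
    out_total = defaultdict(float)
    pair = defaultdict(float)
    for src, dst, w in edges:
        out_total[src] += w
        pair[(src, dst)] += w
    flagged = []
    for (src, dst), w in pair.items():
        share = w / out_total[src]
        if share > threshold:
            flagged.append((src, dst, share))
    return sorted(flagged, key=lambda t: -t[2])

edges = [
    ("curator", "implementer", 9.0),  # the "impartial curator" pattern
    ("curator", "alice", 1.0),
    ("alice", "bob", 2.0),
    ("alice", "implementer", 2.0),
]
print(concentrated_flows(edges))
# -> [('curator', 'implementer', 0.9)]
```

A flag is only a signal for human review: plenty of honest pairs (say, a maintainer and their main reviewer) will trip it, which is exactly why a cred defender, not the algorithm, should make the call.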


While malicious sockpuppets are problematic, so is defining malicious. If the sockpuppets are just out to farm cred and scam the system, that is obviously bad. Less obvious would be, for instance, participants coming together to fight perceived tyranny. One real danger with these systems is that, absent a formal hierarchy (org chart), hidden hierarchies can form. As Foucault pointed out, the trend throughout history is for those in power to protect and increase their power by hiding that they hold it. Brilliant podcast on this below if anyone has the time.

I recently read a modern account of this phenomenon in the article below.

The author describes working at GitHub pre-acquisition, when it had a flat hierarchy and even open allocation. They were apparently walking the OSS walk until pretty late in the game, when they had to start getting corporate to scale and prepare for assimilation.

For years, my co-workers told me, the absence of an official organizational chart had given rise to a shadow chart, determined by social relationships and proximity to the founders. As the male engineers wrote manifestos about the importance of collaboration, women struggled to get their contributions reviewed and accepted. The company promoted equality and openness until it came to stock grants: equity packages described as “nonnegotiable” turned out to be negotiable for people who were used to successfully negotiating. The name-your-own-salary policy had resulted in a pay gap so severe that a number of women had recently received corrective increases of close to forty thousand dollars. No back pay.

Back to sockpuppets: what if some women at GitHub had been secretly chatting with each other, strategizing to use a now-common organizing technique in which women repeat each other’s statements to fight sexism? From an article on this:

Female staffers adopted a meeting strategy they called “amplification”: When a woman made a key point, other women would repeat it, giving credit to its author. This forced the men in the room to recognize the contribution — and denied them the chance to claim the idea as their own.

Would an algorithm be able to distinguish legitimate power building from illegitimate power building?

The more I study reputation systems, the more I think this could be the key invention here. To my knowledge, nearly all voting schemes suffer from a lack of Sybil resistance. SC should just inherently have it, due to the nature of meaningful work over time.

Yep. What I’m doing now. :)

Keep circling back to something like this. I suspect that gaming will be “bimodal” (correct term?), in the sense that there will mostly be either obvious, spammy gaming/trolling, or subtle gaming, which could be legitimate power struggles between humans and could actually be healthy for the system over time. Kill the more obvious gamers and SC is on its way. The main problem is protecting against collusion. How do you keep the cred defender, for instance, from being a bad cop on the dole? In smaller communities, I think this won’t be as big a problem, as everyone knows each other and it would be obvious. But at larger scales it is to be expected. Perhaps the cred defender is an outside party, something like an Aragon Court juror. Perhaps it is a sortition randomly sampled from the community, who vote anonymously so they can be honest without fear of repercussion. Perhaps SC doesn’t scale vertically, but horizontally, with small clusters of people that can trust each other more easily, trading grain with other clusters.
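The sortition idea could be as simple as drawing a jury without replacement, with selection probability proportional to cred. A minimal sketch, assuming a community is just a mapping of identity to cred (nothing here is a SourceCred API):

```python
# Cred-weighted sortition: draw k distinct jurors, with each draw
# weighted by remaining identities' cred.
import random

def draw_jury(cred, k, seed=None):
    """cred: dict identity -> cred score. Returns k distinct identities."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    pool = dict(cred)
    jury = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        jury.append(pick)
        del pool[pick]  # without replacement: no double jury seats
    return jury

community = {"alice": 120.0, "bob": 80.0, "carol": 40.0, "dave": 10.0}
print(draw_jury(community, k=2, seed=7))
```

Weighting by cred keeps the jury meritocratic; the randomness is what makes pre-bribing any particular defender hard.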

One thing to keep in mind is that some cliques aren’t bad. If this is an MMO, we don’t want to prevent friends from forming guilds because that is technically corruption. But I also think a tool like this could be useful and possibly necessary, if applied equally and skillfully. Perhaps as a forensic tool for a cred defender, or just to monitor and better understand the system (more visibility for everybody could help smaller groups of people build trust and form healthy community). Many uses. I also think we’re going to need every tool in the box to solve the collusion problem. It’s maybe the biggest threat I see, and I’m very curious to see when it first emerges.


Incentivizing people to discover and flag unwanted behavior is a winning strategy, whether in financial markets (shorting), Proof of Stake blockchains (challenging false claims to earn the staker’s rewards), or security (bug bounty programs). Having more tools to empower the community to take on that role (and be incentivized to do so) would be a huge win.

The Aragon Court (afaik) intends to exist in the same way the traditional legal system does: it’s there for enforcement, but it’s expensive. Often the threat of enforcement is enough to make people follow the rules. It’s also going to be slow(ish) relative to the speed of software.

For cred defenders you would (probably) need an active set of participants who can make judgements often and quickly, flagging content/users for other cred defenders to review. Then, if those reviews result in a “judgement” by the SourceCred community, the user could contest the claim. In that case they could open a case to be settled by the Court, but the Court should not be the first line of defense.
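One way to model that escalation path (the states and transitions here are my assumptions, not a SourceCred or Aragon spec) is a small state machine where the Court is reachable only after a community judgement has been contested:

```python
# Hypothetical escalation flow: flagged -> reviewed by cred defenders
# -> judged by the community -> optionally contested -> Court as the
# last resort. Invalid jumps (e.g. straight to Court) are rejected.

ALLOWED = {
    "flagged":      {"under_review"},
    "under_review": {"judged"},
    "judged":       {"closed", "contested"},
    "contested":    {"court", "closed"},
    "court":        {"closed"},
}

class Case:
    def __init__(self):
        self.state = "flagged"

    def advance(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"can't go {self.state} -> {new_state}")
        self.state = new_state

case = Case()
for step in ("under_review", "judged", "contested", "court", "closed"):
    case.advance(step)
print(case.state)  # -> closed
```

The point of the structure is exactly what’s argued above: most cases should terminate at “judged” or “contested”, and only the rare contested judgement ever pays the cost of the Court.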


I suspect this more “active” role in enforcing rules/laws will become more prevalent in society generally. I’ve heard there is a small movement to do this with regulation in the US.

Yeah, I imagine something like a court wouldn’t be used much. However, I can imagine fairly regular scenarios where it gets used, e.g. moderator decisions that escalate into drama and would be far better directed to mediation.

And the thing is, often these moderators are making tough calls on the bleeding edge of issues still being worked out. Not to mention the blurred line between challenging authority and being “disruptive”.

I don’t think this is something to worry about for now, but it’s an area I’m following with interest.

How does this relate to regulation? I’m talking about economic incentives within a game (a game-theory “game”) that incentivize participants to take actions that benefit the network (mechanism design), and doing that within SourceCred to incentivize people to flag spam.

Just talking about a general trend, not implementing regulations per se. For instance, whistleblower laws in the US now pay whistleblowers a considerable amount of money if their case is proven, which has increased the number of whistleblowers coming forward. Similarly, if we provide the right incentives, people will hopefully be proactive in enforcing the rules, in a more active way than typical regulators, who tend to be reactive.
