Separation of Powers + Checks and Balances

Nice to see the cred ‘Gini coefficient’ trending lower :) These categories and scores feel about right… I like that the participation metric gives voice to the long tail.

Could be useful for voting. I imagine we might see differing balances of power depending on what was being voted on and the turnout. A big, important decision such as Grain issuance would presumably turn out the Cred Power and Grain Power. Smaller decisions impacting the day-to-day of current contributors (e.g. how should we spend small amounts of money in an initiative, or a minor policy change) may turn out the Participation Power, giving them more leverage where it matters.

I might experiment with distributing some of the weekly Grain based on Participation Power. Right now the Grain flows concentrate really strongly with people who are the most committed (and thus get the most cred). It would be cool if you could accumulate meaningful Grain just by being consistently engaged over long periods of time.

Also, if we are going to enfranchise the “participant class” with explicit political power, it just makes sense that they would get explicit Grain flows too.

Starting next week, I will likely change the Grain payout so the 20% we’re already distributing based on “weekly cred” will get distributed based on “participation” instead.


Would have to not feed the trolls (they’re very engaged), but this might be a good catchall for contributions that fall through the cracks too, because they’re not part of an initiative or liked by those with lots of cred.

Sounds like a good experiment.

Like this idea as well. I feel like the 20% fast payout always had the goal of providing more inclusion opportunity. If this formula achieves that better, that seems objectively better to me. It might even be a reason to shift the balance to, say, 30-40%, in line with the recruitment goals.

On top of that, how about we split this idea out into its own proposal topic and do a test-run of voting on it? :smiley:

It’s definitely interesting. Some limitations also stand out to me.

@Protocol at 0.6% seems like a measurement problem. They’re mostly invisible in the value they’re adding, having only a Grain stake.

@mzargham at 2.8% seems low in certain contexts. In part this shows the raw activity bias, but it also hints at what this metric might be useful as. I would argue this is a proxy for skin in the game, not a predictor of who’s likely able to make an informed vote.

For a question like “do we want to spend more resources on X?”, the skin-in-the-game measurement seems great. For a question like “should we adopt this proposed new algorithm?”, I think it fails to recognize who the domain experts might be.

Yeah, the fact that PL has zero cred is a bug and not a feature. I think the Retroactive Initialization will flow more cred to PL, for things like providing the operational support for the CredSperiment, helping found SourceCred, and paying me to work on it.

Yeah, I expect he’ll also be a beneficiary of retroactive initialization.

I like this idea! I think I’ll first build a notebook that explores just the participation power concept, since it has some parameters (e.g. how fast participation power time-decays). Then in addition to voting on whether or not to tie the fast payouts to participation power, we can also vote on different parameter settings.
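
To make that concrete, here’s a minimal sketch of time-decayed participation power, assuming participation is logged as discrete weekly events and decays exponentially. The event format, function name, and `halfLifeWeeks` parameter are all invented for illustration; this is not SourceCred code:

```typescript
// Hypothetical sketch: each week of activity logs one participation event,
// and an event's weight halves every `halfLifeWeeks` weeks.

type ParticipationEvent = {
  contributor: string;
  weeksAgo: number; // how long ago the participation happened
};

function participationPower(
  events: ParticipationEvent[],
  halfLifeWeeks: number
): Map<string, number> {
  const power = new Map<string, number>();
  for (const { contributor, weeksAgo } of events) {
    // Exponential decay: weight halves every halfLifeWeeks weeks.
    const weight = Math.pow(0.5, weeksAgo / halfLifeWeeks);
    power.set(contributor, (power.get(contributor) ?? 0) + weight);
  }
  return power;
}

// alice was active the last two weeks; bob was active half a year ago.
const events: ParticipationEvent[] = [
  { contributor: "alice", weeksAgo: 0 },
  { contributor: "alice", weeksAgo: 1 },
  { contributor: "bob", weeksAgo: 26 },
];
console.log(participationPower(events, 12)); // Map { alice => ~1.94, bob => ~0.22 }
```

Sweeping `halfLifeWeeks` in a notebook would expose the trade-off: a long half-life rewards consistent engagement over long periods, while a short one concentrates power with whoever is active right now.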


+1000 on this! Would be great to create a simplicity artifact (or maybe a usability artifact) to help make this happen. Keeping things clear and simple is essential for success :slight_smile:

Same!

+many

If the only way to get “skin in the game” is to buy it, then yes. If you can earn it, however, then it’s a different story. Reputation that is earned and cannot be bought addresses this.

My experience with this is that it’s mostly correct, except when it’s not. If any one whale has an opinion on something then the community just backs off. They know it’s a fight they can’t win. Over time this results in people generally just discussing things that the whales approve of. Even if the whales don’t vote, they shape the community landscape considerably.


This… I unfortunately have not seen a project without this dynamic to some extent.

One way to address this, I think, would be making people’s income “uncensorable”. Most people wanting a voice in a decision (and who have skin in the game) are either a) working for a traditional company and can be fired, or b) working for a crypto project with a centrally controlled dev fund that can defund their work. People generally aren’t going to risk their livelihood/community by speaking out. Over time they just learn from the others around them and start self-censoring according to the whales’ opinions.

Reputation systems provide the opportunity to transcend the binaries of employment, and allow rewards to flow from the value a contributor provides to the project, as determined by the community as a whole, not just by the boss/whales. Even if a whale has enough power to sway votes their way, I believe that giving people the power to speak up and freely express themselves without fear of losing their income would be huge. That alone could make for a much better working experience and better decision-making over time, as good ideas won’t get killed on the social layer before they have a chance to make it to the governance layer (or at least inform the discussion).

Another important aspect I think is anonymous voting. Not sure where that comes into play here, but there’s a reason America’s voting system is anonymous. If someone can prove you voted a certain way, they can coerce you to vote a certain way. And will.

Another strategy is to “localize” votes. Whales by definition don’t tend to worry about small decisions. Their time and attention is limited. If there were votes, for instance, about small funding decisions within a bigger project, or minor policy change, they may be happy to abstain and let the “plebs” in the trenches with more knowledge about the subject determine the outcome, or be fine with being excluded by some mechanism.

Yet another strategy that’s been discussed involves mechanisms like quadratic voting, which SourceCred could be in a good position to experiment with, as contributions over time are generally more Sybil-resistant than other metrics for determining who the humans are.
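
For reference, the core quadratic voting rule is small enough to sketch. This is the generic mechanism, not anything SourceCred-specific:

```typescript
// Quadratic voting: casting n votes costs n^2 voice credits,
// so influence grows only as the square root of what's spent.

function votesFor(creditsSpent: number): number {
  return Math.sqrt(creditsSpent);
}

function costOf(votes: number): number {
  return votes * votes;
}

console.log(votesFor(1));   // 1 vote for 1 credit
console.log(votesFor(100)); // 10 votes for 100 credits: 100x spend, 10x votes
console.log(costOf(10));    // 100 credits to cast 10 votes
```

This is also why Sybil resistance matters here: splitting 100 credits across ten fake identities buys roughly 31.6 votes instead of 10, so quadratic voting only works when you know who the humans are.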


I don’t think just earning Cred solves this chicken-or-egg issue, as spending the time to earn Cred comes with an opportunity cost you might not be able to afford. Though my point was more about governance here: if there’s a new policy on the table that further favors established contributors over new contributors, that would be very relevant to a group of people who haven’t met a “skin in the game” bar, and I think it would be good to include them.

Absolutely agree with the basic assertion here. I’m just more concerned about spam and trolls. There needs to be some barrier, however small, or the contributor experience is going to degrade as the project grows. I think that Discourse (created by a co-founder of Stack Overflow, who knows a thing or two about reputation networks), for instance, has done this in a smart way with trust levels. The restrictions on permissions for brand new accounts aren’t onerous, and the gamification element (badges, etc.) provides a clear path to more permissions. Reddit is also an inspiration. I know the most toxic communities get all the press, but there are some large, skillful communities that downright give me hope for humanity. And most make ample use of moderation tools, typically customized by the mods. Perhaps what I’m arguing is that there should be some barriers, whether in the governance system or just moderation tools, but that they shouldn’t be high enough to keep an honest actor out for lack of time or financial resources. I don’t think spending the time to make a single small contribution (e.g. a typo fix, or a post that gets 1 like) to get some cred is an unreasonable opportunity cost to start voting.


Hi! Popping by here; this is maybe my favorite (and definitely the most intellectual) discourse.

It’s good to raise this question now. As has been mentioned, unlike (for example) the American structure, in which the groups involved in the checks-and-balances are well-defined (mostly), here they are definitely not defined, which means the otherwise-reasonable idea of giving each group hard-coded powers is probably not going to work (not that that worked so well in Lebanon).

A mental model I’ve found useful when thinking about other digital tools for self-governance is to divide the tool into three “levels”:

  1. The code itself, which generally speaking cannot be changed from “within” the system, but only from without (e.g. the SourceCred code)
  2. Any “hyperparameters” which shape the behavior of the system but are designed to be changed by the participants within the system, usually via some collective decision process (e.g. parameters pertaining to Grain distribution; see the sketch after this list)
  3. Finally, the action space available to individuals within the system (e.g. posting, making pull requests, etc.)
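
Here’s the sketch promised above: a hedged TypeScript illustration of the three levels (all names invented, not actual SourceCred code), casting level 2 as a governable config, level 3 as an action space, and level 1 as the code that interprets both:

```typescript
// Level 2: hyperparameters, designed to be changed from within the system
// via some collective decision process. (Fields are hypothetical.)
interface Hyperparameters {
  weeklyGrainBudget: number;         // total Grain minted per week
  fastPayoutFraction: number;        // share paid out via the "fast" metric
  participationHalfLifeWeeks: number;
}

// Level 3: the action space available to individual participants.
type Action =
  | { kind: "post"; topicId: string; body: string }
  | { kind: "pullRequest"; repo: string; title: string }
  | { kind: "vote"; proposalId: string; choice: "yes" | "no" };

// Level 1: the code itself, which interprets the above but can only be
// changed from outside the system (e.g. by merging a PR to the repo).
function weeklyFastPayout(params: Hyperparameters): number {
  return params.weeklyGrainBudget * params.fastPayoutFraction;
}

console.log(
  weeklyFastPayout({
    weeklyGrainBudget: 5000,
    fastPayoutFraction: 0.2,
    participationHalfLifeWeeks: 12,
  })
); // 1000
```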

Level 1 is unrestricted (except perhaps by norms), as any possible change can be contemplated, while levels 2 and 3 contain hard restrictions. If we assume that the most likely conflict will be between those with power and those without (i.e. “old guard” and “capital” vs “new blood”), and thus the goal is to guarantee newcomers a fair game, then it seems inevitable that restrictions will need to be placed at Level 2 to limit the extent to which the old guard and capital can shape the system in their favor.

I came across a charming historical example recently that I think applies. Apparently when Gödel was being naturalized as a US citizen, he found a “flaw” in the constitution by which the US could become a dictatorship – in his view, the fact that the constitution allowed for amendments to itself (Level 2) meant that the constitution in fact guaranteed nothing, since it could always be possible, under some hypothetical extreme circumstance, to “amend” the constitution and remove all protections.

Coming back to SourceCred, one of its strongest attributes IMO is the emphasis placed on inferring cred from real-world actions – evoking our favorite rationalist metaphor, making sure that the “map matches the territory”. It seems (as someone who is quite removed from the goings-on) that the hard mappings onto comments, pull requests, etc. achieve that, while the attempts to “compensate” for perceived limitations by letting people adjust the cred graph (boosting, I think, and other “magician”-type activities) are what create vectors for abuse of power. To what extent is it better to leave well enough alone, rather than provide too much discretion in making the map?

I’ll end with a nugget from James Madison, who pointed out that

in every political institution, a power to advance the public happiness involves a discretion which may be misapplied and abused.

Are there powers which are best withheld entirely?


Also agree on spam/troll/Sybil protection; voting is a mechanism that, because of those risks, comes to rely on a degree of trust. At a ballot box this is done by proving citizenship and age. That puts, for example, people in a different country being exploited by a company based in a jurisdiction I have voting power in at a disadvantage when it comes to changing their situation. Not saying that makes the protections for a functional voting system bad, rather that these are limitations of a voting system for achieving maximum inclusion.

The open community call is a great example of this. Using it as a way to influence decision making is informal, but the effects are important. Imagine every annual shareholder meeting was required to include a 1h raw call with workers all the way down the supply chain, to the sweatshop workers and miners. It doesn’t give them a vote, but it gives them a voice.

Since we’re greenfield here, I’d like to think along the same lines about what it would take to achieve greater inclusion. Votes will probably have a scope limited to “community insiders”, if you will. So with regards to the question of separating powers and checks and balances, I’d consider additional mechanisms.


Great to see you, @kronosapiens!

Yeah, I really like the idea of giving each group some hard-coded powers, but right now the contours of the groups aren’t clear enough. For now, I’m inclined to start an (initially non-binding) voting process in which each group has a certain number of votes, and I start seeking ratification for any changes to the system. You can think of it as a live dogfooding/alpha of SourceCred governance. As we play it, we’ll learn about the interest groups that form.
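
A loose sketch of what tallying such a non-binding vote could look like – the group names, vote allocations, and threshold are all placeholders, not a settled design:

```typescript
// Placeholder groups and vote allocations for the (initially non-binding)
// ratification process; none of these numbers are decided anywhere.
const groupVotes: Record<string, number> = {
  credPower: 5,
  grainPower: 5,
  participationPower: 5,
};

// Each group arrives at a single yes/no position; the proposal is
// ratified if the weighted "yes" share clears the threshold.
function ratified(
  decisions: Record<string, boolean>,
  threshold = 0.5
): boolean {
  let yes = 0;
  let total = 0;
  for (const [group, approve] of Object.entries(decisions)) {
    const votes = groupVotes[group] ?? 0;
    total += votes;
    if (approve) yes += votes;
  }
  return total > 0 && yes / total > threshold;
}

// Two of three equally weighted groups approve -> ratified.
console.log(
  ratified({ credPower: true, grainPower: true, participationPower: false })
); // true
```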


It’s true: any capability the system has is a capability that can be used for abuse. However, if we over-index on this fear, we may design a system with too few capabilities to survive and flourish.

The process of amending the Constitution is a good example. The self-amendment capability of the Constitution is one of the vital features that makes it so robust. In fact, the Constitution owes its own ratification to the first 10 amendments (the Bill of Rights); they were necessary to placate anti-Federalist concerns. Since then, the amendment process has produced changes like the abolition of slavery, women’s suffrage, and presidential term limits.

I believe it is strictly impossible to design a political system which enables amendments like the above, but cannot be corrupted. Consider the 19th Amendment, which gave women the vote. If it is possible to redefine the category of people allowed to vote, then logically that same power can redefine the category in ways that arbitrarily favor a dictator (in the limit: “only the dictator may vote”).

I suspect there’s something analogous to Turing completeness here: once you have a sufficiently capable political system, it can compute arbitrary political functions, some of which are very different from its creators’ intentions and values.

(However, even if we could come up with a theoretically perfect political system, it would still be vulnerable to side-channel attacks: corruption, political violence, and behavior from external systems and actors generally.)

It’s timely that you bring this up, since we’re currently spec-ing / prototyping the Supernode system, which allows for discretionary modifications to the Cred map.

Basically, we are switching from an “Activity Map” (the entire graph is raw activity data) to an “Annotated Map” (most of the graph is raw activity data, but annotations guide the cred flows towards important activity).
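
Loosely, the data-model distinction might look like this (types invented for illustration, not the actual Supernode spec):

```typescript
// In an "Activity Map", every node comes straight from platform activity.
type ActivityNode = { id: string; source: "github" | "discourse" };

// In an "Annotated Map", a discretionary layer of weighted edges guides
// cred flow toward the activity that matters.
type Annotation = {
  from: string;   // e.g. a supernode like "legal-strategy"
  to: string;     // an activity node, or a manually added node
  weight: number; // how strongly cred should flow along this edge
};

interface AnnotatedMap {
  activity: ActivityNode[];  // still the bulk of the graph: raw data
  annotations: Annotation[]; // the (powerful, abusable) annotation layer
}

const example: AnnotatedMap = {
  activity: [{ id: "pr-123", source: "github" }],
  annotations: [{ from: "legal-strategy", to: "pr-123", weight: 2 }],
};
```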

Whoever gains the ability to operate this system (currently, me) gains enormous power over the output. Today, if I wanted to flow all of the Cred to one person, I would struggle to do so. With corrupted Supernodes, it would be easy.

However, I think we really need these capabilities for the system to function in the medium term. The biggest issue is that the raw activity map will be very susceptible to gaming. Right now we’re in PvE mode – everyone is working together – so the activity map does alright. However, if used at 10-100x larger scale, it would absolutely break down under floods of low-effort, high-activity “contributions”, and we would be presented with an unwinnable game of cat and mouse.

Also, even putting aside the fact that activity cred can get lol’d by gamers, it has other major deficiencies: specifically, it totally fails to recognize some categories of work. I think the best example of this is legal work and sensitive strategic planning: @ianjdarrow has been a great help to me on both of these fronts, but because the conversations are 1:1 chats, they don’t produce legible activity. And if SourceCred fails to recognize some contributors, it’s both a moral failing (unfair!) and a utilitarian failing (incentivizing high value contributions is important!).


I think a useful way to orient this discussion is to focus less on what powers exist, and more on the regulatory process that governs the use of those powers. The U.S. Constitution makes it quite hard to ratify amendments – you need three-fourths of the states to approve one. We can imagine an alternate amendment process that only requires political consensus from the three branches of government, and it’s easy to imagine such a system collapsing into overt oligarchy or dictatorship.

The question for the Supernode system then is: how should it be regulated? I don’t know the right long-term answer. For now, I’m going to regulate it directly in my role as TBD. I expect that after we’ve spent a few months using it, we’ll have some clever ideas about how best to constrain its use.


Valid points, but here’s how I think about this: there are two ways for a system to respond to all possible states – first, all states can be handled from “within” the system, or second, there is one state for which the system completely falls over. In both cases, all states are “handled”, in that each unique state yields a unique action, but in the latter, the system does not survive by design. I would suggest that the first system is impossible to get “right” and will fail in pernicious ways, while the second contains a built-in conception of its own obsolescence and can gracefully allow something to emerge in its place.

Put another way, you are at minimum given “one degree of freedom” for embedding values in a system, to make some attempt at including fundamental protections, contra collusion, charisma, or other failures.

No system can – or should – last forever. Everything is created with a certain understanding of the world, which is necessarily incomplete, and serves to handle variety to the best of its ability while we continue to learn about the world and develop better methods for handling variety.

Put another way, would you rather see a government slide into an irresistible dictatorship, or see a revolution which at least promises the chance of something better? Rather than attempting to handle everything with fragility, why not handle a few things robustly and accept (gratefully) that we are embedded within a larger sociotechnical cultural system which can continue to generate artifacts from without?


Yes.

Totally; however, zero-knowledge voting requires a lot of computation. Given that SourceCred is computed off-chain, maybe voting could be too? Would require research, but yeah, totally agree.

Another thing that might help here is Ethereum mixers. This only applies if A) voting happens on Ethereum, and B) voting tokens are transferable. If voting power is not transferable (for example, Cred-weighted voting), then a zero-knowledge voting app would be needed.

Doesn’t work. Whales have opinions. Strong opinions. This means that if something is localized, but then happens to get on a whale’s radar: game over.

One way around this, however, is to have independent autonomous communities. For example, if you have a network, maybe that platform has some governance that happens at the network level. There could be lots of organizations on that network, however. Those organizations would have their own reputation and governance, separate from the network’s. This way the organizations can be autonomous, and whales can’t affect them directly other than at the network level. If these organizations rely on funding from whales then there’s lots of potential influence, but if they don’t need the whales for much, then whales can focus on swimming in the ocean vs the pond.

YES! Is there a SourceCred wishlist somewhere that we can add this to?

Would be cool to explore and leverage this more.

Just today a subreddit integrated with an Aragon DAO so that members can vote and do stuff on the sub. It’s starting small, but the idea is to give the community control of its destiny and a way to engage in governance beyond just having moderators.

This is rather unfortunate. The heart of the system is that it recognizes and rewards contributions. What gets measured gets managed, so no contributions left behind! How would a protocol like SourceCred value 1:1 interactions like this tho? Would the cred just flow from you to them, or would there be some other way to value it if it adds value to the entire platform (like legal counsel might)?

I think the supernodes and manual nodes will take care of this.

For example, I can make a supernode that represents “SourceCred legal / regulatory strategy”. Then I can add manual nodes connected to this node, for example “1:1 chat with @ianjdarrow about how to launch the CredSperiment”. Depending on how we’ve configured it, this could flow a lot of cred from the “legal / regulatory” initiative to Ian. Of course, you’d need to trust me that the contribution was actually important, and not just a way of me giving cred away to my friends. Over time, we’ll need to think about how to make this system accountable.
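
A hedged sketch of that example as data, assuming (purely for illustration) that cred entering a supernode is split among its children in proportion to edge weight:

```typescript
// Invented representation of the supernode and its manual child nodes;
// node IDs and weights are made up for this example.
const edges = [
  {
    from: "sourcecred-legal-regulatory-strategy",
    to: "1:1-chat-with-ianjdarrow-re-credsperiment-launch",
    weight: 3,
  },
  {
    from: "sourcecred-legal-regulatory-strategy",
    to: "other-legal-contribution",
    weight: 1,
  },
];

// Split a supernode's cred among its children, proportional to edge weight.
function childCred(supernodeCred: number, childId: string): number {
  const total = edges.reduce((sum, e) => sum + e.weight, 0);
  const edge = edges.find((e) => e.to === childId);
  return edge ? (supernodeCred * edge.weight) / total : 0;
}

// If the supernode holds 100 cred, the 1:1 chat receives 75 of it.
console.log(
  childCred(100, "1:1-chat-with-ianjdarrow-re-credsperiment-launch")
); // 75
```

The `weight` values are exactly the discretionary lever under discussion: whoever sets them decides how much cred the 1:1 chat is worth, which is why the accountability question matters.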


Could you maybe have a system where, if someone thinks a supernode is not being correctly weighted, they can dispute it by creating an Issue around it and boosting that Issue? Then SourceCred community members would be incentivized to engage with that Issue. This could lead to the problem being explored and either discarded or resolved. If Cred-weighted voting gets enabled then there could be a vote to resolve it as well.

To take this even further, since boosting is kind of a prediction market on ideas, you could boost an Issue, then have a futarchy market open on that Issue. As people engage with the Issue to help resolve it, they could vote on which way they think the dispute will go (was the supernode being fair or not), and then as the discussion evolves there’s a tangible signal as to how it’s going. Then when the market resolves (say after 7 days or something), those who voted with the winning side get rewards from the boost, and those that didn’t might have some Cred flow to the winners.
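
One possible settlement rule for such a market (purely illustrative – the sides, stake currency, and pro-rata payout rule are all assumptions, not a spec):

```typescript
// Loose sketch of the dispute-market settlement idea: participants stake
// on "fair" vs "unfair"; when the dispute resolves, the losing side's
// stake is distributed to the winners pro rata.

type Stake = { voter: string; side: "fair" | "unfair"; amount: number };

function settle(
  stakes: Stake[],
  outcome: "fair" | "unfair"
): Map<string, number> {
  const winners = stakes.filter((s) => s.side === outcome);
  const losers = stakes.filter((s) => s.side !== outcome);
  const winPool = winners.reduce((sum, s) => sum + s.amount, 0);
  const losePool = losers.reduce((sum, s) => sum + s.amount, 0);
  const payouts = new Map<string, number>();
  if (winPool === 0) return payouts; // nobody backed the actual outcome
  for (const w of winners) {
    // Each winner recovers their stake plus a share of the losing pool.
    payouts.set(w.voter, w.amount + (losePool * w.amount) / winPool);
  }
  return payouts;
}

const stakes: Stake[] = [
  { voter: "alice", side: "fair", amount: 30 },
  { voter: "bob", side: "fair", amount: 10 },
  { voter: "carol", side: "unfair", amount: 20 },
];
console.log(settle(stakes, "fair")); // Map { alice => 45, bob => 15 }
```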

This is an interesting idea, yeah. If the supernode is under-weighted, then people can directly boost the supernode, which will implicitly fix the problem (the supernode is now worth more cred, because it was boosted). The new boosters will then be incentivized to get new edges added to the supernode, so the supernode receives more cred from its dependencies.

It’s not as clear how to deal with a supernode with highly inflated value. Should we have some sort of “anti-boost” or “short-sell” mechanism? Or we could have something like cred defenders that make a case to the community that a node’s cred should be slashed, and get rewarded based on how much cred they slash?


I am so intrigued…

From a UI standpoint this would require hacking on Discourse right?

From a community engagement perspective this might have a few consequences:

  • Much more thoughtful and carefully worded posts. This might inspire people to really work hard to add value to their content. It might also prevent people from chasing mirages as there would be short-sellers to temper the market.
  • It might also create fear and make people shy away from posting anything controversial.
  • Awesome tool for Cred defenders.

I’m really curious about exploring this, but I think initially the goal is to promote engagement as much as possible. We already have (in my opinion) some of the highest quality discussions anywhere on the internet. What we need is more engagement and growth. As such it might make sense to table anti-boosting mechanisms for now (as much as I really want to explore them more).

This is more along the lines of what I was thinking. Issues might be a new type of thing along with initiatives and artifacts. Issues would be places for discussion around an idea or problem. You could boost the Issue just like anything else. Then the community can discuss. If a decision needs to be made then it could be put to a vote or something. Maybe sometimes a problem is discussed and it’s determined that a new initiative should be formed to address the problem. Or maybe sometimes an action needs to be taken.

With an Issue the idea would be that you would be boosting a topic so that people allocate their attention towards that topic. This builds off of GitHub Issues where you can create an Issue for maintainers of a repo to explore and discuss. Initiatives are then more like PRs lol

Now… how a decision gets implemented is another question. For the Supernode example, would TBD need to make the change, or would the Cred weighting champion need to be consulted, or something else entirely?