SourceCred and Colony are similar in that they both are (or contain) reputation systems. Colony is very opinionated about how reputation works, though: there are Tasks, which are individual pieces of work scoped in advance. Someone signs up to do the task, someone else manages the task, and then both parties’ reputation is modified based on the outcome. This is a really high-friction approach to building reputation–contrast it to the regular world, where your reputation is continually and implicitly changing based on all your interactions with others, not just formalized tasks in a particular framework.
You’re right that Colony’s reputation is task-driven, but I do think there are benefits to gating reputation behind a needs-driven, semi-adversarial peer-review process. Since reputation is driven by tasks, and tasks are driven by the needs of the org, reputation becomes a (hopefully) tight proxy for the value someone contributes to the organization. As an analogy, prices capture information because of the adversarial (“PvP”) nature of their setting. So here I would hope that reputation captures information in a somewhat similar way. We frequently discuss ways to reduce the friction of the task flow, and are on the lookout for ways to incorporate automated or implicit “tasks” – but to play the devil’s advocate I would say that some friction in the measurement may help keep the signal strong. After all, Elo ratings are highly trusted because they are the outcome of high-friction, adversarial processes.
I think the risk of “PvE” approaches to reputation (i.e. deriving rep from activity on Discourse) is that they are more vulnerable to Goodhart-style failures, where people’s behavior re-orients around the metrics that drive reputation, “putting pressure” on the phenomena and adding noise to the signal. Undoubtedly PageRank-style graphical algorithms are more robust to these types of manipulation, and incorporating adversarial inputs like pull requests (which must be approved) will make them even more robust. But I think that inevitably the broader the base of inputs, the wider the attack surface, especially when the inputs are non-adversarial or non-resource-constrained. Is it really progress if more people are posting on forums simply to increase their post count?
I think it depends on the contribution model. If you’re in a waterfall model where first the org identifies a need, then someone specs that need into a task, and then someone does the task–then the model you describe could generate reputation that is a good proxy for value contributed (by the leaf contributors, but maybe not the need-identifiers and task-authors). But if you have more of a classic open source model that consists of people semi-randomly showing up and hacking on the things they care about, it’s not clear the task model will be able to capture it. Maybe you would define a task ex post facto for each contribution to price it?
“PvE” vs “PvP” approaches
I really like this framing. (See: I want SourceCred to be the world’s most popular MMORPG.) My intent with SourceCred is for the system to be flexible enough to transition between PvE and PvP as appropriate, where “PvE” === you get rewards from the system alone, whereas “PvP” === you get rewards based on explicit feedback from humans.
Take the Discourse for instance. Right now, I have it configured with a nonzero cred weight on posts and topics, which means that a priori you earn cred for writing posts, even if no one engages. That’s an OK policy for right now because no one is trying to game it. So we’re in “PvE mode” as a server.
However, since Discourse is comparatively very easy to game, I expect before long we’ll switch out of Discourse PvE. In that case, the default cred of a new post and topic will be zero; they will earn cred if they get likes from other players that have high cred. Also, we could have something like a weekly roundup of the best topics/posts, as curated by trusted community members, and every roundup would mint (say) 500 cred that flows out to those posts (and, transitively, to their authors and, transitively, to posts the authors liked).
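To make the roundup idea concrete, here’s a minimal sketch in Python of how minted cred might flow transitively: roundup → curated posts → their authors → posts those authors liked. All names and weights here (`ROUNDUP_MINT`, `AUTHOR_SHARE`, `LIKE_SHARE`) are illustrative assumptions for this post, not SourceCred’s actual parameters or API:

```python
# Hypothetical one-hop-at-a-time model of a weekly roundup minting cred.
ROUNDUP_MINT = 500.0   # cred minted by one roundup (value from the post above)
AUTHOR_SHARE = 0.5     # fraction a post passes along to its author (assumed)
LIKE_SHARE = 0.25      # fraction an author passes to posts they liked (assumed)

def distribute_roundup(curated_posts, authors_of, likes_of):
    """Flow minted cred: roundup -> posts -> authors -> liked posts."""
    cred = {}
    per_post = ROUNDUP_MINT / len(curated_posts)
    for post in curated_posts:
        # The post keeps its share, minus what flows onward to its author.
        cred[post] = cred.get(post, 0.0) + per_post * (1 - AUTHOR_SHARE)
        author = authors_of[post]
        author_cred = per_post * AUTHOR_SHARE
        liked = likes_of.get(author, [])
        # The author keeps their share, minus what flows to posts they liked.
        passed = author_cred * LIKE_SHARE if liked else 0.0
        cred[author] = cred.get(author, 0.0) + author_cred - passed
        for liked_post in liked:
            cred[liked_post] = cred.get(liked_post, 0.0) + passed / len(liked)
    return cred
```

The real algorithm is a PageRank-style fixed point over the whole graph rather than a fixed number of hops, but the conservation property is the same: all 500 minted cred ends up distributed across posts and people.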
I imagine before long we’ll configure GitHub similarly, where the base cred of a new pull request is very low, but the OKRs and feature priorities and such mint cred, which flows to any pull requests that contribute to those priorities. Of course, if someone makes an unexpected pull request that is very valuable, we can bless it with explicit cred, or it can get reactions, or we can define a new feature that it is connected to.
I think that gating reputation behind needs-driven tasks could be the first workable approach. This model is proven in centralized systems such as Upwork or Uber. A framework like Colony could decentralize it in the sense of having reputation systems per company/org, with more flexibility in scoping tasks and perhaps a more permissionless/fair way to “move up” (get more permissions, higher pay, etc.). I think this might work best for constrained, predictable tasks, as in chess (Elo ratings are effective, and an interesting example to think about). However, it also does not reward participants for their own creative solutions (which may solve existing needs in a new way), or for identifying needs the org is not aware of (or is aware of but doesn’t consider worth the cost to scope/manage). If the solution space is too constrained, it may be successful in certain contexts (and better than PvE), but will likely not attract the OSS crowd as much. Or put another way, the people who want work to be more of a collaborative game.
So much this.
In a former life I used to make ludicrously expensive frivolities for people who had so much money they had to really go out of their way to find more expensive stuff to buy. I say this not to lampoon the vacuous, egocentric, ultra-consumerism which accompanies idle wealth, but to disclose that the mindset of unadulterated capitalism is one to which I formerly subscribed. I embarked on that career for no other reason than to get that money.
At some point however, I realised that capitalism’s myopic focus on shareholder returns is the root of a huge proportion of the challenges the world faces. From consumerism-driven ecological destruction to human downgrading at the hands of the attention economy, the frenetic, escalating, zero-sum competition which drives our economies appears to me a marshmallow test failure on a planetary scale.
It seems like a really hard problem to solve though. In an increasingly globalised economy, regulations introduced to restrict free market competition in any given jurisdiction simply create competitive advantage for other less principled economies. Technology moves too fast to be responded to effectively by supranational governance, were a meaningful version of such even to prove possible.
My hope is that projects like SourceCred and Colony can prove a real alternative to shareholder capitalism, that competes with it on its own terms: rational self interest. Except this time, everyone gets to participate—not only financiers optimising for profit at any cost. Maybe that will help contribute, even in a small way, to organisations and companies adopting more holistic and utilitarian economic modalities.
Sounds interesting. Is there a technical writeup somewhere?
Domains are what you’re thinking of. They can be departments, or teams, projects—whatever you like really. It’s just a structure for moving funds around within an organisation. When you earn reputation you earn it for both the domain in which the work was done and the skills/tags associated with the work. In concert, we feel that covers both expertise being demonstrated and the context, in most contexts we’ve been able to think of at least (e.g. #solidity, #dev, and #ethereum tags on some work in a
That’s a fair comment based on the way we wrote the white paper. However, it’s not actually accurate. We really made a rod for our own backs with some of the terminology we used and the specific use cases we outlined.
We were concerned that writing the white paper too generally would make it seem nebulous, so we opted for a more specific approach. However, we really intended it to capture an explanation of one kind of maximally trustless colony you could build with Colony, rather than to prescribe a very rigid and opinionated structure. The impression that Colony mandates such a structure couldn’t be further from the truth.
“Tasks” specifically has been a real cause of confusion. We really only thought of those as the mechanism by which funds exit a colony, rather than as some piece of work fully defined up front, as the term is commonly understood.
We wrote a blogpost a year or so ago discussing this:
Since then, the protocol has generalised further, as will be elucidated in an updated white paper we’re currently working on. TL;DR though is, yeah, of course people aren’t going to meticulously detail everything they do up front. That would really suck. As it stands in “the real world” there are all sorts of ways people get compensated, and our goal with Colony is to make it easy for organisations to support whatever compensation/payment structures they need. Covering the whole possibility space will take time of course, but gotta start somewhere!
We’re not doing any of the fancy computational approaches to valuing contributions that SourceCred appears to be exploring, but if that’s something that can control an Ethereum address, you could, if you so desired, easily make it the mechanism which controls payment creation and issuance in a colony. That would be really cool to see.
This is really interesting. I’d love to hear more about this: especially with regard to how the data provision, computation, and analysis/valuation of complex contribution patterns over time are provided such that DAO members don’t need to trust those providing those data. Are you using an off-chain mechanism a la Truebit?
Also interpreting how much something helped much later seems really difficult. How do you plan to identify the extent to which any particular contribution contributed to value creation ex post facto? I feel like most outcomes are a result of multivariate inputs and a healthy dose of randomness. This is one of my concerns about Futarchy as a governance mechanism—I find it tough to see how one can make good prospective decisions by rewarding correlation with positive outcomes without clear evidence of causation.
We looked into PageRank-style approaches quite a bit in the early days of our thinking about Colony’s reputation system. However, we found ourselves mired in Sybil vulnerabilities which seemed intractable in the economically incentivised and computationally constrained context of smart contracts. How have you thwarted Sybil attacks?
@Jack! Great to have you here!
Funnily enough, the same was true for me. At one point I just wanted to be an uber-rich financier and planned to be a hedge fund manager. While misguided in retrospect, this period left me with a lot of knowledge of finance and economics, which I appreciate.
The Star Wars mythology is all about people being corrupted from the light side to the dark side. But it seems to me that for ambitious young people, getting corrupted to the dark side is a default state, but some people later switch to the light.
I believe that these systems will outcompete capitalism. I think open-source will be a big leverage point: I think it’s a fundamentally more productive economic paradigm in the digital age, and the existing shareholder-corporate economic model doesn’t know how to metabolize it. To continue my Star Wars metaphor, open-source is like an untouchable hideout for the Rebel Alliance that the capitalist death star can’t instantaneously co-opt and corrupt. In contrast to proprietary projects like WhatsApp, which may start out idealist, but can be taken over by the capitalist system with a single shot from Facebook’s death ray.
To use a more historical metaphor: I see capitalism versus open-source-post-capitalism as kind of like feudal aristocracy vs the democratic-capitalist state. The aristocracy had all the power and all holdings that previously mattered (land), but the democratic-capitalists were much better at deploying the new resources and technology, so they won. (To our benefit; for all the faults of capitalism, I prefer it to a feudal aristocracy.)
There are a few different resources, at varying degrees of staleness. I would recommend A Gentle Introduction to Cred. The cred for this forum is also an interesting test case to dive into. I have a formal writeup of the algorithm but it’s a few iterations out of date.
Most of the resources linked describe “legacy mode” cred, which didn’t split up scores based on time. You can read a description of how timeline cred works here.
My sense is that getting a decentralized reputation/incentive metric is a really hard problem, and to make meaningful progress, we need a group that is focused entirely on this one issue in particular. If we succeed, we’ll have a protocol that we can then plug in to all of the exciting projects building things like on-chain governance systems and smart contract infrastructure, both of which are out of SourceCred’s wheelhouse.
Getting everything 100% trustless on-chain is not a current goal for SourceCred. Rather, SourceCred has a trust-but-verify model; anyone in the community can compute the cred scores, and validate that they’re getting recorded on chain accurately.
In the future, when on-chain scalability is improved by many orders of magnitude, and activity currently on centralized platforms (GitHub) has moved onto trustless decentralized platforms (Radicle?/Truebit?/IPFS?), it may make sense to create a trustless SourceCred. But it’s not needed to make progress on the core problem we’re tackling.
A great question. SourceCred has a framework for this: if a new contribution is created today, we can find all the past contributions that enabled it, and create edges from the new contribution to those past contributions. Then the value associated with any new contributions (which are relatively easier to explicitly value, e.g. through “boosting”) will pay out to all the past work, “discovering” its value.
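A minimal sketch of this backward flow, assuming the contribution graph is a DAG (new contributions only reference past work, so there are no cycles). The function name and `PASS_FRACTION` are hypothetical illustrations, not SourceCred’s actual mechanism:

```python
# Hypothetical sketch: when a new contribution is explicitly valued
# (e.g. via "boosting"), a share of that value flows backward along
# "depends-on" edges to the past work that enabled it.
PASS_FRACTION = 0.5  # share passed upstream at each hop (assumed)

def flow_value(boosted, value, depends_on):
    """depends_on maps a contribution to the past contributions it builds on."""
    payout = {}
    frontier = [(boosted, value)]
    while frontier:
        node, v = frontier.pop()
        deps = depends_on.get(node, [])
        # Pass a share upstream if this work has dependencies; keep the rest.
        passed = v * PASS_FRACTION if deps else 0.0
        payout[node] = payout.get(node, 0.0) + v - passed
        for dep in deps:
            frontier.append((dep, passed / len(deps)))
    return payout
```

For example, boosting a new feature that was enabled by an old refactor splits the value between them, “discovering” the refactor’s value after the fact. The total paid out always equals the value minted, so past work is rewarded without inflating the supply.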
A key question is how effectively we will find these edges. My intention is to make a sort of “curation/assessment” game within SourceCred, where individuals can use their “mana” to propose new edges in the graph, and are rewarded with cred if that edge is accepted. So the game has at least two kinds of players: contributors who do valuable work, and curators who maintain the cred graph to ensure contributors are rewarded fairly.
I’m still working out the rules and mechanics for this game. You can read the intro here and I’d love your thoughts and contributions!
This goes back to SourceCred not being trustless. Every community will have moderators (e.g. community leaders) who are empowered to protect the integrity of cred from attacks, Sybil attacks being a particularly obvious one. The system is highly transparent, so everyone can watch the watchmen, call out moderators who are misbehaving, and fork the cred if needed.
In the future where this is all more mature, we’ll have more formal systems, e.g. cred-weighted election of moderators, and the ability to recall or appeal a moderator decision.
I feel like it might well be possible to do this trustlessly using a similar approach to Colony’s off-chain reputation mining. I will think on it, read the other stuff you’ve pointed me to, and get back to you. I’m really excited by any mechanism for allocating resources and authority which isn’t just voting on every damn thing, but for it to be really applicable in a DAO I feel like it needs to be ~trustless. It would be cool to figure that out.
Assuming you want data from GitHub, it can’t be trustless, because you need to trust GitHub.
If all the data is coming from trustless / verifiable sources, maybe (probably?) it could be done. But losing out on the real data of people collaborating is a big cost.
I may be naive, but my intuition is that actually, every interaction is already voting on every damn thing. My reply to this, for instance, is voting for the ideas in your post, as well as a host of other things, many of which are subconscious even, such as the importance of your position in Colony, Colony generally, etc., and now real cred that will translate (even on a micro scale for now) into actual dollars in the near future. What if we can just capture all that organic voting information without perverting or corrupting it?
As I think about this a little more: if the curation game is successful, maybe we could make a more trustless version of SourceCred. You would lose out on data resolution (i.e. you don’t have every individual post or comment contributing cred). However, if we get the incentives for the curation game right… that game could happen trustlessly.
Yup. Already breaking out my replies into multiple comments to increase the likelihood of getting more likes lol
+1 to this!
Yeah, so… originally I was on the same page with this, but then a week or two ago the results of the original Discourse SourceCred stats were revealed and I was not even on the list, despite Discourse telling me I was new user of the month or something. This was disappointing. I wasn’t contributing to get points, but then seeing that other people got points and I didn’t made me want to get points. Now I’m breaking my replies into multiple comments to create the possibility of getting more likes. I could also like other people’s stuff less, or just create new threads rather than replying to the threads that exist. Those last two options seem like they would degrade the contributor experience to the point of being counterproductive, but I have noticed that I’m starting to move towards actions and preferences that align with the way the game is designed. I expect this to only increase, and it should, because how do we know if it’s good or not unless we actually test it!
That would be great
- recognizes contributors in a concrete visible way
- makes it easier for people to catch up and see what’s going on in the community
Agreed. It should be open and feel like a collaborative game vs a bureaucratic corporate or political structure
If it makes you feel better, if you take a look at the latest scores, you have the fifth-highest cred on the Discourse.
This kind of strategy is really in co-evolution with the norms of the community using cred; if people get tired of seeing lots of tiny posts, they may withhold likes from them. If the style of post is basically “stream of consciousness split into many small posts”, I think it would add a lot of extra noise, and I would personally be less likely to like it. On the other hand, if the style is “distinct posts are making distinct points”, it could be beneficial to have multiple comments (easier to link to, etc).
This strategy might work, but it’s def on my “to-fix” list, it’s a big bug in SourceCred. So I ask you not to use that one. It also makes it less fun for folks.
At the moment, you don’t actually get much more cred from a topic than a post. However, I think a well-written topic is easier to discover, and easier to reference. So, the well-written topic might get more cred down the line from more references and engagement. IMO, that’s actually a good thing.
Thanks for your responses! It’s silly, but I do like seeing that I’m on the cred board lol
Agree that gaming the system in stupid ways that degrades UX is a suboptimal lose-lose plan. Ultimately though, payouts on any given plan are determined by the culture of the community and what they choose to support. In addition to improving the scoring system to weed out bugs, it might also be fruitful to think about how to establish healthy positive-sum cultural norms for a community. I think this group is doing a great job with that so far, but as the system scales it might become increasingly important.
Same. I’ve seen some talk about the negative dynamics that leaderboards can introduce. But if done right I think visualizations like the one we have could be a great way to visualize/negotiate. Video games seem to do okay:)
So am I. Agree it’s a good thing. That self awareness should help us as we play the game and give feedback.
Have also been thinking about how generally, more information/communication is good. Lacking human interaction, these virtual spaces are often actually low information environments. At least compared to what we’re used to processing face-to-face. And that can lead to some unwanted dynamics (paranoia, group think, assuming bad intentions when they aren’t there, etc.). Especially if money is on the line and stakes are higher. That communication is also generating valuable data for the graph. I keep finding myself thinking of the new eye logo.
Not in a bad, Orwellian way. More like, I’m being seen. By other people in the project. Plenty of ways this could go Orwellian, don’t get me wrong. But those have more to do with what we do with the data, I think.
One thing I do think we should look out for, which I’ve noticed in myself in similar environments, is the false consensus effect. For instance, if someone posts an idea. Perhaps a slightly controversial idea, as it suggests a change to an existing process. You see that nobody else has replied. You’re more likely to think that everyone feels the same way, and don’t want to go against this imagined consensus. It’s kind of like ‘avalanche consensus’ algorithms in blockchains. This effect will also be more pronounced in environments with hidden power structures (most DAOs) and money on the line. Compare this to an IRL situation, where someone suggests something in a meeting. Even if nobody replies, you’re likely to at least share a glance (eye) or IM with someone in the room you trust. Or talk about it over beers later. Because you have developed trust face-to-face beforehand. Such trust is harder in political online environments.
This is true. The number of snafus due to miscommunication and assumptions is unfortunate. Making data more transparent and providing more reference points to inform viewpoints beyond just words is a huge step towards improving that situation.
Dystopia is here, it’s just evenly distributed. As I mentioned in a long rant, what’s essential is that the data is accurate and communities/individuals can verify and govern that data appropriately.
When this happens I assume that no one cares. This assumes that “no replies” also includes emojis and likes. I figure that if people support something they’ll say so or give it a thumbs up, if people don’t like something they’ll speak up, and if they don’t care then they won’t do anything. Am I the only one who thinks this way? lol
Most of the time this happens (no response), it means no one cares. Or just that the group has “lazy consensus”. This is actually an efficient way to reach consensus on small things. Also, I actually think emojis, thumbs-ups, etc., are fairly rich signals. At least they are to me when interpreting things in a DAO. But when someone makes a controversial statement, even accidentally (because they didn’t know a subject was taboo, for instance), I have also observed silence that I interpret as nobody wanting to stick their neck out. And as time passes, and you know other people have seen it and not commented (in Matrix, you can literally see avatars of who has seen the message piling up), the perceived risk of going against the imagined consensus goes up. Another reason to have pseudonymous participation.
I’ve noticed this too. That’s actually one of the original goals of having burrrata as my user profile: it was completely arbitrary and would allow me to say what I want without providing any other information. This, at least I hoped, would lead people to focus on what I was saying vs who was saying it. Now that I’ve used this profile quite a bit in various communities and established reputation, that benefit is wearing off considerably. It’s tough because you want to allow anonymous participation, but at the same time humans are social creatures that operate based on reputation and relationships. These qualities are essential to build a community, but can also lead to political battles and group think. Still not sure how to balance those lol
Have had a similar experience. I just can’t be bothered to create new pseudonymous users these days. Which I suppose is a good thing. You want some friction there. And also, yeah, you want to keep building that reputation. It’s key to all this. Why we’re obsessed with SC!
I do see other people creating new identities more often than I do. Between Matrix and Twitter, sometimes I feel a little schizo, talking to new’ish avatars that I’m sure are people I know, but I’m not sure who… I generally think it’s good though. It means someone is communicating with me in a way they didn’t feel they could with their regular identity. And knowing that person might be doing so to subvert hierarchy, real or perceived, makes it feel generally right… Though it also brings up issues around trust and managing your own reputation. In general, I think someone’s “subversive” or “shadow” self, expressed pseudonymously, is a legitimate actor in the system. And we might want to keep that in mind (if we’re in agreement) as we’re creating these models. Perhaps fodder for a separate post.