SourceCred and DAOs: A Sketch

I may be naive, but my intuition is that actually, every interaction is already voting on every damn thing. My reply to this, for instance, is voting for the ideas in your post, as well as a host of other things, many of which are subconscious even, such as the importance of your position in Colony, Colony generally, etc., and now real cred that will translate (even on a micro scale for now) into actual dollars in the near future. What if we can just capture all that organic voting information without perverting or corrupting it?

As I think about this a little more: if the curation game is successful, maybe we could make a more trustless version of SourceCred. You would lose out on data resolution (i.e. you don't have every individual post or comment contributing cred). However, if we get the incentives for the curation game right… that game could happen trustlessly.


Yup. Already breaking out my replies into multiple comments to increase the likelihood of getting more likes lol

+1 to this!

Yeah, so… originally I was on the same page with this, but then a week or two ago the results of the original Discourse SourceCred stats were revealed and I was not even on the list, despite Discourse telling me I was new user of the month or something. This was disappointing. I wasn't contributing to get points, but then seeing that other people got points and I didn't made me want to get points. Now I'm breaking my replies into multiple comments to create the possibility of getting more likes. I could also not like other people's stuff as much, or just create new threads rather than replying to the threads that exist. Those last two options seem like they would degrade the contributor experience to the point of being counterproductive, but I have noticed that I'm starting to move towards actions and preferences that align with the way the game is designed. I expect this to only increase, and it should, because how do we know if it's good or not unless we actually test it! :slight_smile:
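To make the splitting exploit concrete, here's a toy scoring sketch. This is not SourceCred's actual algorithm (which is PageRank-based over a contribution graph); it just assumes, hypothetically, that each post mints a flat base amount of cred plus a fixed amount per like. Under any model with a per-post base, splitting one reply into several comments inflates the score.

```python
# Toy like-driven scoring model -- NOT SourceCred's real algorithm.
# Hypothetical assumption: each post mints a flat base amount of cred,
# plus a fixed amount of cred per like it receives.

BASE_CRED_PER_POST = 1.0  # hypothetical per-post minting
CRED_PER_LIKE = 2.0       # hypothetical value of one like

def cred(posts):
    """posts: a list of like-counts, one entry per post."""
    return sum(BASE_CRED_PER_POST + CRED_PER_LIKE * likes for likes in posts)

# One long reply that earns 3 likes:
single = cred([3])        # 1 + 2*3 = 7 cred

# The same content split into three comments, each earning one like:
split = cred([1, 1, 1])   # 3 * (1 + 2) = 9 cred

assert split > single  # splitting wins under this toy model
```

The point is just that the per-post base term rewards fragmentation regardless of total likes received, which matches the incentive described above.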

That would be great

  • recognizes contributors in a concrete visible way
  • makes it easier for people to catch up and see what's going on in the community

Agreed. It should be open and feel like a collaborative game vs. a bureaucratic, corporate, or political structure.

If it makes you feel better: if you take a look at the latest scores, you have the fifth-highest cred on the Discourse.

I think the issue is that SourceCred (by default) only shows summarized scores for all time, and at the time that I took the snapshot you're referencing, you'd been one of the most active participants for ~2 weeks out of the Discourse's >1 year history. If we had a "cred for this month" view you would have been in the top five there.

This kind of strategy is really in co-evolution with the norms of the community using cred; if people get tired of seeing lots of tiny posts, they may withhold likes from them. If the style of post is basically "stream of consciousness split into many small posts", I think it would add a lot of extra noise, and I would personally be less likely to like it. On the other hand, if the style is "distinct posts making distinct points", it could be beneficial to have multiple comments (easier to link to, etc.).

This strategy might work, but it's def on my "to-fix" list; it's a big bug in SourceCred. So I ask you not to use that one. :slight_smile: It also makes it less fun for folks.

At the moment, you don't actually get much more cred from a topic than from a post. However, I think a well-written topic is easier to discover and easier to reference. So the well-written topic might get more cred down the line from more references and engagement. IMO, that's actually a good thing.

Yup, agreed!


Thanks for your responses! It's silly, but I do like seeing that I'm on the cred board lol

Agree that gaming the system in stupid ways that degrade UX is a suboptimal, lose-lose plan. Ultimately though, payouts on any given plan are determined by the culture of the community and what they choose to support. In addition to improving the scoring system to weed out bugs, it might also be fruitful to think about how to establish healthy, positive-sum cultural norms for a community. I think this group is doing a great job with that so far, but as the system scales it might become increasingly important.


Same. I've seen some talk about the negative dynamics that leaderboards can introduce. But if done right, I think visualizations like the one we have could be a great way to visualize/negotiate. Video games seem to do okay :)

So am I. Agree it's a good thing. That self-awareness should help us as we play the game and give feedback.

Have also been thinking about how, generally, more information/communication is good. Lacking human interaction, these virtual spaces are often actually low-information environments. At least compared to what we're used to processing face-to-face. And that can lead to some unwanted dynamics (paranoia, groupthink, assuming bad intentions when they aren't there, etc.). Especially if money is on the line and stakes are higher. That communication is also generating valuable data for the graph. I keep finding myself thinking of the new eye logo.

(image: the new eye logo)

Not in a bad, Orwellian way. More like, I'm being seen. By other people in the project. Plenty of ways this could go Orwellian, don't get me wrong. But those have more to do with what we do with the data, I think.

One thing I do think we should look out for, which I've noticed in myself in similar environments, is the false consensus effect. For instance, say someone posts an idea. Perhaps a slightly controversial idea, as it suggests a change to an existing process. You see that nobody else has replied. You're more likely to think that everyone feels the same way, and don't want to go against this imagined consensus. It's kind of like 'avalanche consensus' algorithms in blockchains. This effect will also be more pronounced in environments with hidden power structures (most DAOs) and money on the line. Compare this to an IRL situation, where someone suggests something in a meeting. Even if nobody replies, you're likely to at least share a glance (eye) or IM with someone in the room you trust. Or talk about it over beers later. Because you have developed trust face-to-face beforehand. Such trust is harder to build in political online environments.
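The avalanche-consensus analogy can be made concrete with a toy simulation. This is a heavily simplified Snowball-style sampling loop, not the real Avalanche protocol, and all the parameters (network size, sample size `k`, supermajority threshold `alpha`) are made up for illustration: each node repeatedly polls a few random peers and flips to whatever a supermajority of the sample holds, so a slim early majority snowballs toward unanimity, much like early silence cementing a perceived consensus.

```python
import random

# Simplified Snowball-style repeated-sampling loop (the mechanism behind
# "avalanche consensus" protocols) -- an illustrative toy, not the real
# Avalanche protocol. All parameters are hypothetical.

def simulate(n_nodes=100, initial_yes=55, k=10, alpha=7, rounds=50, seed=42):
    random.seed(seed)
    # Start with a slim 55/45 split of opinions.
    opinions = [True] * initial_yes + [False] * (n_nodes - initial_yes)
    for _ in range(rounds):
        for i in range(n_nodes):
            # Each node polls k random peers...
            sample = random.sample(range(n_nodes), k)
            yes_votes = sum(opinions[j] for j in sample)
            # ...and adopts whichever opinion a supermajority of the
            # sample holds; positive feedback amplifies the majority.
            if yes_votes >= alpha:
                opinions[i] = True
            elif (k - yes_votes) >= alpha:
                opinions[i] = False
    return sum(opinions)

# A slim initial majority typically tips the whole network toward "yes".
final_yes = simulate()
```

The point of the analogy: once a small imbalance exists (a few visible non-responses), repeated sampling of the group amplifies it, which is the same dynamic as watching the "seen by" avatars pile up without comment.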


This is true. The number of snafus due to miscommunication and assumptions is unfortunate. Making data more transparent, and providing more reference points to inform viewpoints beyond just words, is a huge step toward improving that situation :slight_smile:

Dystopia is here, it's just evenly distributed. As I mentioned in a long rant, what's essential is that the data is accurate and communities/individuals can verify and govern that data appropriately.

When this happens I assume that no one cares. This assumes that "no replies" also includes emojis and likes. I figure that if people support something they'll say so or give it a thumbs up, if people don't like something they'll speak up, and if they don't care then they won't do anything. Am I the only one who thinks this way? lol

Most of the time this happens (no response), it means no one cares. Or just that the group has "lazy consensus". This is actually an efficient way to reach consensus on small things. Also, I actually think emojis, thumbs up, etc., are fairly rich signals. At least they are to me when interpreting things in a DAO. But when someone makes a controversial statement, even accidentally (because they didn't know a subject was taboo, for instance), I have also observed silence that I interpret as nobody wanting to stick their neck out. And as time passes, and you know other people have seen it and not commented (in Matrix, you can literally see avatars of who has seen the message piling up), the perceived risk of going against the imagined consensus goes up. Another reason to have pseudonymous participation.

I've noticed this too. That's actually one of the original reasons for having burrrata as my user profile: it was completely arbitrary and would allow me to say what I want without providing any other information. This, I hoped, would lead people to focus on what I was saying vs. who was saying it. Now that I've used this profile quite a bit in various communities and established a reputation, that benefit is wearing off considerably. It's tough because you want to allow anonymous participation, but at the same time humans are social creatures that operate based on reputation and relationships. These qualities are essential to build a community, but can also lead to political battles and groupthink. Still not sure how to balance those lol

Have had a similar experience. I just can't be bothered to create new pseudonymous users these days. Which I suppose is a good thing. You want some friction there. And also, yeah, you want to keep building that reputation. It's key to all this. Why we're obsessed with SC!

I do see other people creating new identities more often than I do. Between Matrix and Twitter, sometimes I feel a little schizo, talking to new'ish avatars that I'm sure are people I know, but I'm not sure who… I generally think it's good though. It means someone is communicating with me in a way they didn't feel they could with their regular identity. And knowing that person might be doing so to subvert hierarchy, real or perceived, makes it feel generally right… Though it also brings up issues around trust and managing your own reputation. In general, I think someone's "subversive" or "shadow" self, expressed pseudonymously, is a legitimate actor in the system. And we might want to keep that in mind (if we're in agreement) as we're creating these models. Perhaps fodder for a separate post.


How about this post?