I may be naive, but my intuition is that actually, every interaction is already voting on every damn thing. My reply to this, for instance, is voting for the ideas in your post, as well as a host of other things, many of which are subconscious even, such as the importance of your position in Colony, Colony generally, etc., and now real cred that will translate (even on a micro scale for now) into actual dollars in the near future. What if we can just capture all that organic voting information without perverting or corrupting it?
As I think about this a little more: if the curation game is successful, maybe we could make a more trustless version of SourceCred. You would lose out on data resolution (i.e. you don’t have every individual post or comment contributing cred). However, if we get the incentives for the curation game right… that game could happen trustlessly.
Yup. Already breaking out my replies into multiple comments to increase the likelihood of getting more likes lol
+1 to this!
Yeah, so… originally I was on the same page with this, but then a week or two ago the results of the original Discourse SourceCred stats were revealed and I was not even on the list, despite Discourse telling me I was new user of the month or something. This was disappointing. I wasn’t contributing to get points, but then seeing that other people got points and I didn’t made me want to get points. Now I’m breaking my replies into multiple comments to increase the chance of getting more likes. I could also like other people’s stuff less, or just create new threads rather than replying to the threads that exist. Those last two options seem like they would degrade the contributor experience to the point of being counterproductive, but I have noticed that I’m starting to move towards actions and preferences that align with the way the game is designed. I expect this to only increase, and it should, because how do we know if it’s good or not unless we actually test it!
That would be great
- recognizes contributors in a concrete visible way
- makes it easier for people to catch up and see what’s going on in the community
Agreed. It should be open and feel like a collaborative game vs a bureaucratic corporate or political structure
If it makes you feel better, if you take a look at the latest scores, you have the fifth-highest cred on the Discourse.
This kind of strategy is really in co-evolution with the norms of the community using cred; if people get tired of seeing lots of tiny posts, they may withhold likes from them. If the style of post is basically “stream of consciousness split into many small posts”, I think it would add a lot of extra noise, and I would personally be less likely to like it. On the other hand, if the style is “distinct posts are making distinct points”, it could be beneficial to have multiple comments (easier to link to, etc).
This strategy might work, but it’s def on my “to-fix” list; it’s a big bug in SourceCred. So I’d ask you not to use that one. It also makes it less fun for folks.
At the moment, you don’t actually get much more cred from a topic than a post. However, I think a well-written topic is easier to discover, and easier to reference. So, the well-written topic might get more cred down the line from more references and engagement. IMO, that’s actually a good thing.
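To make the “cred down the line from more references” idea concrete, here is a toy sketch of how a PageRank-style score rewards a well-referenced topic. This is purely illustrative (the node names, edge structure, and function are hypothetical, not SourceCred’s actual API or weights): replies flow a share of their score to whatever they reference, so a topic that accumulates references ends up with more cred than any individual post.

```python
# Toy sketch (NOT SourceCred's actual code): cred as a PageRank-style
# score over a tiny contribution graph. An edge A -> B means A
# references B, so A flows some of its cred to B.
edges = {
    "reply1": ["topic"],  # replies reference the topic they're in
    "reply2": ["topic"],
    "topic":  [],         # the topic itself references nothing here
}

def cred_scores(edges, damping=0.85, iters=50):
    nodes = list(edges)
    n = len(nodes)
    score = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for src, dsts in edges.items():
            if dsts:
                # split this node's cred among the things it references
                share = damping * score[src] / len(dsts)
                for dst in dsts:
                    new[dst] += share
            else:
                # dangling node: redistribute its cred evenly
                for node in nodes:
                    new[node] += damping * score[src] / n
        score = new
    return score

scores = cred_scores(edges)
# The referenced topic ends up with the highest score of the three.
```

The point of the sketch: at minting time a topic and a post are worth about the same, but every later reference is another inbound edge, and the iteration compounds those edges into more cred over time.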
Thanks for your responses! It’s silly, but I do like seeing that I’m on the cred board lol
Agree that gaming the system in stupid ways that degrade UX is a suboptimal lose-lose plan. Ultimately though, payouts on any given plan are determined by the culture of the community and what they choose to support. In addition to improving the scoring system to weed out bugs, it might also be fruitful to think about how to establish healthy positive-sum cultural norms for a community. I think this group is doing a great job with that so far, but as the system scales it might become increasingly important.
Same. I’ve seen some talk about the negative dynamics that leaderboards can introduce. But if done right, I think visualizations like the one we have could be a great way to visualize/negotiate. Video games seem to do okay :)
So am I. Agree it’s a good thing. That self awareness should help us as we play the game and give feedback.
Have also been thinking about how generally, more information/communication is good. Lacking human interaction, these virtual spaces are often actually low information environments. At least compared to what we’re used to processing face-to-face. And that can lead to some unwanted dynamics (paranoia, group think, assuming bad intentions when they aren’t there, etc.). Especially if money is on the line and stakes are higher. That communication is also generating valuable data for the graph. I keep finding myself thinking of the new eye logo.
Not in a bad, Orwellian way. More like, I’m being seen. By other people in the project. Plenty of ways this could go Orwellian, don’t get me wrong. But those have more to do with what we do with the data, I think.
One thing I do think we should look out for, which I’ve noticed in myself in similar environments, is the false consensus effect. For instance, if someone posts an idea. Perhaps a slightly controversial idea, as it suggests a change to an existing process. You see that nobody else has replied. You’re more likely to think that everyone feels the same way, and don’t want to go against this imagined consensus. It’s kind of like ‘avalanche consensus’ algorithms in blockchains. This effect will also be more pronounced in environments with hidden power structures (most DAOs) and money on the line. Compare this to an IRL situation, where someone suggests something in a meeting. Even if nobody replies, you’re likely to at least share a glance (eye) or IM with someone in the room you trust. Or talk about it over beers later. Because you have developed trust face-to-face beforehand. Such trust is harder in political online environments.
This is true. The number of snafus due to miscommunication and assumptions is unfortunate. Making data more transparent and providing more reference points to inform viewpoints beyond just words is a huge step towards improving that situation.
Dystopia is here, it’s just evenly distributed. As I mentioned in a long rant, what’s essential is that the data is accurate and communities/individuals can verify and govern that data appropriately.
When this happens I assume that no one cares. This assumes that “no replies” also includes emojis and likes. I figure that if people support something they’ll say so or give it a thumbs up; if people don’t like something they’ll speak up; and if they don’t care then they won’t do anything. Am I the only one who thinks this way? lol
Most of the time this happens (no response), it means no one cares. Or just that the group has “lazy consensus”. This is actually an efficient way to reach consensus on small things. Also, I actually think emojis, thumbs up, etc., are fairly rich signals. At least they are to me when interpreting things in a DAO. But when someone makes a controversial statement, even accidentally (because they didn’t know a subject was taboo, for instance), I have also observed silence that I interpret as nobody wanting to stick their neck out. And as time passes, and you know other people have seen it and not commented (in Matrix, you can literally see avatars of who has seen the message piling up), the perceived risk of going against the imagined consensus goes up. Another reason to have pseudonymous participation.
I’ve noticed this too. That’s actually one of the original reasons for using burrrata as my user profile: it was completely arbitrary and would allow me to say what I want without providing any other information. This, at least I hoped, would lead people to focus on what I was saying vs who was saying it. Now that I’ve used this profile quite a bit in various communities and established a reputation, that benefit is wearing off considerably. It’s tough because you want to allow anonymous participation, but at the same time humans are social creatures that operate based on reputation and relationships. These qualities are essential to build a community, but can also lead to political battles and groupthink. Still not sure how to balance those lol
Have had a similar experience. I just can’t be bothered to create new pseudonymous users these days. Which I suppose is a good thing. You want some friction there. And also, yeah, you want to keep building that reputation. It’s key to all this. Why we’re obsessed with SC!
I do see other people creating new identities more often than I do. Between Matrix and Twitter, sometimes I feel a little schizo, talking to new’ish avatars that I’m sure are people I know, but I’m not sure who… I generally think it’s good though. It means someone is communicating with me in a way they didn’t feel they could with their regular identity. And knowing that person might be doing so to subvert hierarchy, real or perceived, makes it feel generally right… Though it also brings up issues around trust and managing your own reputation. In general, I think someone’s “subversive” or “shadow” self, expressed pseudonymously, is a legitimate actor in the system. And we might want to keep that in mind (if we’re in agreement) as we’re creating these models. Perhaps fodder for a separate post.
How about this post?