DONUTS: a case study in tokenized incentives

Yes, I was wary of posting that number, as it’s a bad proxy for activity, but it gives a general sense of a sub’s size. I definitely think a larger community could have a higher mcap, especially if its sub’s eyeballs are as valuable as r/ethtrader’s (arguably on the high side).

I see the current banner ad is going for 100,000 DONUTS/day, which at current price is ~$256/day. If that were the average, you’d be seeing ~$7,700/mo. Not bad. Have you seen demand for special memberships (badges, GIFs in comments)? I just bought another month for 5,000 DONUTS (~$12 at current price), partly to experiment and partly to give back. But aside from enthusiasts, is there organic demand for those features (admittedly I’m not so excited about them myself)?
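
Quick back-of-the-envelope check on those figures (all inputs are just the numbers quoted above, not live market data):

```python
# Sanity-check the revenue math quoted above. All values are assumptions
# taken from this post, not pulled from an exchange or the subreddit.
BANNER_PRICE_DONUTS = 100_000   # current banner ad price per day
BANNER_PRICE_USD = 256          # ~$256/day at current price

donut_usd = BANNER_PRICE_USD / BANNER_PRICE_DONUTS
print(f"implied DONUT price: ${donut_usd:.5f}")                # ~$0.00256

monthly_usd = BANNER_PRICE_USD * 30
print(f"banner revenue if sustained: ~${monthly_usd:,.0f}/mo")  # ~$7,700/mo

membership_usd = 5_000 * donut_usd
print(f"special membership (5,000 DONUTS): ~${membership_usd:.2f}")  # ~$12.80
```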

Btw, looks like my membership purchase has been hanging for a couple mins…

Presumably that’s because of high gas fees and Reddit waiting to see the payment tx go through. I don’t mind personally, but a regular web 2.0 user might be worried by that.
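
Roughly what that wait looks like from the buyer’s side, assuming a standard web3.py flow (the RPC endpoint and tx hash below are placeholders; I have no visibility into how Reddit actually confirms the payment):

```python
from web3 import Web3

# Placeholder RPC endpoint and tx hash -- purely illustrative.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))
tx_hash = "0x..."  # hash of the membership payment transaction

# Block until the payment tx is mined (or we time out). During a gas spike,
# this is exactly the window where a web 2.0 user just sees a spinner.
receipt = w3.eth.wait_for_transaction_receipt(tx_hash, timeout=300)
print("confirmed in block", receipt.blockNumber, "status", receipt.status)
```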

We actually implemented something similar for Maker. They were seeing brigading on applications for new collateral types (i.e. a handful of new accounts would be created that only liked content supporting their coin). Obviously they didn’t want to pay shills for this, so we added a feature that changes the amount of Cred a like mints (creates) based on Trust levels in Discourse.

This likely isn’t needed as much for other SourceCred plugins, as the Cred score of the person liking, commenting, or otherwise interacting with a contribution influences how much Cred flows to it, similar to how a high-ranking webpage on Google confers more PageRank on the pages it links to. However, we do mint an equal amount of Cred on every like in Discourse, as we found that minting Cred on likes instead of on raw activity (posts/comments) increased the quality of the scores (review is a good signal).
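
As a rough illustration of the Maker-style tweak (the weights and names below are made up for this sketch, not SourceCred’s actual config or API):

```python
# Hypothetical weights: how much Cred a single like mints, keyed by the
# liker's Discourse trust level. The numbers are illustrative only -- the
# real values were tuned for the Maker forum.
CRED_PER_LIKE_BY_TRUST_LEVEL = {
    0: 0.0,   # brand-new accounts mint nothing, which blunts brigading
    1: 0.5,
    2: 1.0,
    3: 2.0,
    4: 4.0,   # staff / long-standing members
}

def cred_minted_by_like(trust_level: int) -> float:
    """Cred created when a user at `trust_level` likes a post."""
    return CRED_PER_LIKE_BY_TRUST_LEVEL.get(trust_level, 0.0)

# A burst of likes from freshly created (trust level 0) accounts mints no
# Cred, while the same likes from established members still carry weight.
print(cred_minted_by_like(0))  # 0.0
print(cred_minted_by_like(3))  # 2.0
```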

I’m personally wary of hard thresholds, as they can be hard to set properly, and are potentially exclusionary and amenable to collusion attacks (i.e. “let’s just set the threshold high enough that we can shoot down any competing voices before they have real power”). That said, it still seems unclear how to balance this with the legitimate need to create and maintain healthy ingroup/outgroup boundaries. Perhaps it’s just human nature and binaries will win :man_shrugging:

I have my eye on tools for detecting manipulation. Since SourceCred is a graph, there are general tools that address this (e.g. it’s relatively easy to do data viz to detect dramatic changes in the shape of the graph, then investigate). How to distinguish malicious gaming from healthy gaming (and it is a fun game), and deciding how bad actors are punished, raise yet more interesting open issues around decentralized governance, however. :/
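
For example (a toy sketch with networkx, nothing SourceCred-specific), comparing the degree distributions of two snapshots of the contribution graph is often enough to flag a sudden, suspicious change in shape worth investigating:

```python
import networkx as nx
from collections import Counter

def degree_histogram(g: nx.Graph) -> Counter:
    """Count how many nodes have each degree."""
    return Counter(dict(g.degree()).values())

def shape_change(before: nx.Graph, after: nx.Graph) -> float:
    """Crude 'did the graph's shape change?' score: total variation
    distance between the two normalized degree distributions."""
    h1, h2 = degree_histogram(before), degree_histogram(after)
    n1, n2 = before.number_of_nodes() or 1, after.number_of_nodes() or 1
    degrees = set(h1) | set(h2)
    return 0.5 * sum(abs(h1[d] / n1 - h2[d] / n2) for d in degrees)

# Toy example: a sybil-style burst of new accounts all liking one post
# shows up as a spike in degree-1 nodes (plus one high-degree hub).
before = nx.gnm_random_graph(200, 400, seed=1)
after = before.copy()
after.add_edges_from((f"sybil_{i}", 0) for i in range(50))
print(f"shape change score: {shape_change(before, after):.3f}")
```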
