Dogfooding SourceCred via OpenCollective

This quarter, I’d like to start dogfooding SourceCred on SourceCred itself. That is to say, I want to pay SC contributors based on their cred scores. This is the best way for us to get lived experience on how SourceCred really performs in practice and under pressure. We’re the best equipped to deal with the issues that come up, and we’ll learn so much from the experiment. So let’s do it!

Now that we have timeline cred, we have the technical prerequisites on the SourceCred side to do this. So there are two questions: funding and implementation. On funding, happily @Protocol Labs is willing to chip in, and I’m also hoping to find other supporters of the project who will help finance the experiment. On implementation, we need a system that:

  • makes it easy for people to contribute financially
  • makes it easy for people to get paid
  • makes it easy for project maintainers to set a cred-based distribution
  • makes it easy for project backers and contributors to have transparency and see that the funds were allocated appropriately

Initially, I was planning to use crypto (see discussion here). But the truth is that crypto would actually be a PITA in almost every dimension. (IMO, the breakthrough feature of crypto is the ability to create “incentive focal points” via token generation; using it for actual payments is still super frustrating.) I think we should go with OpenCollective instead: they are interested in working with us, already have the right feature set, and are well-aligned with us. As an added bonus, lots of other OS projects are there, so if this experiment works well, it will be easy for more people to adopt this approach.

So, as we decide how to run this experiment, we should both view it as a great dogfooding experience for SourceCred, and as a potential template for other collectives to start cred-based payments.

OpenCollective seems to operate on month-long reporting intervals, so I propose the following procedure for our experiment:

  • Every month, the collective has revenues R. The project chooses a payout ratio p, which determines what fraction of revenues will get paid out via cred, and what fraction will accumulate in a treasury or be disbursed through non-cred mechanisms. For the SC experiment, set p=1.0. Then each month a total of R*p will be disbursed.
  • SourceCred calculates cred on a weekly basis, aligned to calendar weeks. At the end of the month, we calculate monthly cred for each contributor as the sum of their weekly scores over the weeks in that month.
  • The project maintainer(s) (in SourceCred’s case, me) review the cred distribution. They may change weights (e.g. increasing/decreasing weights on individual nodes) to subjectively improve the quality of the distribution; if so, they will publish the new weights. This produces the “canonical cred distribution” for the month.
  • If we haven’t yet added good in-band filtering to remove attackers/spammers, they may also exclude users from the cred distribution, by setting their cred to 0 in the distribution. Maintainers shouldn’t make arbitrary out-of-band changes to the distribution; that should happen through the weights. Letting maintainers zero out some users’ scores is just a fallback to make sure that egregious attacks won’t be successful.
  • Then, supposing that D is the modified cred distribution and c is a particular contributor, each contributor’s monthly payout is D[c] / ∑D * R * p. (I.e. we split the disbursement proportionally to cred; see the sketch after this list.)
  • Based on feedback from OpenCollective, we will define a minimum payout amount (e.g. $5), as sending smaller payments is likely to be uneconomical. As a long-term solution, we should keep track of contributors’ unpaid payouts, so they can get paid once they are owed at least the minimum payout amount. For the purposes of the experiment, we may just filter out tiny payments and re-normalize the payments for the month.
  • The maintainer (me) will then post a public thread with the payout totals, the weights used, and a link to an SC instance showing the cred for those weights. People in the community will have the chance to review the weights, discuss whether certain contributions were under/over-valued, and suggest changes.
  • After a week (or when consensus is reached), the payment amounts will be finalized.
  • We’ll then send those payments to the contributors via OpenCollective. (I’ll need to check with OpenCollective to find out what APIs are most applicable; I suspect every contributor who wants to get paid will need to make an account.)
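To make the arithmetic concrete, here is a minimal sketch of the monthly payout computation described in the list above. It is not the actual SourceCred implementation; the names (weeklyCred, MIN_PAYOUT, excluded) and the $5 minimum are illustrative.

```typescript
// Sketch only: monthly payouts from weekly cred scores, a payout ratio, and a
// minimum payment size. Not SourceCred's actual code; names are illustrative.

type WeeklyCred = Map<string, number[]>; // contributor -> cred score for each week of the month

const MIN_PAYOUT = 5; // assumed minimum payment (e.g. $5), per the OpenCollective feedback

function monthlyPayouts(
  weeklyCred: WeeklyCred,
  revenue: number, // R: the collective's revenue for the month
  payoutRatio: number, // p: fraction of revenue disbursed via cred (1.0 for this experiment)
  excluded: Set<string> = new Set() // users zeroed out by the maintainer (anti-abuse fallback)
): Map<string, number> {
  const toDisburse = revenue * payoutRatio;

  // Monthly cred is the sum of weekly scores; excluded users are set to zero.
  const monthlyCred = new Map<string, number>();
  for (const [user, weeks] of weeklyCred) {
    monthlyCred.set(user, excluded.has(user) ? 0 : weeks.reduce((a, b) => a + b, 0));
  }
  const totalCred = [...monthlyCred.values()].reduce((a, b) => a + b, 0);

  // Raw payout for contributor c: D[c] / sum(D) * R * p.
  const raw = [...monthlyCred].map(
    ([user, cred]): [string, number] => [user, (cred / totalCred) * toDisburse]
  );

  // Experiment-mode handling of tiny payments: drop anything below the minimum,
  // then re-normalize so the full R * p is still disbursed this month.
  const kept = raw.filter(([, amount]) => amount >= MIN_PAYOUT);
  const keptTotal = kept.reduce((sum, [, amount]) => sum + amount, 0);
  return new Map(
    kept.map(([user, amount]): [string, number] => [user, (amount / keptTotal) * toDisburse])
  );
}
```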

The most controversial point here will be that the maintainer has a free hand in choosing the cred weights, and there’s a conflict of interest because they can choose weights so that they get paid more. A knee-jerk response would be to exclude the maintainer from payments, but that would be a mistake: in many cases, paying the maintainers is the most important part of making OS sustainable, so a payment mechanism that excludes maintainers is flawed from the start. Instead, we will need better governance and oversight mechanisms. In the long run, the maintainer shouldn’t be unilaterally changing the parameters; all stakeholders (contributors, funders, maintainers, community) should be involved in that process. But trusting the maintainers is a reasonable approach at the start.

I propose we scope this as a 3-month experiment, and communicate that to potential funders. At the end of the 3 months, we’ll evaluate how well this worked along many dimensions:

  • What pain points did we encounter within SourceCred itself? Did we feel that the cred scores were doing a good job of reflecting value creation?
  • How did the experiment change the overall level of engagement with the project? (Impossible to know for sure, but we can get a sense.)
  • How did the experiment change the behavior of project participants? Which “cred gaming” behaviors were helpful / favorable? Which ones were bad?
  • How did trusting the maintainer (i.e. me) work out? Do we have confidence in my decisions? Was there good transparency in those decisions, and would it have been easy to tell if I were choosing the weights in a self-serving way?
  • How well did OpenCollective work as a platform for this experiment?
  • Do we want to continue funding SC itself in this way?
  • Would we recommend that other projects try the same approach?

As a bit of skin in the game, I’m planning to help personally fund this experiment (independent of contributions from PL or others). If people are concerned about my conflict of interest in getting paid from it, I can commit to providing enough funding that my net cash flow from the experiment is less than or equal to zero, although I think it actually makes a better, “realer” experiment if we don’t paper over all the conflict of interest issues. :slight_smile:

As for timeline, I’d love to have August be the first month of the experiment, which means we’d have our first payout at the beginning of September. On the SourceCred side, the main thing I’d want to have before we go live is the Discourse plugin; I want to make sure that community involvement is being rewarded alongside technical work from the start. I’m confident this is achievable before end of month. The other side would be working with OpenCollective to ensure that we can ship it from a technical perspective, and lining up at least some initial funding for the first month (e.g. from Protocol Labs).

I’d love any thoughts and feedback on this proposal!


I think this is a super exciting initiative. I especially like the 3-month trial. Quick iteration cycles, in which we reset between trials, set appropriate expectations for the experiment and should help get us to what can be a longer-term solution fastest.

Given this, I’d heavily optimize the funding so it can be used for at least 3 cycles. The set of evaluation dimensions you lay out seems good, but I think we might be missing something along the lines of “did funding actually help project development?” You mention engagement in your second bullet, but I wonder if there’s a non-overoptimized way of digging a bit deeper into what funding should achieve? Perhaps engagement + quality? Perhaps some objective metric could be used (a bad example: PR count). Obviously, each metric has its downsides, but I do think the gamification will be more effective if contributors have an optimization function in mind (and also the knowledge that rewards are not just programmatically assigned but that there is “intelligent” oversight from maintainers).

A couple additional thoughts:

  • I do worry about maintainer conflicts of interest over time, and the dynamics this could create in terms of e.g. zombie projects kept alive (but not vibrant) for monetary gain. “Fork it” is a fine response, but can we do better?
    • One idea there would be to set maintainer rewards to some fixed portion of cred. This at least removes an incentive to overly play with the weights with short-term gain in mind. Ideally, this would mean the maintainer is incentivized to popularize the project (i.e. make it useful) to increase the top-line rewards, which aligns them with actually rewarding the most useful contributors (since it indirectly benefits them). Food for thought… (A rough sketch of this idea follows the list.)
  • I think this could really be a watershed moment for SC (and so for OSS, maybe I’m naive), but in that sense, what are simple ways this experiment (and SC itself) can be used to increase the project’s viral coefficient? Obviously this helps SC longer-term anyway, but I do think there is special potency to money-based gamification, and simple dashboards/UIs/metrics could make the experiment that much more potent.
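As an illustration of the fixed-portion idea above: reserve a fixed maintainer share of the monthly disbursement, then split the remainder by cred. The 10% default and all names below are made up, not an existing SourceCred mechanism.

```typescript
// Illustrative only: a fixed maintainer share, with the rest split by cred.
// The 10% default and all names are hypothetical.
function payoutsWithFixedMaintainerShare(
  cred: Map<string, number>, // monthly cred per contributor (maintainer not included here)
  maintainer: string,
  toDisburse: number, // R * p for the month
  maintainerShare: number = 0.1
): Map<string, number> {
  const payouts = new Map<string, number>([[maintainer, toDisburse * maintainerShare]]);
  const rest = toDisburse * (1 - maintainerShare);
  const totalCred = [...cred.values()].reduce((a, b) => a + b, 0);
  for (const [user, c] of cred) {
    payouts.set(user, (c / totalCred) * rest);
  }
  return payouts;
}
```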

What do you mean by this (optimizing the funding so it can be used for at least 3 cycles)?

In its current iteration, SourceCred gives the maintainer(s) the ability to manually assign weights to pulls, issues, etc. My intention is to make good use of this tool (along with public justification). So the rewards will have a fair degree of intelligent oversight.

This is a really interesting idea. I feel like there are probably some interesting real-world analogues? I brought it up on the SC office hours call, and I believe @mzargham had some thoughts in response to share.

I agree that this could be a watershed. I’m not sure that I want to work too hard to encourage fast virality, though; my mantra for growth lately has been “slow and steady” (aka deep before wide). Running this experiment on SC will kind of be “easy mode” because we’re a small community whose members are all bought into the idea of SC, have a lot of context, and want to make it work. If SC grows by 100x, it amps up the difficulty in a way that we may not be prepared for.

So, my inclination is to let the experiment be what it is, and let SC interest grow organically by word of mouth, as it “earns” the organic growth through being a genuinely useful and promising technology, rather than by hacking on the virality coefficient. Once we have a basis for more grounded confidence in the product/algorithm itself, we can start experimenting with creating positive feedback mechanisms around project growth. (That said, we will build tools like dashboards and metrics because we’ll need and want them, and other early adopters will need and want them. And we should/will publicize the experiment, esp. once it’s done, and post about how it went and whether we encourage others to follow. I’m more saying, let’s not rush into things like creating cred incentives for people to recruit more people into SourceCred before we are ready for the consequences.)


Makes complete sense on deep before wide, and I agree with the approach. On the intelligent oversight point, we’re in full alignment; I was saying that, even with that oversight, gamifying the optimization function should be effective.

What I meant on the 3 cycles was purely tactical: I would emphasize this as an iterated experiment, where we plan for funding over 3 (or more) 3-month trials, so we can learn whether we know how to rapidly make minor course corrections based on the “staked-SC” data yielded by each “phase” of the experiment.

Hi! Here to confirm that Open Collective is indeed interested in working with SC. Happy to help set up & work through any issues.

OpenCollective seems to operate on month-long reporting intervals.

This is accurate, but you can export your current transaction ledger (or fetch it from the API in real time), so you can decide the interval you want for the payouts. For example, the Open Source Collective (the Fiscal Host of most open source projects) pays expenses every Friday.
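As a rough illustration of choosing your own payout interval from an exported ledger: sum the incoming transactions that fall in whatever window you pick. The Transaction shape and field names below are hypothetical, not the actual Open Collective export format or API schema.

```typescript
// Sketch only: compute revenue R over an arbitrary window from an exported ledger.
// The Transaction shape is assumed, not Open Collective's real schema.
interface Transaction {
  createdAt: string; // ISO date
  amount: number; // positive for incoming funds, in the collective's currency
}

function revenueForWindow(ledger: Transaction[], start: Date, end: Date): number {
  return ledger
    .filter((tx) => {
      const when = new Date(tx.createdAt).getTime();
      return when >= start.getTime() && when < end.getTime() && tx.amount > 0;
    })
    .reduce((sum, tx) => sum + tx.amount, 0);
}

// e.g. a weekly payout window instead of a monthly one:
// const r = revenueForWindow(ledger, new Date("2019-08-02"), new Date("2019-08-09"));
```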

Right now, this is doable but painful since you can’t filter expenses per user. There’s an open issue to improve this, but we don’t have an ETA atm. It might be taken as a bounty in the OSCA Lagos meetup this weekend.

You will soon be able to do this on the collective as well; in the future, threads will be linked to expenses or donations that can be executed once a decision is made.

Indeed. But we can work with you in improving our API if needed.

I think the Open Collective transparent ledger can help here. Also, in general, trust by default is the right place to start! :slight_smile:



This is very interesting! You could totally configure an Aragon DAO to accomplish the goals laid out here in a decentralized manner. If you’re already going with OpenCollective tho then totally give that a shot. I’m super super curious what a DAO model would look like for you guys tho… Happy to chat if you guys want to create a DAO based plan B and/or plan to move towards a more decentralized solution :slight_smile:

(also just happy to chat in general lol)


EDIT: Also, sorry if this post was off topic. The topic here is technically “Dogfooding via OpenCollective”, but the initial post by @decentralion pretty much described all the problems DAOs are trying to solve, so I replied. Anyways, please let me know if my post would be best elsewhere, otherwise all good.

@burrrata: Your post is welcome in this thread (though I expect before long we’ll have threads specifically about running a SourceCred DAO).

I’m going with OpenCollective for this experiment because both fundraising and payouts are easier to manage in fiat than in crypto. In the long run, I do expect that we’ll have a SourceCred DAO, and that financial supporters of SourceCred will contribute either to the OpenCollective or directly to the DAO, as suits them. (The OpenCollective funds will be managed by the DAO, or something similar.)

An important part of the long-term vision for SC is the creation of tokens (grain) that accrue to a project’s contributors and financial supporters. Setting up a SourceCred DAO will be necessary once we reach that step.


A bit of a retro on this assertion, since I was confident I’d be ready to launch this experiment by the end of the month, but as discussed in the new intro, I’m pushing the launch back by a month.

I was actually fairly well calibrated on the amount of technical work involved in launching the Discourse plugin. It’s nearly done (at least in an initial state) and could be merged by the end of the month.

However, I didn’t realize how many other things would need to go in before we can do the CredSperiment, namely:

  • integrate identity resolution, so that cred flows properly between Discourse and GitHub (a rough sketch of this follows the list)
  • build some tooling for tracking the CredSperiment, so we aren’t doing all of the accounting by hand or in some Google Doc
  • spend a little more time giving dedicated thought to the game mechanics before we launch
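On the first bullet: a rough sketch of what identity resolution means here is collapsing a contributor’s GitHub and Discourse accounts into a single identity before summing their cred. The shapes below are illustrative, not SourceCred’s actual configuration or API.

```typescript
// Illustrative only: merge per-platform accounts into one identity before summing cred.
// Not SourceCred's actual identity configuration or API.
interface Identity {
  name: string;
  aliases: string[]; // e.g. ["github/decentralion", "discourse/decentralion"]
}

function resolveCred(
  identities: Identity[],
  credByAlias: Map<string, number> // cred computed per platform account
): Map<string, number> {
  const resolved = new Map<string, number>();
  for (const identity of identities) {
    const total = identity.aliases.reduce(
      (sum, alias) => sum + (credByAlias.get(alias) ?? 0),
      0
    );
    resolved.set(identity.name, total);
  }
  return resolved;
}
```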

So, basically, I did alright estimating the timeline for specific work, and a bad job of estimating scope.
