Moving this over from a Discord chat.
Currently it looks like you can farm cred by spamming.
I tried reading up on the timeline cred algorithm, and from what I understand, nodes like issues, comments, PRs, and PR reviews have a base value equal to their weight. That value then flows to the other nodes they have a relationship to, such as the repository and the comment's author.
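To make that concrete, here's a minimal sketch of the model as I understand it. All names and shapes are made up for illustration (not SourceCred's actual API), and this does a single flow pass rather than the iterative PageRank-style propagation the real algorithm uses:

```ts
// Hypothetical, simplified model of cred minting and flow.
type NodeType = "ISSUE" | "COMMENT" | "PULL" | "REVIEW" | "USER" | "REPO";

interface GraphNode {
  id: string;
  type: NodeType;
  weight: number; // base value minted at this node
}

interface Edge {
  src: string; // e.g. a comment
  dst: string; // e.g. its author, or the repository
  transmittance: number; // fraction of cred flowing along this edge
}

// Each contribution node mints cred equal to its weight; a fraction of
// that then flows along its edges to connected nodes.
function flowCred(nodes: GraphNode[], edges: Edge[]): Map<string, number> {
  const cred = new Map<string, number>();
  for (const n of nodes) cred.set(n.id, n.weight);
  for (const e of edges) {
    const outflow = (cred.get(e.src) ?? 0) * e.transmittance;
    cred.set(e.dst, (cred.get(e.dst) ?? 0) + outflow);
  }
  return cred;
}
```

The important property for this discussion is that every contribution node *mints* new cred, and some of it ends up with the author.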
At small volumes of spam, this means that as long as you don't "feed the trolls" by interacting with these threads, the cred gained this way stays very low, perhaps negligibly so.
But because attackers have the means to create cred out of thin air and can have some of it flow to themselves, at scale this might be a way to boost your account. An overt attack would use this cred-creation property to gain significant cred: for example, a few accounts flood the repository with hundreds of issues and comments that all feed into each other, each issue and comment minting a small amount of new cred and accumulating it in those accounts. That in itself could be the attack, for example to earn money when cred is rewarded financially. Or it could be a stepping stone, using the inflated cred of these malicious accounts for other ways of gaming the algorithm: treat the spam accounts as expendable, expecting them to be banned, but let some of their cred flow to your real account, hoping that one escapes scrutiny.
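A back-of-envelope estimate of such an attack (all numbers here are assumptions I picked, not measured values):

```ts
// Rough estimate of cred captured by an overt spam attack.
const spamAccounts = 5;
const commentsPerAccount = 200; // "hundreds of issues and comments"
const weightPerComment = 1; // cred minted per comment
const flowToAuthor = 0.5; // hypothetical fraction flowing back to the author

const mintedCred = spamAccounts * commentsPerAccount * weightPerComment;
const credCaptured = mintedCred * flowToAuthor;
console.log(`minted: ${mintedCred}, captured by attackers: ${credCaptured}`);
// => minted: 1000, captured by attackers: 500
```

Even if each comment is worth very little, nothing bounds the total except how fast the accounts can post.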
Other means of farming are definitely conceivable, for example trying to avoid detection by appearing more legitimate: automatically posting "LGTM" on PRs, or "What do you mean by that @some-involved-user?" on issues that already have a couple of comments (hoping to provoke replies, since those would flow more indirect cred to the attacker). It would be much less obvious whether such an account is a bot or a real user.
Some thoughts on handling this. The previous idea of cost may help: Cred, Cost, and 'Resistance'. An overt spam attack would have low value, and that could be offset further by increasing its cost, since it now requires moderation effort to clean up. Depending on the implementation, a bot account might even work its way into "debt", or create such high resistance for itself that it ends up in a kind of isolation, making further spamming pointless as a way to earn money.
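How costs get attributed is exactly the open question, but just to illustrate the "debt" idea with made-up numbers:

```ts
// Sketch of net-cred accounting under the "cost" idea; the attribution
// model (who bears which cost) is left open, as in the linked thread.
interface Account {
  id: string;
  earnedCred: number;
  attributedCost: number; // e.g. moderation effort spent cleaning up after them
}

function netCred(a: Account): number {
  return a.earnedCred - a.attributedCost;
}

const bot: Account = { id: "spam-bot", earnedCred: 500, attributedCost: 800 };
console.log(netCred(bot)); // -300: the account is in "debt"
```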
As that thread suggested though, it also means you need to figure out how to attribute costs, and it has implications for all cred flow, not just abuse.
Perhaps a simpler approach would be to think of moderation tools that discourage future misbehavior: a one-off cred penalty, temporarily zeroing out a user's cred for N weeks (timeline windows), permanently blacklisting a user's cred. The usual moderation tools. They could be encoded in the graph as a new node with specialized edges, or they could just be code paths that bypass the algorithm for enforcement. What I don't like about this approach is that it's reactive. It also requires spending precious time dealing with abuse, and at best it only nullifies the damage done. Worst case, the attacker escapes notice and gets paid out in currency before moderation happens. You're more or less in the same boat as email there: anyone can attack, the attacker loses nothing if it fails, and defending is costly. You can do sophisticated pattern scanning (kind of like anti-malware for email), coordinated global blacklisting, machine learning for new patterns, etc. But you won't catch everything, and there's still no reason for attackers to stop.
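Encoded as a post-processing step that bypasses the algorithm, it might look roughly like this (hypothetical shapes, not an actual SourceCred API):

```ts
// Moderation actions applied over per-week timeline cred intervals.
type Action =
  | { kind: "PENALTY"; user: string; amount: number } // one-off deduction
  | { kind: "MUTE"; user: string; fromWeek: number; weeks: number } // zero out for N weeks
  | { kind: "BLACKLIST"; user: string }; // permanently remove all cred

// credByWeek.get(user)[week] = cred earned by that user in that window
function applyModeration(
  credByWeek: Map<string, number[]>,
  actions: Action[]
): Map<string, number[]> {
  const out = new Map<string, number[]>();
  credByWeek.forEach((weeks, user) => out.set(user, [...weeks]));
  for (const a of actions) {
    const weeks = out.get(a.user);
    if (!weeks) continue;
    switch (a.kind) {
      case "PENALTY":
        weeks[weeks.length - 1] -= a.amount;
        break;
      case "MUTE":
        for (let w = a.fromWeek; w < a.fromWeek + a.weeks && w < weeks.length; w++)
          weeks[w] = 0;
        break;
      case "BLACKLIST":
        weeks.fill(0);
        break;
    }
  }
  return out;
}
```

Note this still only removes cred after the fact; it does nothing about damage already converted to currency.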
Different from email though: what if we protect the transition from cred to currency? When you're paying people based on cred, maintainers could need to approve users for payout, and users could need to claim it with some simple protections like a captcha. That would at least eliminate overt attacks, I assume. However, it's still an administrative burden.
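As a sketch, the payout gate could be as simple as this (again, made-up names):

```ts
// Gate the cred-to-currency transition behind two cheap checks.
interface PayoutClaim {
  user: string;
  approvedByMaintainer: boolean; // maintainer has vetted this account
  captchaPassed: boolean; // simple proof-of-humanity on each claim
}

function mayPayOut(claim: PayoutClaim): boolean {
  return claim.approvedByMaintainer && claim.captchaPassed;
}
```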
What are your thoughts and ideas?