I wanted to open this thread to get a general discussion going on the potential for using randomness to fight the gaming of SC. The idea here is that any policy creates an incentive structure which, even if not gamed, will orient contributors. Using some form of noise/randomness to make the incentive structure less consistent or deterministic could help keep contributors from over-optimizing around it.
This could be injecting randomness in the parameters of a grain distribution policy, but it also could be a level deeper, adding some noise to the graph itself, probably in the edges.
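To make the edge-noise version concrete, here's a minimal sketch. The graph representation, the function name, and the log-normal noise choice are all my own assumptions for illustration; SourceCred's actual graph format differs.

```python
import random

def fuzz_edge_weights(edges, sigma=0.1, seed=None):
    """Return a copy of the edges with multiplicative log-normal
    noise applied to each weight.

    `edges` is assumed (hypothetically) to be a mapping from
    (src, dst) pairs to positive weights. Multiplicative noise
    keeps weights positive and preserves rough relative ordering.
    """
    rng = random.Random(seed)
    return {
        edge: weight * rng.lognormvariate(0, sigma)
        for edge, weight in edges.items()
    }

# Toy example: two contribution edges into a hypothetical PR node.
edges = {("alice", "pr-42"): 1.0, ("bob", "pr-42"): 2.0}
fuzzed = fuzz_edge_weights(edges, sigma=0.1, seed=7)
```

Seeding the RNG (e.g. from a commit hash) would at least keep a given distribution reproducible after the fact, which matters for auditability.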
Don’t focus too much on my examples. I’m more curious about what y’all think about noise in general.
re: noise in general: I have generally found that attempts at vote
fuzzing are ineffective. It is trivial to write a scraper to achieve
arbitrarily good confidence intervals around (say) Glassdoor salary
ranges or Reddit upvote counts. I’ve done so, and reported it; Glassdoor
gave me a “wontfix”.
This generalizes. If you want to provide an unbiased estimate of a
quantity, then you cannot get around disclosing the expectation of that
quantity as the number of samples increases.
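A quick simulation of the point: zero-mean fuzzing washes out under
repeated sampling, so a scraper that queries many times recovers the
true value to arbitrary precision. (The noise model here is my own
stand-in, not any site's actual fuzzing scheme.)

```python
import random

def fuzzed_count(true_count, sigma, rng):
    # Each query returns the true count plus zero-mean Gaussian noise,
    # standing in for a fuzzed upvote count or salary estimate.
    return true_count + rng.gauss(0, sigma)

rng = random.Random(0)
true_count = 1234
samples = [fuzzed_count(true_count, 5.0, rng) for _ in range(100_000)]
estimate = sum(samples) / len(samples)
# The sample mean converges to the true count; the standard error
# shrinks like sigma / sqrt(n), here about 5 / sqrt(1e5) ~ 0.016.
```

So unless the noise is biased (in which case the estimate is no longer unbiased), more samples always defeat the fuzzing.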
So, given the obvious drawbacks of introducing randomness—confusion,
poor inspectability, irreproducibility, increased variance of cash
flow—I suspect that I would oppose many such attempts. But I’m not
opposed to the idea in principle! If you think that you have an approach
that doesn’t hit these pitfalls, I’d be interested to hear it.
I previously worked at a company whose core product began as an algorithm for surfacing signals in extremely noisy data sets (namely, quantum-physics data). The level of sophistication they brought to it was beyond what your average individual hacker or team of programmers could mimic, but much of the underlying algorithmic work was not so complex that it would be impossible to repurpose against something like a SourceCred noise scheme.
I’m with @wchargin in that the drawbacks would likely far outweigh the benefits. But personally, this feels like a technical solution to a human problem, which (to me) is not aligned with the core vision behind SC, which I would characterize instead as a technical supplement to solving human problems with human collaboration.
It’s not a bad analogy: in the quantum case, the relevant formalism is
the threshold theorem, which indeed is an important part of why
quantum computing is viable. It basically says that you only incur a
polylog penalty for arbitrarily good error correction.
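For reference, an informal statement of the theorem (my paraphrase, not a precise formulation):

```latex
% If the physical error rate per gate $p$ is below a constant
% threshold $p_{\mathrm{th}}$, then any ideal circuit of $N$ gates
% can be simulated to within error $\epsilon$ by a fault-tolerant
% circuit of size
N' = O\!\left(N \cdot \mathrm{polylog}\!\left(\frac{N}{\epsilon}\right)\right),
\qquad p < p_{\mathrm{th}}.
```

Note the key precondition: the underlying error rate must already be below a fixed threshold. Error correction amplifies a good-enough signal; it doesn't rescue an arbitrarily noisy one.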