What
This is a proposal to maintain a limited number of code experiments and use them for the CredSperiment.
Why
We have a review process in place which reliably creates the feedback needed to keep smelly code from creeping into the project, and which sparks great discussions about the changes we’re making.
However, I think it’s worth acknowledging that it hampers our ability to conduct experiments, especially when it comes to code.
There are two code initiatives that I’m championing.
Working on these surfaced a lot of smells and quirks in the code, so great refactoring efforts came out of it. I believe those refactors will pay off in the long run. But in the short term, it’s painful that we haven’t been able to use Initiatives for the CredSperiment in the ~3 months this feature has been planned and talked about.
Roughly speaking, I think there are two general approaches to move these experiments along faster:
- Lower our review standards, so hacky code for the sake of experiments is accepted.
- Maintain experimental code in its own branch/fork, not accepting it as good-enough long-term code, but using it for the CredSperiment.
Personally, I think option 1 would be a mistake that will come back to bite us. So my proposal is to consider option 2.
How
Each Maintainer will have 2 experiment slots.
An experiment must have a Champion, and the experiment should be community approved. By “should” I mean the TBD can veto, but should avoid doing so.
For simplicity, the maintainer and champion are required to be the same person. I’ll refer to them as “the champion” from here.
Rationale
Maintaining an experiment will require frequent attention, and will take time away from the “real” code. That’s why I think there should be limited slots. Even then, I think both the Champions and the community should question whether they have the time and commitment to maintain as many as 2.
It should also prevent the “everything is an experiment” trap. If we want more experiments but think the current ones are too valuable to stop, the only way is to have more maintainers, or to make sure existing experiments are implemented as “real” non-experimental features.
The reason only Maintainers should do this is that it has very real consequences for the CredSperiment. Being allowed to influence the CredSperiment, especially under relaxed review standards, requires skill, responsiveness, and high community trust.
Example lifecycle of an experiment
Please don’t get too caught up in the details of this example; it’s only illustrative for this proposal.
1) Proposal
The champion writes a proposal for the experiment.
- It should at least include the “what” and the “why”.
- A note that you would be the champion.
Proposals for starting a new experiment should be separate from proposals for ending one. If the champion doesn’t have any slots left, they should first propose ending one of their current experiments.
The TBD makes a judgement call about when there is a rough consensus.
2) Developing
Each experiment has its own branch. For example: experiment/beanow-initiatives.
And we’d have an “official” branch, such as credsperiment, which combines them. This branch shouldn’t be directly edited. Its only purpose is to be frequently reset to master, with the community-approved experiments merged back in. (Manually for now, perhaps with a nightly CI to test it.)
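As a sketch, the reset-and-merge could look like the following. The branch names are the examples from this proposal; the exact commands are up to whoever maintains the branch.

```shell
# Rebuild the combined "credsperiment" branch from master.
git checkout credsperiment
git reset --hard master

# Merge each community-approved experiment back in.
git merge --no-ff experiment/beanow-initiatives

# The reset rewrites history, so publishing would need a force push,
# e.g.: git push --force-with-lease origin credsperiment
```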
To make changes to an experiment/* branch, PRs and reviews are used, but review standards take into account that it is an experiment. The main goal of those PRs is a sanity check: Is this in line with the proposal? Are we doing something that is really broken or flawed? Is there low-hanging fruit that would make this significantly better?
The champion is expected to frequently make sure credsperiment can be reset from master.
If falling behind on this prevents the weekly update, the TBD may end the experiment.
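One way a champion might keep up with this (a sketch, using the example branch name above) is to regularly merge master into the experiment branch and resolve conflicts there:

```shell
# Keep the experiment branch mergeable with master, so that
# credsperiment can always be reset and the experiment merged back in.
git checkout experiment/beanow-initiatives
git merge master   # resolve any conflicts here, then push as usual
```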
3) Amending an experiment
Hopefully the result of an experiment is that we’ll learn more and learn it sooner. So an experiment needs to be amendable.
This should also be in the form of a proposal. Make sure it includes:
- Summary of lessons learned.
- The “what” and “why” of the amendment.
4) Ending an experiment
Ideally the outcome of an experiment is:
- It was valuable, so we’ve built it as a non-experimental feature!
- We learned it wasn’t too great, so we’re abandoning it. (Yay! We learned things and saved time!)
That said, ending an experiment can be disruptive for the CredSperiment. So it shouldn’t be taken lightly.
The champion may end an experiment themselves; for example, this makes sense if they’re overburdened. And as mentioned above, the TBD may end one if the experiment falls behind in maintenance.
But using a proposal is strongly encouraged whenever you can, especially if the reason could use feedback, like: “I think this experiment hasn’t turned out to be very valuable”. Not only is it worth checking whether that’s subjective; even if people agree, it’s good to have a record of lessons learned.
Requesting feedback
I’ve provided a concrete example of what the workflow could look like. The details of it may change. Most of all, I would love to hear whether we should implement a system like this.