
Proposed release process

For a long time, we made firmware/GC releases every 2 weeks. Then, as development slowed, Bar started making releases every 2 months. During this time we went from Bar doing everything to a GitHub-based process where anyone can submit a PR that will be auto-merged if more people vote for it than against it. We keep the firmware and GC version numbers in sync so there isn’t any confusion about what matches with what.

With Bar taking a break at the beginning of the year, we broke the process and have not had a release since January.

One other thing that has changed is that we have multiple teams shipping hardware rather than only Bar.

Currently Bar is waiting for the teams to test on their hardware before formalizing the next release.

I would like to propose the following schedule for new releases, with the intent of removing any dependency on any individual.

  1. At the beginning of even-numbered months, we tag and build a candidate new release.
  2. We then wait 7 days to allow people to test it.
  3. If nobody reports problems, the release becomes official.

I’m hoping that we can automate this process, similar to how the mergebot works today: a bot creates the tag, builds the packages, and opens the tracking issue, and then 7 days later checks the votes and either makes the release or sends out a notification that there are problems to be fixed.
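The bot’s release decision could be sketched as a small pure function. Everything here is hypothetical (function name, inputs, and thresholds are illustrative, not an existing mergebot API); it just encodes the proposed rules: wait out the test window, block on any reported problem, and require some minimum number of successful test reports.

```python
from datetime import date, timedelta

# Hypothetical decision logic for a release bot, assuming a 7-day test
# window and a configurable minimum number of successful test reports.
def release_decision(tag_date: date, today: date,
                     success_reports: int, problem_reports: int,
                     min_successes: int = 3, window_days: int = 7) -> str:
    """Return 'wait', 'release', or 'hold' for a candidate tagged on tag_date."""
    if today < tag_date + timedelta(days=window_days):
        return "wait"      # still inside the testing window
    if problem_reports > 0:
        return "hold"      # any reported problem blocks the release
    if success_reports < min_successes:
        return "hold"      # not enough evidence that testing happened
    return "release"
```

For example, a candidate tagged on the 1st with no problem reports and enough success reports would flip from "wait" to "release" on the 8th.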

Because I expect equipment providers to come and go over the years, I am proposing this as a default-go process rather than one that requires explicit approval from any fixed list of people.

I suggest even-numbered months as this keeps the Jan 1 and July 4 holidays out of the testing window. This and the 7-day test period are straw-man suggestions, offered so there is a concrete policy to debate.

More background: regular, predictable releases reduce the tendency for people to rush stuff in, because if they miss a release, the next one is not far off.

I would suggest that when we go to version 2,[1] we consider moving to a 2.YYMM scheme so that the version itself makes obvious how much time there is between releases.

[1] I think that holey calibration and webcontrol are a combination of features that could justify a version 2 change but that’s just a personal opinion.


It is important that we know that testing has been done. I suggest that if fewer than -pick-a-number- people report successful testing, the release does not proceed.


Sure, the bot currently requires 2 votes to merge (its own plus 1); that logic
should be easy to add.

Is there a way to tell if nothing has been merged? That is, do you want to generate a new release if nothing has changed just because the time window has closed?


I’m talking about more than adding an emoji; how about counting the PR comments that contain “I’ve tested this and it worked here”? Testing is important.
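That counting could be sketched as below, assuming the PR’s comment bodies have already been fetched. The marker phrase and helper name are illustrative, and the matching is deliberately loose (case-insensitive substring), since the exact wording will vary from poster to poster:

```python
# Hypothetical: count PR comments that explicitly confirm a successful test.
TEST_MARKER = "i've tested this"

def count_test_confirmations(comment_bodies: list[str]) -> int:
    """Count comments containing the test-confirmation marker phrase."""
    return sum(1 for body in comment_bodies
               if TEST_MARKER in body.casefold())
```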


Have any significant “upgrades” happened since January? We still need to test the files Bar sent. Agree with blurfl: one week is too short.


It looks to me like releasing Firmware v1.27 at this point would capture:

Merged #480 Change originalChainLength to 1651 mm
Merged #481 Chain length calculation improvement: stretch compensation
Merged #486 Step further back to PlatformIO 3.5.3
Merged #489 Fix to EEPROM write issue
Merged #490 change firmware link
Merged #494 detach() should run once per state change
Merged #498 Move directWrite outside the loop
Merged #499 Avoid compiler warnings about aux3…aux9
Merged #501 Avoid PWM value 255 for TLE5206
Merged #507 rename RingBuffer to maslowRingBuffer
Merged #510 When (pinCheck >= 6) the board version matches the version strapping
Merged #512 Alarm if board version is unrecognized
Merged #513 recognize a soft reset command in the serial stream
Merged #514 Limit PWM frequency value
Merged #519 Adds directWrite(0) at end of B11

This last one is important.

It looks to me like releasing GC v1.27 at this point would capture:

Merged #789 Change default extend chain length to 1651 mm
Merged #794 Clean up serialPortThread() connection handshake


Yes, on the basis that it’s extremely unlikely that there has been no change to
either GC or the firmware, and we want to keep the two versions in sync, so make
a new release, even if nothing has changed.

But yes, it is possible to check that nothing has changed, if we wanted to.
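One way that check could look, sketched with git’s own plumbing (the tag name below is a placeholder; a real bot would compare against the previous release tag):

```python
import subprocess

def commits_since(tag: str, repo: str = ".") -> int:
    """Count commits reachable from HEAD but not from the given tag.

    A result of 0 means nothing has been merged since that release tag,
    so the bot could skip creating a new release.
    """
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--count", f"{tag}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())
```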

David Lang

That is just as easy to fake, and much harder to implement (the exact text
is going to vary).

I am biased towards making a release, and I don’t care if there are 50 “it
worked for me” posts if there is one “it did this funny thing for me” post

David Lang

So how long is reasonable?

David Lang

I agree, a hint of trouble should postpone the release.


A separate comment would indicate a level of involvement beyond simple interest in a new feature - a commitment to check that it works as expected and doesn’t cause other issues.

I am a big believer in reducing the friction to make things happen. It seems
like a small thing, but it does add friction.

And I don’t believe that it will really mean anything. Why do you think someone
would be willing to check an emoji but would not be willing to post ‘ship it’?

Testing is good, but you can test for months and still miss a bug because you
don’t have the exact conditions needed to trigger it.

Case in point: the bug that has been there for years in B11.

So just because it passes the test (whatever that is) doesn’t guarantee that
it’s good.

But the vast majority of the time, the patches are good, which is why I want to
bias the process towards shipping. All the individual patches were approved
already; it’s good to do a final sanity check, but it’s all too easy to define
a test process that slows things down without actually adding value.

David Lang

Have a care for the guy who loads the new “release” and finds that he has to re-do the calibration instead of cutting the project he’s excited about making. Or the guy whose machine is damaged by a change not sufficiently tested. Move fast and break things is an approach that doesn’t take sufficient account of the effects.


And that is why it is up to us in the community to watch out for such things.

We should be doing this as the patches are introduced (as we are discussing on
the holey thread right now), not wait until the release.

Either the community is going to care about such things, or we aren’t. Adding
extra steps to the release isn’t going to change that.

Making fewer releases is not going to improve things like this. In practice,
more, smaller releases have fewer problems than fewer, larger releases that
theoretically get ‘more testing’.

David Lang

I think we agree that testing patches is the important part. The present robot approach has no safeguards to ensure that any testing has happened at the patch level of the process. Pushing a release for patches that haven’t been tested is what I’m warning against.


I completely agree; ideally we need to brainstorm and agree on a documented testing methodology and get people to sign up as testers.

In our quality system at work we validate changes that have the potential to impact customers in three categories: Installation Qualification, Operational Qualification, and Performance Qualification. Each of these has benchmarks that have to be satisfied in order for a change to be eligible for acceptance.

I’m in the medical device manufacturing industry, so the potential for liability is pretty high, but it’s a system that shouldn’t be hard to adapt to this case.


We may want to consider having a standard set of questions that need to be answered when someone submits a PR. As a sample:

  1. Does this firmware change affect kinematics?
    a) If so, does this change require recalibration?
    b) If so, is there an option for the user to opt out of the change until they are ready to recalibrate? If not, explain why this is not possible.
    c) Has the calibration model in gc/hc/wc been updated to agree with firmware change?
    d) Has this PR been tested on an actual machine and/or in fake servo mode (indicate which, or both)?

I think this type of information would be useful in determining whether a PR gets merged. I’m definitely not trying to build a monster, but it at least forces us to think about these things and the potential unintended consequences.


Good idea. A variation on 5 whys:

I’m not sure how useful this would be. I think we would quickly get to a huge,
intimidating list of questions that are either not going to be answered, or will
scare off contributors.

David Lang