Calibration code / understanding

I’m having a dig around in the calibration code (I like a good algorithm, and geometry). Can I check that my high-level understanding is right before I ask some code questions?

In essence, calibration does the following:

  • Unwind a length X of belt for all belts (measured via belt teeth, which nominally map to an exact length, barring the current talk on stretching).
  • Tighten the belts at an arbitrary point, giving us an initial length for each belt: measurements[0].
  • Slacken a couple of belts, move the Maslow a little, tighten again, giving us measurements[1].
    • Repeat this a few times, giving us an initial array of measurements[].
  • With this initial measurements[] array, try an initial fit of possible solutions for the coordinates of the anchor points.
  • If the fitness is not bad, we can keep the belts tight and try some variation of a 3x3, 5x5, or 7x7 grid to populate our measurements[] array further.
    • Spiralling out from the initial location and based on the calibration area the user specified.

And the fitting is run in the browser of the device controlling the Maslow.
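To make sure I’m picturing the fit correctly, here’s a minimal sketch (in TypeScript, since the fitting runs in the browser) of the kind of least-squares problem I’m imagining: treat the anchor coordinates, and the sled position at each measurement, as unknowns, and minimise the squared difference between predicted and measured belt lengths. The names and the plain gradient-descent solver are my assumptions, not the actual Maslow fitting code.

```ts
// A minimal sketch (not the actual Maslow fitting code) of the kind of fit the
// calibration runs: solve for anchor positions that best explain the measured
// belt lengths, with the sled position at each measurement as extra unknowns.
// Plain numeric gradient descent on a least-squares cost, for illustration.

type Point = { x: number; y: number };
// One calibration measurement: the four belt lengths (TL, TR, BL, BR) in mm.
type Measurement = [number, number, number, number];

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);

// Sum of squared errors between predicted and measured belt lengths.
function cost(anchors: Point[], points: Point[], measurements: Measurement[]): number {
  let err = 0;
  measurements.forEach((m, i) => {
    anchors.forEach((a, j) => {
      const d = dist(a, points[i]) - m[j];
      err += d * d;
    });
  });
  return err;
}

// One pass of numeric-gradient descent over every free coordinate.
// In practice some coordinates would be pinned (e.g. one anchor at the origin)
// so the solution can't translate or rotate freely.
function step(anchors: Point[], points: Point[], measurements: Measurement[], lr = 1e-3): void {
  const eps = 1e-4;
  for (const p of [...anchors, ...points]) {
    for (const axis of ['x', 'y'] as const) {
      const base = cost(anchors, points, measurements);
      p[axis] += eps;
      const grad = (cost(anchors, points, measurements) - base) / eps;
      p[axis] -= eps + lr * grad; // undo the probe, then descend
    }
  }
}

// Usage: start from rough guesses and iterate until the cost stops improving.
// for (let i = 0; i < 5000; i++) step(anchors, points, measurements);
```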

I guess my initial question (other than ‘do I understand right?’) is whether we keep all measurements done in every stage in measurements[], or if we chuck any initial ones / only fit for an NxN set of results? I’m sure the answer is in the firmware code but I haven’t got round to digging into that yet!

Dave wrote:

I guess my initial question (other than do I understand right) is whether we
keep all measurements done in every stage in measurements[] or if we chuck
any initial ones / only fit for a NxN set of results? I’m sure the answer is
in the firmware code but I haven’t got round to digging into that yet!

I believe that it keeps all measurements as it goes.

David Lang

@dlang is right, we are keeping all of the measurements.

There has been some playing around with cherry-picking measurements (say, throwing out the worst 10%), but experimentally that doesn’t seem to give better results (although a lot more testing is needed).
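For anyone curious, a hedged sketch of what that cherry-picking could look like: score each measurement against the current fit and drop the worst fraction before refitting. The types and function names here are just for illustration and don’t come from the shipped code.

```ts
// Sketch of "drop the worst 10%": rank each measurement by its residual under
// the current anchor fit and keep only the best 90% for the next fitting pass.
// Types mirror the earlier sketch; none of this is the shipped implementation.

type Point = { x: number; y: number };
type Measurement = [number, number, number, number];

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);

// Residual of one measurement: squared error across its four belt lengths.
function residual(anchors: Point[], point: Point, m: Measurement): number {
  return anchors.reduce((sum, a, j) => {
    const d = dist(a, point) - m[j];
    return sum + d * d;
  }, 0);
}

// Indices of the measurements to keep after dropping the worst `fraction`.
function keepBest(
  anchors: Point[],
  points: Point[],
  measurements: Measurement[],
  fraction = 0.1
): number[] {
  const ranked = measurements
    .map((m, i) => ({ i, r: residual(anchors, points[i], m) }))
    .sort((a, b) => a.r - b.r);
  const keep = Math.ceil(ranked.length * (1 - fraction));
  return ranked.slice(0, keep).map((e) => e.i);
}
```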

Bar wrote:

There has been some playing around with cherry picking some measurements (say
throwing out the worst 10%) but experimentally that doesn’t seem to give
better results (although a lot more testing is needed)

If you are getting good tension and no arm/frame collisions, there’s no need
to throw any of them out.

Detecting that some are way off from others is an indication that something is
wrong, and you need to go back, re-measure, and figure out why those points are
bad.

David Lang

That’s a good point - any valid data point is good! Though more points mean more processing time if we’re limited to browser calculation.

From reading around, it seems like the main way we’re avoiding bad data points is avoiding overly large calibration grids where the corners are likely to have arm collisions.

It’s not clear from the code that we’re doing any sanity checking of the data points, but maybe I’m missing something?

I feel like, if we have at least some idea of the size of the frame (from user measurements of their frame, for example), we could calculate whether a data point is ‘risky’ (might have arm collisions) and potentially reject it.
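Something like this rough sketch is what I have in mind - the margin value and the simple distance-to-anchor test are just my assumptions, not anything in the firmware:

```ts
// Hedged sketch of a "risky point" filter: given a rough user-supplied frame
// size, reject calibration targets that come closer to any anchor than some
// margin, where arm/frame collisions are most likely. The margin and the
// simple distance test are assumptions, not the firmware's actual rule.

type Point = { x: number; y: number };

function isRisky(target: Point, anchors: Point[], marginMm = 250): boolean {
  return anchors.some(
    (a) => Math.hypot(a.x - target.x, a.y - target.y) < marginMm
  );
}

// Example: a 2400 x 1200 frame with anchors at the corners (rough numbers).
const anchors: Point[] = [
  { x: 0, y: 0 }, { x: 2400, y: 0 }, { x: 0, y: 1200 }, { x: 2400, y: 1200 },
];
console.log(isRisky({ x: 150, y: 100 }, anchors));  // true - too close to a corner
console.log(isRisky({ x: 1200, y: 600 }, anchors)); // false - near the centre
```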

I wonder if that can be taken even further. In my head, for horizontal use, it seems like if we could get a user to plonk the Maslow at the top left and bottom right of a potential calibration area, and take a quick belt measurement from each, then we could run the calibration in an area defined by those. I think those two measurements would give you enough to run with tight belts, but I’m not certain. :thinking: But I also need to get a better grasp of the code for moving the Maslow / controlling the belts to get an idea of what gotchas there are.

I also wonder, in a much more general sense, if a step back to what problem we’re trying to solve might help. I haven’t thought this through :laughing: but part of me wonders if we’re looking at this as ‘measure the frame size, calculate how many belt teeth to move for a given distance, calculate the number of teeth if I need to move X centimetres for a cut’,
BUT
given the belt stretch questions (and more generally) I wonder if you can get a better result with some form of non-linear control, where we sample and build up a map of ‘this point requires this many teeth on each belt to reach’ and only do small local interpolations between the points in the map. It might imply some level of user measuring during the calibration - I guess it might have come up before and been rejected for that reason?
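As a rough illustration of what I mean by the mapped approach (the grid layout, names, and the bilinear interpolation are assumptions for the sketch only, not anything in the Maslow code):

```ts
// Hedged sketch of the "map of measured belt lengths" idea: sample belt
// lengths on a regular grid, then bilinearly interpolate between the four
// surrounding samples for a target that falls in between.

// lengths[row][col] holds the four measured belt lengths at that grid node.
type BeltLengths = [number, number, number, number];

interface BeltMap {
  originX: number;  // x of column 0, in mm
  originY: number;  // y of row 0, in mm
  spacing: number;  // grid spacing, in mm
  lengths: BeltLengths[][];
}

// Bilinear interpolation of all four belt lengths at (x, y).
function beltsAt(map: BeltMap, x: number, y: number): BeltLengths {
  const gx = (x - map.originX) / map.spacing;
  const gy = (y - map.originY) / map.spacing;
  // Clamp to the last full cell so we always have four surrounding samples.
  const c0 = Math.max(0, Math.min(map.lengths[0].length - 2, Math.floor(gx)));
  const r0 = Math.max(0, Math.min(map.lengths.length - 2, Math.floor(gy)));
  const tx = gx - c0;
  const ty = gy - r0;
  const mix = (a: number, b: number, t: number) => a * (1 - t) + b * t;
  return [0, 1, 2, 3].map((belt) => {
    const top = mix(map.lengths[r0][c0][belt], map.lengths[r0][c0 + 1][belt], tx);
    const bottom = mix(map.lengths[r0 + 1][c0][belt], map.lengths[r0 + 1][c0 + 1][belt], tx);
    return mix(top, bottom, ty);
  }) as BeltLengths;
}
```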

Dave wrote:

That’s a good point - any valid data point is good! Though more points means more processing time if we’re limited to browser calculation.

We are using the browser because even people’s phones are so much more powerful
that they can do the math faster in JavaScript than the ESP32 can in C.

From reading around, it seems like the main way we’re avoiding bad data points
is avoiding overly large calibration grids where the corners are likely to be
having arm collisions.

Correct.

It is not clear from the code we’re doing any sanity checking of the data points but maybe I’m missing something?

We are not.

I feel like if we have at least some idea of the size of the frame (from user
measurements of their frame for example) we can calculate whether a data point
is ‘risky’ (might have arm collisions) and potentially reject it.

Frame collisions are only one cause of bad data points.

Belts snagging on things (wasteboard, hoses, etc.) and frame flexing are also
significant risks.

I wonder if that can be taken even further - in my head, for horizontal, it
seems like if we could get a user to plonk the Maslow top left & bottom right
of a potential calibration area, and take a quick belt measurement from each,
then we could run the calibration in an area defined by that - I think those
two measurements would give you enough to run with tight belts but i’m not
certain. :thinking: But I also need to get a better grasp of the code for
moving the Maslow / controlling the belts to get an idea of what gotchas there
are.

It’s worth a try. There are codes that you can send to control the Maslow belts
directly, and I believe you can also query the belt lengths. One issue you will
run into is figuring out when to stop pulling on a belt.

Part of the goal of the current calibration process is to require as little
human input and measurement as possible.

The earliest calibrations tried moving large distances between the measurements,
and that caused problems with belts getting caught up in the gears. The new
shields were created, but the calibration was also changed to not move as far
before pulling the belts tight again, so that there is never too much slack.

I also wonder in a much more general sense if a step back to what problem
we’re trying to solve might help. I haven’t thought this through :laughing:
but part of me wonders if we’re looking at this as ‘measure the frame size,
calculate how many belt teeth to move for a given distance, calculate the
number of teeth if I need to move X centimetres for a cut’ BUT given the belt
stretch questions (and more generally) I wonder if you can get a better result
with some form of non-linear control where we sample and build up a map of
‘this point requires these many teeth on each belt to reach’ and only do small
local interpolations between the points in the map. It might imply some level
of user measuring in the calibration - I guess it might have come up before
and been rejected for that reason?

The problem is how you define ‘this point’, especially with high accuracy. The
webcontrol project (running the original Maslow) tried an ‘optical calibration’
setup where they put a camera in the router to spot the crossing points of a
grid, for exactly your approach. They ran into the problem that there is no good
way to get accurate grids to calibrate on at our scale. They tried getting
posters printed, and (at least at the time) found that such prints by commercial
printers were not reliably accurate along the roll.

David Lang

Yeah, I used to work on phone chips :slightly_smiling_face: In the future it would be interesting to look at whether the fitting could be offloaded to the GPU using WebGL; I don’t think there’s any reason it shouldn’t be possible :thinking:

True, true!
I wonder if there’s scope for trying things like sub-sampling the set of points into 2/3/4 sets, fitting each one, taking the best, and testing it against the rest - something like the sketch below.

And/or returning to previously tried points that are suspect but seem like they should give valid results.
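Roughly what I mean by the sub-sampling, as a sketch (fitAnchors and scoreFit are placeholder signatures standing in for a real fit like the one sketched earlier, not existing functions):

```ts
// Hedged sketch of the sub-sampling idea: split the measurements into a few
// random subsets, fit each, and keep whichever candidate scores best against
// the measurements it did not see.

type Point = { x: number; y: number };
type Measurement = [number, number, number, number];
type FitFn = (subset: Measurement[]) => Point[];                       // returns anchor guesses
type ScoreFn = (anchors: Point[], holdout: Measurement[]) => number;   // lower = better

function bestSubsetFit(
  measurements: Measurement[],
  nSubsets: number,
  fitAnchors: FitFn,
  scoreFit: ScoreFn
): Point[] {
  // Crude shuffle (fine for a sketch), then deal indices round-robin into groups.
  const idx = measurements.map((_, i) => i).sort(() => Math.random() - 0.5);
  const subsets: Measurement[][] = Array.from({ length: nSubsets }, () => []);
  idx.forEach((i, k) => subsets[k % nSubsets].push(measurements[i]));

  let best: { anchors: Point[]; score: number } | undefined;
  subsets.forEach((subset, k) => {
    const anchors = fitAnchors(subset);
    // Score against everything *outside* this subset.
    const holdout = subsets.filter((_, j) => j !== k).flat();
    const score = scoreFit(anchors, holdout);
    if (!best || score < best.score) best = { anchors, score };
  });
  return best!.anchors;
}
```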

Yeah, I understand the drive to minimise user interaction. In my experience it can be a double-edged sword sometimes - it can force you into assumptions that you never test against the user’s assumptions, and that ends up tripping you up.

What I was thinking was aimed at being more interactive but lessening the amount of belt spooling done automatically before we’re in tight-belt calibration (I think horizontal gives opportunities because the device can just sit there without support) - I’ll think on it a bit more and maybe try a few ideas over the weekend as to how it could work.

Yeah, I see that - it’s not a trivial problem and what you can get the user to do is limited. Off the top of my head I wondered: if you get the user to plonk a full 8x4 sheet down as the spoil board and move the Maslow around till it’s near a corner, you can get the user to measure the distance from sled edge to board edge and repeat a few times (partly because the sled is round, which helps). It’s not super accurate with a tape measure, but laser measures are not exactly expensive anymore :thinking:
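As a sketch of the geometry I’m picturing (the sled radius and sheet size are illustrative numbers, not firmware constants): because the sled is round, the centre sits one sled radius beyond the measured gap from each board edge.

```ts
// Hedged sketch of turning "sled edge to board edge" measurements near a
// corner into a sled-centre estimate. The radius and board size below are
// illustrative numbers only.

const SLED_RADIUS_MM = 150;                   // assumed sled radius
const BOARD = { width: 2438, height: 1219 };  // 8x4 ft sheet, in mm

// Gap measurements taken from the sled edge to the nearest board edges while
// the sled sits near the bottom-left corner of the sheet.
function sledCentreNearBottomLeft(gapToLeftEdge: number, gapToBottomEdge: number) {
  return {
    x: gapToLeftEdge + SLED_RADIUS_MM,
    y: gapToBottomEdge + SLED_RADIUS_MM,
  };
}

// e.g. an 85 mm gap to the left edge and 120 mm to the bottom edge:
console.log(sledCentreNearBottomLeft(85, 120)); // { x: 235, y: 270 }
```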

Dave wrote:

With what I was thinking it was aimed at being more interactive but lessening
the amount of belt spooling done automatically before we’re in
tight-belt-calibration (I think horizontal gives opportunities because the
device can just sit there without support)

That’s what we are trying to do now: we get an initial guess with everything
tight roughly in the center (belts close to the same length).

Then we do a few short moves to get a few more points, which don’t require
letting out much belt, then make some calculations on those points. That usually
gets fairly close, enough that we should not have a lot of loose belt after
that.

Then we do the next ring, all requiring small amounts of belt movement, and then
refine the calculations.

As mentioned earlier: instead of defining the grid in terms of size and number
of points, define it in terms of the distance between points, possibly making
the distance smaller initially when we are less confident (so that less belt
needs to be fed out).
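As a rough sketch of what a spacing-defined pattern could look like (the ring layout and the spacing schedule here are illustrative only, not the current calibration code):

```ts
// Hedged sketch of a spacing-defined pattern: generate square rings outward
// from the starting point, where each ring's distance from the centre comes
// from a spacing schedule (smaller early on, when we are less confident and
// want to feed out less belt).

type Point = { x: number; y: number };

function ringPoints(centre: Point, ringSpacingsMm: number[]): Point[] {
  const points: Point[] = [centre];
  let radius = 0;
  for (const spacing of ringSpacingsMm) {
    radius += spacing;
    // 8 points per ring: corners and edge midpoints of a square of size `radius`.
    for (const dx of [-radius, 0, radius]) {
      for (const dy of [-radius, 0, radius]) {
        if (dx !== 0 || dy !== 0) points.push({ x: centre.x + dx, y: centre.y + dy });
      }
    }
  }
  return points;
}

// Rings at 150 mm, then 400 mm, then 800 mm from the centre (cumulative spacing).
console.log(ringPoints({ x: 0, y: 0 }, [150, 250, 400]).length); // 1 + 3*8 = 25 points
```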

Yeah, I see that - it’s not a trivial problem and what you can get the user to
do is limited - off the top of my head I wondered, if you get the user to
plonk a full 8*4 as the spoil board, move the Maslow around till it’s near a
corner, you can get the user to measure distance sled-edge to board-edge and
repeat a few times (partly because the sled is round which helps). It’s not
super accurate with a tape measure, but laser measures are not exactly
expensive anymore :thinking:

It’s worth experimenting with.

David Lang
