Holey Calibration Error Checking

So about two days ago I had a thought that I have been working on. The Holey Calibration has two squares that look like this:

Point 1 * --- M1 --- * Point 2
        |\          /|
        |  M6     M5 |
        |    \   /   |
        M3     X    M4
        |    /   \   |
        |  /      \  |
        |/          \|
Point 3 * --- M2 --- * Point 4

Now that is actually one more measurement than we need to fix all of those points on an XY coordinate plane.

So I thought: what if I placed all of these points using 5 of the measurements, calculated the missing distance from those points, and compared it to the unused 6th measurement? My thought was that if I repeated that 6 times, I could find the measurement that is the biggest outlier. This could be used to prompt users to check what appears to be the worst measurement.
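To make that concrete, here is roughly what one of the six checks looks like (just a sketch in Python, using the labels from the diagram above as I read it, with M5 running from Point 2 to Point 3 and M6 from Point 1 to Point 4):

```python
import math

def circle_intersection(p0, r0, p1, r1):
    """Return the two intersection points of the circle centered at p0 with
    radius r0 and the circle centered at p1 with radius r1."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from p0 to the chord midpoint
    h = math.sqrt(max(r0**2 - a**2, 0.0))  # half the chord length
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return ((xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d))

def residual_for_M2(m):
    """Place the four corners using M1, M3, M4, M5, M6 and return the measured
    M2 minus the M2 derived from those points."""
    p1 = (0.0, 0.0)        # Point 1 at the origin
    p2 = (m['M1'], 0.0)    # Point 2 along the x axis
    # Point 3 is M3 from Point 1 and M5 from Point 2; take the solution below the top edge.
    p3 = min(circle_intersection(p1, m['M3'], p2, m['M5']), key=lambda p: p[1])
    # Point 4 is M6 from Point 1 and M4 from Point 2; again take the lower solution.
    p4 = min(circle_intersection(p1, m['M6'], p2, m['M4']), key=lambda p: p[1])
    return m['M2'] - math.hypot(p4[0] - p3[0], p4[1] - p3[1])

# Sanity check on a perfect 400 x 300 rectangle (diagonals 500): residual is ~0.
perfect = {'M1': 400, 'M2': 400, 'M3': 300, 'M4': 300, 'M5': 500, 'M6': 500}
print(residual_for_M2(perfect))
```

Doing the same thing five more times, leaving out a different measurement each time, gives the six numbers reported below.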

Except, I don’t think it works … I think my premise is wrong.

Sadly, I had already built all of the work. It was at least a good refresher on deriving geometric functions again (Law of Cosines, intersection points of two circles).

The results I get look like this:

{'M1': -4.769083263379116,
'M2': -4.768156399617851,
'M3': -6.492332606734294,
'M4': -6.444498656751762,
'M5': 3.840564042951428,
'M6': 3.8360705815812253}

Where the number reported is the measured distance minus the distance derived from plotting the points using the other five measurements.

What I think I discovered is that even if there is only one bad measurement (and there never is just one), I can't discern which one is bad. If the bad measurement is included, then the error calculations of all of the other measurements look bad. And when the bad measurement isn't included, its own error calculation is high because, well, it is an error.

Anyways, is there something I am missing? Could one actually learn anything from doing these calculations?

Also, my errors seem very odd to me. It is entirely possible I have made some math mistake. Related lines all look very similar to each other. Plus the error amount, which should be in mm, seems way too high. I could be off by a mm on some measurements at worst, but I would be surprised if it averaged anywhere near that high.

I added assert statements to ensure that the distance between the derived points matches the initial measurements used and they all pass. So I don’t think it is my code.

Have I discovered the error rate of my tape measure? It seems accurate against known things. For example, when measuring 1651mm of chain it matches up with the chain. I have this bad boy, https://www.fastcap.com/product/procarpenter-flatback-tape-measure, which I really like on the Maslow because it is flat. It also seems like a decent company.

Hmm.

Where are your formulae? I could try to derive them, but that would take me way too long. I can do plug-and-chug math though, especially while I babysit Leroy… you know, Leroy Maslow, the big guy cutting stuff for me.

I am heading out for the weekend.

This is an iPython notebook that is a real mess. If you want you can poke around and find everything plus a lot of my other ideas in testing. Sorry for the mess.

https://colab.research.google.com/drive/1O2EACPC6YJuY3aZAwgNT_J_RC4-jZG1N

I like the solver built into Holey Calibration and maybe you could utilize that (it wasn't my solution). I believe it just tries to find a set of values that minimizes the residual error. You could run it with all the values, then run it multiple times removing one value at a time from the data set, and see if there is a value that, when removed, drastically lowers the residual error.
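Something along these lines, perhaps (a sketch only; `solve_calibration` here is a stand-in for however the real solver is invoked and whatever residual it reports, which I haven't checked):

```python
def find_suspect_measurement(measurements, solve_calibration):
    """Run the solver once with everything, then once per measurement with that
    measurement left out, and report how much the residual drops each time.
    The measurement whose removal drops the residual the most is the one to
    suggest the user re-measure."""
    baseline = solve_calibration(measurements)
    drops = {}
    for name in measurements:
        subset = {k: v for k, v in measurements.items() if k != name}
        drops[name] = baseline - solve_calibration(subset)
    suspect = max(drops, key=drops.get)
    return suspect, drops
```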

Yeah, that would work great if we trusted our model to be accurate. But maybe it's not such a good idea if it isn't.

I tend to think that process would generally result in better values as long as our model is reasonably accurate, which I think it is. Isn’t this what you were suggesting? Give the user an opportunity to remeasure something that seems to be an outlier?

Yes, agreed. And you may be right. I am just worried that when testing out which measurement to drop, a bad model may fit better with a bad measurement than it does with a correct measurement.

But this could be a good idea. The rub is that it would require the calibration routine to run 12 times, so it would take 12 times longer. Not bad, but that may be 20 seconds.

Anyways it is worth testing to see what happens. Good suggestion.


OK, I had some more time today. Good news-ish. Inputting the ideal lengths as the measurements results in errors of about 1e-8 (otherwise 0). So my function and math are working.

It looks like the error really is caused by my measurements. Looking at them, they really don't vary much from the ideal measurements; most are under 1mm off. I no longer think it is my tape measure; I think it is the limitation of reading a tape measure incremented in 1mm, plus just the difficulty of measuring finer than that at this distance on this work surface.

Anyways, interesting that an error of ±1mm across 5 measurements leads to an error of 6mm in the distance between two points.
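Out of curiosity, a quick brute-force check of that (using the residual sketch from my first post on the made-up 400 x 300 rectangle, so the exact numbers won't match my frame) shows how much the geometry magnifies the individual reading errors:

```python
import random

def m2_error_spread(trials=10000, jitter=1.0):
    """Add a uniform error in [-jitter, +jitter] mm to each of the five
    measurements used to place the points, and look at the spread of the
    error in the derived Point 3 to Point 4 distance."""
    ideal = {'M1': 400.0, 'M2': 400.0, 'M3': 300.0, 'M4': 300.0,
             'M5': 500.0, 'M6': 500.0}
    errors = []
    for _ in range(trials):
        noisy = {k: v + random.uniform(-jitter, jitter) for k, v in ideal.items()}
        noisy['M2'] = ideal['M2']   # M2 is the derived one, keep it at its true value
        errors.append(abs(residual_for_M2(noisy)))
    errors.sort()
    return errors[len(errors) // 2], errors[-1]   # median and worst-case error

print(m2_error_spread())
```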


Multiplicative error?

Well, except it is + or -, so at least my intuition was that it wouldn’t be that large, that at least some of the error would cancel out. Clearly I was wrong.

Well, some good news. I did in fact have an error in my measurements. I was off by 5mm!! on the distance from the top center to the bottom center. Part of this was probably caused by the fact that I normally measure to the outside edge of the cut circle, but for the M12 measurement I was estimating the center of the circle.

Anyways, that is a lot of error and fixing it, well fixes everything. My measurement checker now returns a result that looks like this:

{'top': 0.18520152819087343,
 'bottom': 0.18516282817017782,
 'left': 0.2501751423641281,
 'right': 0.25031585999715844,
 'up': -0.14896281149367496,
 'down': -0.14875150792386194}

So not bad; it looks like my measurements have an error of about ±1/4 of a mm, which is much more what I expected, and not the 6mm I was seeing before.

Back to my Original Idea
So initially I was looking to create a method that uses the extra measurements to test for user errors, and it looks like it has some value??

Remember that this error checking is done on each “square” of the measurements (see my first post).

So in this particular case, since M6 was wrong, I was in the ultimate bad scenario, because M6 is used on the left and the right square. This meant that error checking reported that both the left and right square were bad.

If any other measurement had been wrong instead, only one of the two squares would have reported as bad. Meaning the user would only have to recheck half the measurements?

That said, I don’t know what it looks like if a user has measurement errors all over the place.

I dunno, does something like this have value? Maybe attempting to split it into halves is too much. Maybe we should just check the overall error and, if it looks high, warn the user? We already have some warnings for obviously bad numbers, but they are a bit limited because they are based on the ideal setup and we have to give a lot of leeway for bad initial settings.
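If it does have value, turning the per-square results into a hint for the user might look something like this (a sketch only; the square and measurement names are placeholders rather than the real Holey Calibration ones, and the threshold is arbitrary):

```python
def suspect_measurements(square_errors, square_measurements, threshold=2.0):
    """Flag each square whose worst residual exceeds `threshold` mm.
    If more than one square is flagged, the measurements they share are the
    prime suspects; otherwise suggest rechecking everything in the bad square(s)."""
    bad = [sq for sq, errs in square_errors.items()
           if max(abs(e) for e in errs.values()) > threshold]
    if not bad:
        return []
    shared = set.intersection(*(set(square_measurements[sq]) for sq in bad))
    if len(bad) > 1 and shared:
        return sorted(shared)
    return sorted(set.union(*(set(square_measurements[sq]) for sq in bad)))

# Hypothetical example: both squares look bad and share measurement 'S',
# so 'S' is the first thing to ask the user to re-measure.
errors = {'left':  {'A': 0.2, 'B': 4.8, 'S': 5.1},
          'right': {'C': 0.3, 'D': 4.6, 'S': 5.0}}
uses = {'left': ['A', 'B', 'S'], 'right': ['C', 'D', 'S']}
print(suspect_measurements(errors, uses))   # -> ['S']
```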


I wonder if maybe you could plot the points and visually highlight the distances that deviate, to show the point that may be off? I think that could help the user remeasure just to make sure, in case they fat-fingered the input and hit the wrong key. I would find it useful, but probably not as a standalone tool; I would probably only use it if it were integrated. Great job though on sub-mm accuracy!

Well, it isn't technically sub-mm accuracy in the machine, just sub-mm accuracy in my measuring. I still get an error of about 2.5mm reported by the calibration. But at this point I think that is the error in our model.

How good do you think it should be?

Well, the 2.5mm error reported is root-sum-squared, so some errors are still larger than 2.5mm. Plus, the error is the error to each related point, not across the whole work surface. So in reality the error is more like +5mm from the top left to the top right.

I would like it to be about 2mm of error, and I think that is doable, particularly since the error looks to be in the model, not in the mechanics. I am going to try to create a transformation function to fix this last little bit of error, but I think it is going to require measuring 25 points, not 6. So I have my work cut out for me.