Calibration Process Brainstorming, Part Deux

So, I was responding to a message regarding finger joints and started to become long-winded and off-topic, so I thought it better to go into a different post. Here’s what I wrote (minus the part about precision and accuracy).

When you need two pieces to fit together perfectly, it’s important that those two pieces be cut to the correct dimensions (i.e., high accuracy). This is something many people are working to improve. What happens when machine accuracy is off is that your “fingers” that are supposed to be 2 cm wide are precisely cut 1.9 cm wide on a piece cut from the plywood above the center of the sheet and 2.1 cm wide on a piece cut below center. I’m not a carpenter, but I suspect that wouldn’t fit together well (or at all).

The calibration routines today try to improve Maslow’s accuracy by finding the right coefficients to apply to the model of the machine that’s programmed into the firmware. Distance between motors, rotation radius, distance to the top of the work surface, etc., are all calibrated so that the firmware can control the motors to get the sled in exactly the right spot. What’s recently been added is a calculation for chain sag. So what’s happening is that the model is being refined to improve accuracy… but there’s another way of accomplishing this that could augment the current calibration method.

I hypothesize that one could measure the accuracy of their particular Maslow at multiple points around the workpiece and then apply compensation factors to their gcode to produce highly accurate cuts. I don’t think software exists for this, but the concept is that if you know a 2 cm cut in a certain location would come out 1.9 cm, you could change the gcode to make a 2.105263 cm cut that would actually result in a 2 cm cut being produced. This, I hypothesize, is possible because of Maslow’s high precision. It could possibly be incorporated into GC and applied to the gcode on the fly. It might also work in the firmware, but I don’t know that you’d want to add a lot more computational load to the Arduino.
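To make the arithmetic concrete, here’s a minimal sketch of the compensation idea. No such software exists yet (as noted above), so the function name and interface here are made up for illustration:

```python
# Hypothetical per-location compensation: if a commanded 2 cm cut comes out
# 1.9 cm at some spot, the local scale error there is 1.9 / 2.0 = 0.95, so
# dividing the desired dimension by that factor should yield the true size.

def compensated_dimension(desired_cm, commanded_cm, measured_cm):
    """Scale a desired dimension by the locally measured error factor."""
    scale = measured_cm / commanded_cm   # e.g. 1.9 / 2.0 = 0.95
    return desired_cm / scale            # e.g. 2.0 / 0.95 = 2.105263...

print(round(compensated_dimension(2.0, 2.0, 1.9), 6))  # the 2.105263 cm from the post
```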

I’m curious as to what people think of this concept. Is it sound? Is it implementable? Is the juice worth the squeeze? I imagine making cuts (or maybe marks with a pencil bit) all over the board and measuring them. Length/width, distance from center… something… to come up with a compensation matrix.


In theory, it is a valid solution, and one I have been considering. There are a few downsides though:

  1. Calibration would take a long time. The more accurate you want the machine, the smaller your calibration steps would need to be, which would mean more time (and more used space on a board) to accomplish
  2. Depending on measurement step size, the memory requirements to store the calibration table could be large (and possibly beyond the capability of our current platform)
  3. Anything that necessitated a re-calibration would potentially require the entire process be repeated, especially if it modified the sled weight, chain weight, or bed lean angle (as these all affect chain sag)

I’ve considered trying a hybrid system, where the current calculations are used as a starting point and then a relatively small table, with steps in the range of 6-12 inches, is used to fine-tune the final chain lengths. But I don’t know how much this would get us, and the calibration process would likely still be lengthy. Could be interesting to try out if you had time though!
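As a rough illustration of what that small fine-tuning table might look like, here’s a sketch of bilinear interpolation over a coarse grid of measured corrections. The grid spacing, the error values, and the table layout are all assumptions for illustration (and edge handling is omitted):

```python
# Sketch of the hybrid idea: a coarse grid of measured corrections at fixed
# spacing, bilinearly interpolated at the commanded position.

def bilerp(table, spacing, x, y):
    """Interpolate a correction from a grid indexed as table[row][col]."""
    cx, cy = x / spacing, y / spacing
    i, j = int(cx), int(cy)          # lower-left grid cell (no edge clamping)
    fx, fy = cx - i, cy - j          # fractional position inside the cell
    def at(col, row):
        return table[row][col]
    return ((1 - fx) * (1 - fy) * at(i, j) + fx * (1 - fy) * at(i + 1, j)
            + (1 - fx) * fy * at(i, j + 1) + fx * fy * at(i + 1, j + 1))

# 3x3 table of x-error in mm at 12-inch (305 mm) steps (made-up numbers):
err_x = [[0.0, 0.2, 0.4],
         [0.1, 0.3, 0.5],
         [0.2, 0.4, 0.6]]

print(bilerp(err_x, 305.0, 152.5, 152.5))  # correction at the first cell's midpoint
```

A 3x3 table like this stores only nine numbers per axis, which hints at why the memory cost scales quickly as the step size shrinks: halving the spacing roughly quadruples the table.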


Interesting enough to give it a good thought.
On the CAD side I was thinking about a ‘background image matrix’ to adjust the file.
Something like the simulation.

Reference points with interpolation in between (hope that is the correct English word).
On the software side that could work as well, but I assume the math would be enormous.


If only I had the time. I agree, it would be a great deal of work to accomplish. It would be best if the whole calibration could be automated. I have a Livescribe pen that uses a built-in camera and a special pattern printed on paper to accurately track where the pen is. If we had a 4x8 sheet of paper with a registration pattern and a camera attached to the sled, the same concept could conceivably work: move the sled to a position, take a picture, determine the error from the picture, and then move to the next spot.


I actually think the math would turn out to be relatively simple. It’s a matrix transformation, really… think image processing (i.e., applying a distortion to an image; same concept).

Need @thormj to chime in… I think he’s done something related to what I envision (with image processing and registration… he’s got a PR up on GitHub for registering a camera image to GC’s depiction of the 4x8 workpiece).


Or roll the calibration up from the other side. Move to fixed points (measured by hand, ok) and tell the software this is x100,y100 and so on. Time consuming…

Replying to post #2

I think using a pencil is much easier for me. Of course, this is how I started. I think maybe it should be an option: an “Advanced” or “High Resolution” mode. When selected, it could prompt: “You are choosing to do calibration in expert mode. This may take a long time; please be sure you have at least X amount of time to work through this process.”

Thank you


I’m not thinking of a matrix operation for the math. It would be more like an integral equation (yeah, all that calculus I haven’t touched in over 20 years). To me, the error looks more like a continuous gradient, with the sweet spot having an error small enough to be invisible.

If someone can do the math (not it, I’ve been away from it for way too long), it should be possible to have the calibration use a small sample size (something like 8 measurements) to generate the values that approximate this gradient curve.

The error correction could be applied in GC, where it can perform more complex operations than we’d probably want to do in the firmware.

@madgrizzle I agree that we can dial the machine in further. I think the current routine is a little too spartan in collecting data. In my experimentation with the linkages, even the 45-degree linkage had trouble in the top center as far as straight-up accuracy goes. The current routine gave me great accuracy across the bottom half of the machine, though.

The level of distortion I saw was visible in the benchmark test, so maybe the calibration routine could use that program? You can see the mid-vertical measurement (the highlighted one) is way off, and if GC could use that error to help correct the sled position, that would go a long way toward giving us better overall accuracy.

Unfortunately, I don’t know Python well enough to know how difficult that would be to implement. I have started learning, but I’m only up to lists and dictionaries, so I have a long way to go before I’d know what I’m doing with the Ground Control code.

It’s been 25 years since I’ve had to calculate an integral, and it will probably be another 10 years before I need to help the twins with calculus (they’re in first grade… I’ve got time). You may be right regarding the use of a gradient, and curve-fitting algorithms are readily available… so I don’t think the math will be complex. Once the coefficients are determined for the equation (which would be done as part of calibration), it’s relatively simple to calculate the required correction from it (which would be done on the fly in GC or FW).

Still, the accuracy improvement will depend upon how accurately you can make measurements. I’ve always been concerned about how well we can measure, say, the distance between two cuts from one side of the board to the other. That’s why I like the potential of an “optional” high-accuracy calibration that uses a preprinted large sheet, where you could either automate the process (i.e., with a camera) or use a pencil bit in your router to make marks on it and then use a set of calipers to measure the error. I know people can’t print the sheets at home, so they would have to be professionally printed. It may be a pie-in-the-sky dream… but we should never stop dreaming.

I’m just skimming this topic, but could this same theory be accomplished with an 8.5x11 sheet of paper with calibrated/measured dots on it? The user would put it in specific spots: dead center, using tick marks on the paper; the outermost corners, using the edges of the paper to match the factory-cut 4x8 sheet; etc. Then the user would move the sled so the bit was right on top of each dot and push a button to lock in its position. A small bit, like 1/8-inch diameter, would help.

So you would calibrate by picking up on a few dots in different locations on your machine. Just throwing it out there.


Maybe… Can we depend upon the factory-cut edges? Is a 48x96 really 48.00 x 96.00? Also, would doing so at the four corners be enough? Part of my thinking about doing it all over the entire work surface is that you aren’t trying to develop coefficients for a model (as we do today). It’s sort of a throw-your-hands-up-in-the-air approach where you say, “OK, we can’t reasonably improve our model any more, so let’s just finish this by compensating for what we can’t model.” It could be a two-step process where you calibrate using the current method (to get reasonably accurate) and then optionally add a high-accuracy compensation step where you fine-tune with a set of calipers. The current calibration model produces an accuracy that has worked for everything I’ve tried to do so far… but if I want to try to build furniture, maybe I need more accuracy.


Maybe the exact location of the calibration sheet isn’t needed. It’s the spacing of the dots at some position that is really important. The software knows where the sled is, so if you say “in this spot, the measurements were this far off,” the software could account for that. You’d basically be using the sheet as a measuring device (between, say, four dots in a square) and the software would do all the magic. I’m no software guru, so take my wild idea with a grain of salt.


I see what you are saying… I don’t know that it solves the problem, and neither do I know that it doesn’t. I tend to think it won’t, because today we measure from one side of the plywood to the other, and I think we do that for a reason. A calibration guru would be able to answer why.

That’s a very interesting idea!
A non-conductive rectangle with four grounded conductive ‘spots’ and one of the AUX pins connected to some kind of bit in the router would work as you describe, with minimal extra electronics or software (outside the chain sag calculations, of course). This is similar to the “Zero Z-Axis” setup in the present software.
It might work like this:
One would place the sheet somewhere on the work area, drive the router near, say, the top-left ‘spot’, and start the process. The software would find the top-left spot, note the location, then find the other three and calculate chain sag for that part of the work area from those measurements. Then move the test piece to a different spot and repeat.
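The probe cycle described above can be sketched in Python. None of these function names exist in Ground Control; `probe_toward()` stands in for “jog until the AUX pin reads grounded” and is simulated here with made-up contact positions:

```python
# Hypothetical sketch of the four-spot probe cycle.

NOMINAL = {                      # where the model *thinks* the spots are (mm)
    "top_left": (0.0, 0.0), "top_right": (100.0, 0.0),
    "bot_left": (0.0, -100.0), "bot_right": (100.0, -100.0),
}
ACTUAL = {                       # simulated true contact positions (made up)
    "top_left": (0.3, -0.1), "top_right": (100.2, -0.2),
    "bot_left": (0.1, -100.3), "bot_right": (100.4, -100.1),
}

def probe_toward(spot):
    """Simulated probe: returns the position where contact was detected."""
    return ACTUAL[spot]

def probe_cycle():
    """Touch each spot and record its (error_x, error_y) vs. the model."""
    errors = {}
    for spot, (nx, ny) in NOMINAL.items():
        ax, ay = probe_toward(spot)
        errors[spot] = (ax - nx, ay - ny)
    return errors

print(probe_cycle()["top_left"])  # (0.3, -0.1)
```

The per-region errors collected this way are exactly the kind of local data a chain-sag correction could be fit against.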


Can this process work without accurately knowing where the points are with respect to, for example, the center of the workpiece? We can know the distance between the dots with high accuracy, but you won’t know with high accuracy where the dots are on the workpiece. I think this is a problem, but maybe not.


This wouldn’t measure absolute position. It would characterize the chain sag in a region of the workarea. I don’t think chain sag would differ greatly over short distances, with the possible exception that the direction of travel probably matters. As an automated process, it could take repeated measurements from a variety of directions at each area measured. How big or small those regions would need to be would be a matter of experimentation.
Chain sag is affected by a number of variables (how about the change of friction due to accumulating wood dust during a cut?), and it might be unreasonable to dive too deeply into this. Automating the process could help to improve accuracy, though.

I disagree here. Based on testing from @MeticulousMaynard, it looks like the error is localized in particular areas to different extents. We may have individual gradients over portions of the work area, but it looks like there are at least 2-3 different gradients at work (say, top-center is one, the corners may be another, etc.).

One problem we had with the current calibration routine was the effect of frame flex. @bar and @blurfl spent countless hours across their different frame designs, working to see what would work universally. Through this work we arrived at the current routine. Having said that, it looks like we may now be seeing other physical aspects of the components at play. When I get my machine in the February shipment I’ll be able to run my own tests to see what’s going on, and won’t need to bother @bar, @blurfl, and others quite so much!

I think the idea of a camera-based feedback system is awesome, although I have additional questions, such as: would an absolute-position system be used, and if so, how would it be implemented? Camera calibration would also be required, but that wouldn’t be hard.

This is a great idea! I like the minimalism of the additional required hardware, too, and it could make calibration extremely quick!



I am glad you commented on this. I have several ideas related to calibration, and I intend to implement them. However, I have many commitments (kids), and am slow. Strictly regarding the calibration math after data is collected/entered, the process I am implementing could handle as much data as the compute platform (a Raspberry Pi) could contain, limited by processor power, memory, etc. It would not be limited by analytical solutions to equations. The calibration I am envisioning uses methods similar to what Google uses to train ~billion-node artificial neural networks (minus the GPU farms). This is something that can be implemented by one person, and it is very achievable.

Related to the compensation matrix, the implementation is very easy. Imagine a translation mapping (X,Y)=>(X’,Y’). Apply this translation before the existing kinematics calculations. This is exactly what you, @madgrizzle, described. The mapping could be any curve fit, e.g. X’=a1+b1X+c1Y+d1X^2+e1Y^2+f1XY, Y’=a2+b2X+c2Y+d2X^2+e2Y^2+f2XY. This may not be the best implementation, but it’s a good start. Now calculate the inverse kinematics as a function of (X’,Y’) rather than (X,Y). This translation captures simple skews, scales, and offsets on the order of millimeters.
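That mapping is straightforward to transcribe into code. Here is a sketch with made-up coefficient values; in practice they would come from curve-fitting the calibration measurements:

```python
# Translation mapping applied before the existing kinematics:
# (X, Y) -> (X', Y'), each axis a quadratic in X and Y.

def translate(x, y, c):
    """c = (a, b, c2, d, e, f) for a + b*X + c2*Y + d*X^2 + e*Y^2 + f*X*Y."""
    a, b, c2, d, e, f = c
    return a + b * x + c2 * y + d * x * x + e * y * y + f * x * y

# Near-identity coefficients with a tiny offset/skew, for illustration only:
CX = (0.5, 1.001, 0.0002, 0.0, 0.0, 0.0)
CY = (-0.3, 0.0001, 0.999, 0.0, 0.0, 0.0)

def corrected(x, y):
    """The (X', Y') that gets fed into the inverse kinematics."""
    return translate(x, y, CX), translate(x, y, CY)

xp, yp = corrected(100.0, 200.0)  # offsets the commanded point by fractions of a mm
```

With all the squared and cross terms zeroed, as here, this reduces to an affine correction; the extra terms let the fit capture gentle curvature in the error field.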

Related to the pencil-drawing idea, I think it is reasonable. Here is an idea: use a red pen to draw the lines on the workpiece. Flip the workpiece upside down and re-draw the lines with a green pen. By flipping the workpiece, all the bowing shown in the pictures is flipped upside down: if the red lines bow up, the green lines bow down. This should be something that could be precisely resolved in a picture, and it could be used in a calibration.


Err… I cheated. I used OpenCV to do a matrix-warp of an image; I blatantly copied the code for “Perspective Transformation” on this page:

(Just adding a bunch of wrapper stuff to “find dots” [also an OpenCV tutorial: tracking a tennis ball on video], etc., and putting them into Maslow’s Kivy image space.)

How it actually works… well, I don’t have more than a passing understanding, but I don’t think the underlying matrices will work in this app, because (AFAICT; I’m completely new at this and just thought, “hey, if Chase Bank and CamScanner can do this, it’s got to be a known problem. I wonder if OpenCV knows how”) the matrices make a linear stretching transform. If you made a map of “sag A/B vs. position” I think we could find a way to make the matrix math figure it out, but you’d still need to add something to get “delta X/Y vs. commanded X/Y”.
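For reference, the OpenCV “Perspective Transformation” tutorial boils down to mapping points through a 3x3 homography (essentially what `cv2.perspectiveTransform` does to each point). A pure-Python sketch of the point mapping, with an identity matrix as the placeholder, shows why it can’t capture sag by itself: straight lines stay straight under this warp.

```python
def warp_point(H, x, y):
    """Map (x, y) through a 3x3 homography H (a list of 3 rows of 3 numbers)."""
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w   # the divide by w is what makes it perspective, not affine

# Identity homography leaves points unchanged; a real H would come from
# four measured point correspondences (cv2.getPerspectiveTransform).
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(warp_point(I, 3.0, 4.0))  # (3.0, 4.0)
```

Because a homography maps lines to lines, a bowed error field would indeed need an extra correction layer on top, as described above.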

Now… there is something my bit can help with… and it might be interesting to try:
If you remove the router and put a “heavy thin puck” in its place (to minimize parallax while maintaining the correct weight), with a marker indicating where the bit would be, you could probably take a lot of frames, align them to the workspace, and not have to cut anything while figuring out the corrections.

You need to shoot with a real still camera (not video) because at 8 feet wide, to get 1 mm accuracy, you need about 2400 pixels across (so double 1080p; that’s 2K/4K territory for video sensors). On the other hand, a >5 MP still camera is theoretically capable (though whether the lens is sharp enough is unknown, etc.). OpenCV can make up some of the difference with interpolation, but you also lose some to optics correction, parallax, etc. It would be interesting.
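The resolution figure checks out with a quick back-of-the-envelope calculation:

```python
# To resolve 1 mm across an 8-foot-wide board, the sensor needs at least
# one pixel per millimeter across the full width.

MM_PER_INCH = 25.4
width_mm = 8 * 12 * MM_PER_INCH          # 2438.4 mm across the board
pixels_needed = width_mm / 1.0           # ~2438 px, i.e. the "2400 pixels wide" above
print(round(pixels_needed))
```

A 5 MP sensor at a 4:3 aspect ratio is roughly 2592 pixels wide, which is why >5 MP is about the floor for this approach.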
