Calibration Process Brainstorming, Part Deux

A little more thinking on this. My original process was to align the camera with the router bit so that, when the user “calibrates” the camera, it calculates the difference between center (0,0) and the graphic pattern. That way the machine knows just how far off origin the center actually is (I doubt many people’s machines end up precisely at the center point of the 4x8 ply after they calibrate). With the offset camera, this might be accomplished in one of two ways (and I’m not completely sure of either).

First, if you accurately measure exactly where the camera is located with respect to the router bit (i.e., distance and angle, or x,y and camera pan/tilt values), then you could use the picture the camera takes of the calibration graphic pattern to determine the offset. However, I’m not sure this would account for the rotation of the sled. If the sled is rotated 2 degrees CW and that’s not taken into account during the calculations, I suspect there will be an error in determining the calibration offset… I suspect, but I don’t know.
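To put a rough number on that suspicion, here’s a quick sketch (hypothetical mounting numbers) of how far the camera’s assumed position drifts if the sled rotates but the math assumes it didn’t:

```python
import math

# Hypothetical sketch: error introduced when the camera's measured offset
# from the bit (offset_x, offset_y, in the sled's frame, mm) is applied
# without accounting for a sled rotation of theta_deg.

def rotation_error(offset_x, offset_y, theta_deg):
    """Distance (mm) between the assumed and actual camera positions
    when the sled is rotated theta_deg but the math assumes 0 degrees."""
    t = math.radians(theta_deg)
    # actual camera position once the sled's offset vector is rotated
    ax = offset_x * math.cos(t) - offset_y * math.sin(t)
    ay = offset_x * math.sin(t) + offset_y * math.cos(t)
    return math.hypot(ax - offset_x, ay - offset_y)

# Camera mounted 100 mm to the side of the bit, sled rotated 2° CW:
print(round(rotation_error(100.0, 0.0, -2.0), 2))  # → 3.49 (mm of error)
```

So with a camera ~100 mm from the bit, an unnoticed 2° sled rotation would put the calibration off by about 3.5 mm, which seems big enough to matter.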

Second, you could require the user to precisely place the router bit at the center of the calibration graphic pattern. Perhaps use a v-bit so it precisely aligns with a dot in the middle of the center square. The software can then use the captured image to determine where the camera is (distance, angle, pan/tilt). Here, I think a rotation of the sled would still throw things off, because the software might mistake the sled’s rotation for a rotation of the camera itself. I’m not sure we can tell the difference without something measuring the tilt of the sled/router…
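Here’s a sketch of that second approach, with hypothetical names, to show where the ambiguity lives: from the image you can recover the camera’s offset and a rotation angle, but that angle is the combined camera + sled rotation, and nothing in a single overhead image separates the two.

```python
import math

# Hypothetical sketch of the second approach: with the bit parked exactly
# on the pattern's center dot, the dot's position in the image reveals the
# camera's offset from the bit. A second marker gives a rotation angle,
# but it is the *combined* camera + sled rotation -- the ambiguity above.

def camera_pose_from_image(center_px, second_px, image_size_px, mm_per_px):
    """center_px: pixel location of the center dot (where the bit sits).
    second_px: pixel location of a second marker directly to the right of
    center on the pattern. Returns (offset_x_mm, offset_y_mm,
    combined_rotation_deg); sign conventions are assumptions."""
    cx, cy = image_size_px[0] / 2.0, image_size_px[1] / 2.0
    # camera offset from the bit (camera assumed looking straight down)
    off_x = (cx - center_px[0]) * mm_per_px
    off_y = (center_px[1] - cy) * mm_per_px
    # angle of the marker baseline in the image = camera + sled rotation
    ang = math.degrees(math.atan2(center_px[1] - second_px[1],
                                  second_px[0] - center_px[0]))
    return (off_x, off_y, ang)

# Example: center dot seen 200 px left of image center, second marker
# level with it -> camera 100 mm to one side, zero combined rotation.
print(camera_pose_from_image((120, 240), (220, 240), (640, 480), 0.5))
# → (100.0, 0.0, 0.0)
```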

[Image: Drawing2]

Of course, with the offset camera the math becomes significantly more complex, and working out how to do it will take correspondingly more time.
