Optical Calibration Demo and Three Hours Working on a Bug

Awesome, I might give cleanup a shot tonight. Any chance you remember what version (or git sha) of the firmware and groundcontrol you started with when you posted the first commit to your repo? Did you just pull whatever was in maslowcnc/groundcontrol and maslowcnc/firmware at the time you made the repo? That will help me apply your changes on top of the groundcontrol/firmware histories.

I believe firmware was 1.20 and GC was 1.21, and I changed the version number of GC down to 1.20 to eliminate the nag about different firmware/GC versions. Oh, well, firmware might have been 1.19… I’m not sure, but not much has changed in firmware for a while.

Ok I’ve updated madgrizzle/Firmware and madgrizzle/GroundControl to have all your changes/history. I also pulled in the latest versions of the official MaslowCNC ones while I was at it. It seems to work for me but it would be good for you to sanity check it when you get a chance!

If you want to see a nice diff of all your changes since your repo diverged from the official one you can go to https://github.com/MaslowCNC/GroundControl/compare/master...madgrizzle:master and https://github.com/MaslowCNC/firmware/compare/master...madgrizzle:master

Thanks for doing that! That’s a big help.

I’ve noticed that the far corner (lower left) I’ve tested is still having trouble… over 1 mm of error. At 3 inches in, the sled seems firmly placed on the frame (I have no skirt), but maybe it’s just hard to tell.

Got a typo in the link, I think. Is it supposed to be mathworks.com?

@madgrizzle so other than bug fixes what else needs to be done for this to be ready for general use? Does the current firmware you’ve written already do the interpolation? For sections I’ve calibrated can I now try calibrated cuts? What other major tasks are remaining?

Interpolation is in, but extrapolation is not (it’s only needed if the sled moves closer than 3 inches to a side… it just uses the value as if the sled were 3 inches in… so not a huge issue). It would be nice to implement saving the values into EEPROM. That way, you don’t have to go into the optical calibration routine and do a “Save and Send”. I’m also looking at curve fitting as an option, so I’m doing some reading on that.
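For what it’s worth, the clamp-at-the-edges behavior plus bilinear interpolation could look something like this. This is just a minimal Python sketch of the idea, not the actual firmware code; the grid layout, spacing, and all the names here are assumptions:

```python
GRID_SPACING = 3.0   # inches between calibration points (assumed)
EDGE_MARGIN = 3.0    # clamp anything closer than this to a side

def interpolate_error(grid, x, y, width, height):
    """grid[row][col] = measured (dx, dy) error at each calibration point.

    Positions closer than EDGE_MARGIN to a side are clamped, so the
    edge value is reused instead of extrapolating."""
    # Clamp instead of extrapolating near the edges.
    x = min(max(x, EDGE_MARGIN), width - EDGE_MARGIN)
    y = min(max(y, EDGE_MARGIN), height - EDGE_MARGIN)
    # Fractional grid coordinates of the clamped point.
    gx = (x - EDGE_MARGIN) / GRID_SPACING
    gy = (y - EDGE_MARGIN) / GRID_SPACING
    col, row = int(gx), int(gy)
    col = min(col, len(grid[0]) - 2)
    row = min(row, len(grid) - 2)
    tx, ty = gx - col, gy - row

    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    top = lerp(grid[row][col], grid[row][col + 1], tx)
    bot = lerp(grid[row + 1][col], grid[row + 1][col + 1], tx)
    return lerp(top, bot, ty)
```

The clamp means a request at the very edge of the board just reuses the value measured 3 inches in, which matches the “no extrapolation” behavior described above.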

I’ve got a couple tweaks in my development branch to PR (nothing big, just correcting a few display things) and will try that out.

Yes. Sorry for the typo.

Thanks for the high level status! So right now those values are in RAM and go away after restarting the Arduino? I wonder how much EEPROM space we have left. This is a task someone else could pick up if they’re watching this thread! Otherwise potentially something I could do later after I try my optical-center-finding routine.

I asked about this in another thread and we have sufficient room. I made them ints to conserve space (they were originally floats and used up a lot of available memory). @blurfl suggested putting it at 2048 and above. I don’t think it would be too hard to do. There’s already code to wipe the settings that I think works.
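As a rough illustration of the space saving (in Python rather than the Arduino C++, and with hypothetical names): packing each (dx, dy) pair as two 16-bit ints takes 4 bytes per calibration point, versus 8 bytes for a pair of floats:

```python
import struct

EEPROM_BASE = 2048  # suggested start address for the calibration data

def pack_calibration(values):
    """Pack (dx, dy) int pairs as little-endian int16s, halving the
    space a pair of 4-byte floats would need."""
    data = bytearray()
    for dx, dy in values:
        data += struct.pack("<hh", dx, dy)
    return bytes(data)

def unpack_calibration(data):
    """Inverse of pack_calibration: recover the (dx, dy) pairs."""
    pairs = []
    for i in range(0, len(data), 4):
        pairs.append(struct.unpack_from("<hh", data, i))
    return pairs
```

On the Arduino side the equivalent would presumably use `EEPROM.put`/`EEPROM.get` starting at that base address, but that’s for whoever picks up the task.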

As I understand it, the arduino also resets whenever it loses connection to ground control… therefore, the values go away whenever you close ground control.

This code currently only clears the portion needed. You’ll have to provide a routine to service the space you use. Look at Settings.cpp:settingsWipe() and System.cpp beginning line 545.

I wrote up an optical centering routine and have it posted in madgrizzle/GroundControl#2. I haven’t wired it up to any of the math yet; I just wanted to make sure I could find the center first.

It’s pretty noisy / error-prone when I’m rotating the sled (I think because it’s hard not to nudge the sled in one direction or the other when rotating it, and because the sled only rotates 90 deg or so). But when I rotate the camera, it has very consistent results (and looks right based on the movement of the wood grain). This makes me think that if we implement your idea to mount the endoscope where the router bit would go, we could find the router’s center in the image by rotating the router chuck.
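For anyone curious about the geometry, one way to estimate the center of rotation is this (a hedged Python sketch, not necessarily what the posted routine does): a feature tracked across rotations moves on a circle about the axis, so the circumcenter of three observed positions of the same feature estimates the axis in image coordinates:

```python
def rotation_center(p1, p2, p3):
    """Estimate the center of rotation from three observed positions
    of one tracked feature (e.g. a patch of wood grain): the feature
    moves on a circle about the axis, so the circumcenter of the three
    points is the axis in image coordinates."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        raise ValueError("points are (nearly) collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```

This would also explain why a small sled rotation (90 deg or less) is noisy: the three samples sit on a short arc, so small tracking errors move the circumcenter a lot.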

The less math-y option is to just have a way to click to mark the center of the image, since it becomes reasonably obvious from the movement of the wood grain where the center is.

I wired up the optical centering routine and I think it’s an improvement on my results even without any fancy mounting to the router (just using my 3d printed endoscope holder). The routine does seem to use my calculated center just fine (but please double check!), and after calibration, ‘Return to Center’ lines up my optical center with the home square almost perfectly.

I chucked a pencil into my router with a little 3d printed part I made and dropped the Z down until it hit the paper, and it was within 1-2 mm of the center of the square (though a little bit below, likely because of the added weight of the router).


I was curious about what sort of results my pencil test would have yielded last night, so I ran the calibration again without using my optical center calculation. The results at the home position were significantly worse! My second pencil mark wasn’t even inside the square. This was not surprising, since my optical center is pretty off center in the frame. It’s likely due to my janky off-centered endoscope.

I think this has promise. Maybe I’ll try to 3d model a router attachment tomorrow and see if I can get the calibration to land right in the center of the square. Time to go to bed for now though :slight_smile:

Edit: Here’s the pencil collet model if anyone else wants to use it!

I’ll review what you’ve done. It’s quite possible we are talking about different rotations…I always try to be cognizant of the possibility people are talking past me.

I don’t think we can mount the camera where the router bit goes because the cable comes out the end; a mount that allows that cable to slide out would have to be long and would result in the camera being too close to the calibration pattern.

[image]

I think we can build an offset mount… something like this:

[image]

Using your routine, you could “shift” the center (xA, yA) to the center of the camera rotation (based upon your centering routine) so you aren’t aligning to the center of the camera, you align to the center of the router bit. It might require that the camera and mount be perfectly oriented such that up on the camera image is along the radial of the rotation of the camera mount. Without that, my theory is that when you move to a spot and take an image, the software won’t have enough information to tell the difference between the rotation of the sled and the rotation of the camera within the mount… all we have is an image of a square that is offset and rotated.

However, it might be possible to calibrate the rotation of the camera within the mount during the center calibration. This way, you know you have to rotate the image about the center a fixed amount and then rotate that image about the new xA, yA until you get one side perfectly level/plumb. It requires that the camera not move at all within its mount during calibration, so the camera would have to be well secured so that its rotation doesn’t change… maybe a clip to secure the cable:

[image]

I ran the routine and when I did the measurement, it looked good. However, I then tilted the camera inside my mount a fair amount and redid the process. The calibration values changed considerably, and they really shouldn’t have… After I exited the program and came back in, I sent the calibration to the controller and ran the measurement… it was off by over 6 mm in the Y axis.

I think the problem is with this routine. Currently, xA, yA is considered the center of the image, and the rotation that’s performed compensates only for the tilt of the camera. But if xA, yA is the “target” center (i.e., the center where the router bit would be), then this routine compensates for the off-centeredness of the camera, but no longer for the camera’s rotation around its axis. So it’s like we need two rotations, perhaps. First rotate xB, yB by a fixed amount determined during the centering calibration, with xA, yA being the center of the image, and then rotate that xB, yB around the modified xA, yA… does this make sense? If so, we’d need to figure out how to calculate the camera rotation during the centering routine you developed. I think that’s doable.
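The two-rotation idea might look like this in code. This is only a Python sketch with made-up names; where the angles and centers come from (the centering calibration, presumably) is an assumption:

```python
import math

def rotate_about(px, py, cx, cy, theta):
    """Rotate point (px, py) about (cx, cy) by theta radians."""
    dx, dy = px - cx, py - cy
    c, s = math.cos(theta), math.sin(theta)
    return cx + dx * c - dy * s, cy + dx * s + dy * c

def correct_point(xB, yB, image_center, camera_mount_angle,
                  router_center, measured_tilt):
    """Hypothetical two-step correction: first undo the fixed rotation
    of the camera within its mount (about the image center), then undo
    the remaining tilt about the shifted (router-axis) center."""
    x, y = rotate_about(xB, yB, image_center[0], image_center[1],
                        -camera_mount_angle)
    return rotate_about(x, y, router_center[0], router_center[1],
                        -measured_tilt)
```

The sanity check is that applying the two forward rotations (tilt about the router axis, then the camera-in-mount rotation about the image center) and then `correct_point` should give back the original point.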

I don’t think it’s relevant, but I want to make sure you know how the calibration is used in the firmware. First, the coordinates of the motors are adjusted based upon the offset at 15, 7 (i.e., 0, 0 on the board). This changes the machine’s dimensions so that 0,0 really is 0,0… even if there were no other calibration adjustments. Then, for each point you tell the machine to move to, it takes the target x,y and adds the appropriate offset from the calibration matrix, but then subtracts the offset associated with the center (because that’s already been accounted for by the machine’s dimensions). The machine then moves to these modified target coordinates.

Yeah I sized that out too. I could kinda jam my camera in the router but it definitely wasn’t elegant.

I’m going to take a stab at writing this all out for my own understanding. We have three potential rotation axes we care about: 1) the camera axis, 2) the router axis, and 3) the sled axis. Last night I rotated only the camera to find the point in the image that represented the camera axis, and calibrated on that, effectively making the assumption that 1 and 2 are aligned (i.e., that my 3d printed camera holder was perfectly centered in the router mount). The sled axis is not accounted for in this scenario. With the router offset mount you diagrammed, we would fix the camera axis so that it can’t rotate (perhaps with a set screw in the camera mount), then use the optical rotation routine to find the router axis and use that for calibration. We would still not make any affordances for the sled axis.

I think it’s probably ok to ignore the sled axis and focus on the router axis. In my testing I had a hard time finding a repeatable sled-axis point in the image, probably because I can only turn the sled a little bit (and I also suspect my ring is not perfectly round). Assuming the sled is nearly centered and stays more or less level, the interpolation should already handle the translation differences due to differing chain positions at different spots on the work area, right? This is the case without the optical centering routine as well, I think. I struggle to see how we could possibly account for all three axes. It sounds really hard to me. Maybe we can pick the one that most meaningfully improves the measurements and call it good.

I’m struggling to understand why this is necessary. I apologize if I’m being dense :slight_smile: Assuming the camera does not rotate in its mount, won’t the optical center of rotation be the router axis? Maybe I’ll try to draw some diagrams of my own to understand this better.

This is definitely surprising & concerning. I’ll try to repro this evening. After exiting the program and coming back in, did you re-set the optical center, either with the detection routine or by entering it into the text boxes manually? I didn’t get to the point where I was saving it between app opens, so right now you have to do it every time.

Ow my brain :smiley: If we rotate the image around the axis we’re interested in (ideally the router axis), can’t we just find the translation from that point to the center of the square? I’ll chew on this for a while and see if I can make it fit in my head. Maybe I’ll cut out some shapes and pin them to each other and see if I can build an intuition around this.

Thank you! This is helpful. Why do you change the entire machine’s dimensions based on the 15,7 square instead of just storing the full (un-subtracted) calibration values for each square? Is it to make the routine more likely to find all the calibration squares?

Pegboard has nice circles on 1" grids. Might be a good, available-anywhere, not-too-expensive solution.

Problem is that the holes are circles… we need something we can detect the rotation of (which is why I chose squares).

True, but since the circles are only an inch apart, you could use multiple circles to establish orientation.
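For example (a hedged Python sketch with assumed names): the angle of the line joining two adjacent hole centers, folded into the grid’s 90-degree symmetry, recovers the pattern’s tilt, which is the information a single circle can’t give you:

```python
import math

def orientation_from_circles(c1, c2):
    """Estimate pattern rotation from the detected centers of two
    adjacent pegboard holes (which lie on a 1-inch grid): the angle of
    the line joining them, folded into [-45, 45) degrees, is the tilt
    of the grid relative to the camera."""
    angle = math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))
    # The grid has 90-degree symmetry, so fold to the nearest axis.
    return (angle + 45.0) % 90.0 - 45.0
```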

That seems feasible. The software would have to assume the circle closest to center is the correct circle to calibrate to… So as long as the machine is fairly well “estimated”, that should be the case.