Calibration Process Brainstorming, Part Deux

And priced like one of my engineering school text books… geez… i’ll look for alternatives.

I think there is a *.pdf somewhere on the internet that can be accessed for free.

well, one thing you could try is to put a computer mouse where the router goes and see what you can get

a modern optical mouse is a camera pointed down that figures out the movement based on the image shifting.

It doesn’t need any special registration pattern, is cheap, and as a result, will be a good test of your idea.

3 Likes

That’s a very interesting idea… hmm…


There is a downloadable PDF, easy to find with giggle. No idea if it’s a legal copy…

Fifty years isn't particularly old for this type of reference. Scrapers are still scraping today with the same techniques. YouTube makes a poor reference manual.

I've been reading a book on Babcock and Wilcox steam boilers from the early 1900s (when the electric power generation industry was rapidly expanding). While it may not be particularly relevant to gigawatt nuclear power plants, it's got some useful info about backyard wood boilers.

2 Likes

I was just in a meeting with them and GE (formerly Alstom, formerly Combustion Engineering) and a few others on Friday. :slight_smile:

Work stuff…

2 Likes

I wonder about drawing a grid via chalk line; I believe chalk is available in blue, red, and orange.

https://www.homedepot.com/p/Milwaukee-100-ft-Bold-Line-Chalk-Reel-Kit-with-Red-Chalk-48-22-3986/207005252

Thank you

1 Like

I’m looking at the possibility of using Augmented Reality markers and OpenCV to accomplish this as, though it would require a custom print, it looks like it will make the coding easier.

[Image: single-marker detection example from the OpenCV ArUco tutorial]

The OpenCV detectMarkers routine returns the four corners of each marker, so with that (and knowing the physical size of the marker), along with the assumption that the center of the image is where the router bit would be, I should be able to calculate the error.
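To illustrate the idea, here is a minimal sketch in plain Python (no OpenCV needed) of turning the four corner points that detectMarkers would return for one marker into a physical offset from the image center. The marker size, camera resolution, and corner ordering are all assumptions for illustration:

```python
# Sketch (assumptions, not tested on hardware): given the four corner pixel
# coordinates of one detected marker, estimate how far the marker center is
# from the image center in physical units.

MARKER_SIZE_MM = 50.0          # assumed printed marker size
IMAGE_W, IMAGE_H = 1920, 1080  # assumed camera resolution

def marker_offset_mm(corners):
    """corners: four (x, y) pixel tuples, clockwise from top-left."""
    # Marker center = average of the four corners.
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # Rough mm-per-pixel scale from the top edge length.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    edge_px = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    mm_per_px = MARKER_SIZE_MM / edge_px
    # Offset of the marker center from the image center,
    # assuming the image center is where the router bit would be.
    dx = (cx - IMAGE_W / 2.0) * mm_per_px
    dy = (cy - IMAGE_H / 2.0) * mm_per_px
    return dx, dy
```

A real version would also correct for lens distortion before doing this arithmetic, which is what the registration pattern and camera calibration are for.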

5 Likes

I think something like this as a printout would work. The registration pattern in the middle is used to calibrate the camera. Then, depending upon the level of accuracy needed, there are two different marker patterns: one set of markers is placed every 3 inches and a different set every 6 inches. The 3-inch spacing results in 464 markers while the 6-inch spacing results in 105. Since the center tends to be more accurate than the edges, a hybrid model can be used where 3-inch spacing is used along the top/sides/corners and 6-inch or less is used near the middle.

The picture has been scaled down since it's a 9217×4609 raw image…

This is the middle area at full resolution…

3 Likes

FWIW… (I like the printed pattern better, but this is cheap), http://amzn.to/2HBU8Ij
is 1" dots, 1.25" apart (according to the mfr). Doesn’t look it to me though…

I too like the cheaper option, but if I'm going to try to accomplish this, I need to make it as easy as possible. It looks like the ArUco library in OpenCV will greatly simplify what's needed, and the library requires the use of registration markers. As a proof of concept, I'll probably just print out a few 11-inch × 17-inch sheets, tape them to the spoilboard, and see if I can get it to calculate an error. If that works, then I'll either get a vinyl print or find a local print shop that can plot onto Arch E sized paper (I would need three sheets to cover the spoilboard) to continue work on it.

Perhaps as an evolution, the AR/registration markers can be substituted with some other pattern matching technique. I’m just afraid it will get too complicated too soon if I try to do that now… if you follow. None of what I’m trying to accomplish is something with which I have experience.

2 Likes

And now I am banging my head because I can't get the ArUco library's Python wrappers installed on Windows. Sigh.

Doh! That's why I like generic blobs, but generic blobs have the distinct disadvantage that they're all alike, so if you think you're at 2m,3m but you're actually at 2m,2m, you'd have no idea.

OTOH, if you can pick the center point at 0,0 manually, or if you can use a colored dot, you can use your 0,0 image to pick the router's position by having it drill a hole at 0,0 (keep the weight the same when you've removed the router and installed the camera).

1 Like

I was designing this calibration process as an augmentation to the current process, such that you are already pretty close… you won't be at 2m,2m when you think you are at 2m,3m. If I can't get the ArUco routines running, then I might have to go back to pattern matching using just OpenCV… so polka dots might work.

2 Likes

Ubuntu on virtual box maybe?

Thank you

Yeah, but I'm trying to make it compatible with Windows (most people seem to use Windows… like me).

2 Likes

There is a disproportionate divide between Windows and open-source projects. I feel ya; I support it all.

Thank you

There are other Python/OpenCV methodologies for augmented reality… I might just forgo using ArUco… it just looked really easy.

Edit: Well, polka dots it is. It looks like it might be relatively easy to determine the distance from the center of a circle to the center of a camera image.

  1. Process would be to drill a very small hole at 0,0 with your router
  2. Install a camera in place of the router body (into the router housing) and adjust the position of the camera so the center of the image is dead center with the hole.
  3. Move to point 1 and take image
  4. Calculate distance from center of polka dot to center of image.
  5. Have a beverage while waiting for all test points.
  6. Save error array
  7. a) When running g-code, modify the coordinates on the fly as they are sent to the controller, or
    b) Send the error array to the controller to incorporate into the kinematics calculation.
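The measurement in step 4 and the correction in step 7a might be sketched like this. Everything here is an assumption for illustration (grid spacing, image size, names), and real code would handle points outside the measured grid:

```python
# Sketch of steps 4 and 7a: turn a detected dot center into an error sample,
# then correct g-code coordinates by bilinearly interpolating the error grid.
# Grid spacing and image size are assumed values.

IMAGE_W, IMAGE_H = 1920, 1080
GRID_SPACING_MM = 76.2  # 3-inch test-point spacing (assumed)

def dot_error_px(dot_center):
    """Step 4: offset of the dot center from the image center, in pixels."""
    return dot_center[0] - IMAGE_W / 2.0, dot_center[1] - IMAGE_H / 2.0

def corrected_xy(x, y, err):
    """Step 7a: shift a g-code coordinate by the interpolated error.

    err[(i, j)] = (ex, ey) error measured at grid point
    (i * GRID_SPACING_MM, j * GRID_SPACING_MM).
    """
    gx, gy = x / GRID_SPACING_MM, y / GRID_SPACING_MM
    i, j = int(gx), int(gy)
    fx, fy = gx - i, gy - j
    # Bilinear blend of the four surrounding error samples.
    ex = ey = 0.0
    for (di, dj), w in [((0, 0), (1 - fx) * (1 - fy)),
                        ((1, 0), fx * (1 - fy)),
                        ((0, 1), (1 - fx) * fy),
                        ((1, 1), fx * fy)]:
        e = err.get((i + di, j + dj), (0.0, 0.0))
        ex += w * e[0]
        ey += w * e[1]
    return x - ex, y - ey
```

The same lookup could live either in the sender (option 7a) or in the controller firmware (option 7b); the math is identical either way.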

Edit: Well, maybe squares and not polka dots. If the sled rotates at all, you wouldn't be able to tell that by just looking at an image of a circle.
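A rough sketch of why a square helps: the angle of one edge of the detected square gives the sled rotation directly, which a circle's outline cannot show. The corner ordering is an assumption, and note that positive angles read as clockwise on screen because image y points down:

```python
# Sketch: a square reveals rotation where a circle cannot.
import math

def square_rotation_deg(corners):
    """corners: four (x, y) pixel tuples, clockwise from top-left.

    Returns the rotation of the square's top edge relative to the
    image x-axis, in degrees.
    """
    (x0, y0), (x1, y1) = corners[0], corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```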

Just saying Thank you for doing this and the interesting reads. :beers:
This might belong in the ‘no judgement’, I should keep my keyboard quiet if I don’t understand.
But… is the marker/recognition thing something along the lines of what bCNC does with g-code orientation?

2 Likes

I'm not really sure what that does, so it would be hard for me to say. The marker/recognition thing is intended to remove manual operation. This whole thing could be done if you manually clicked on the center of a marker with your mouse… but doing that 100+ times would get tedious. It looks like what you posted is a manual method of aligning an image to g-code?

1 Like