I am kind of liking the AR stickers idea. What if you put a bunch of stickers all over the wood stock, scanned it, flipped the stock 180 degrees, and scanned it again? Does it help if the AR stickers are all precisely the same-dimensioned robot barf? EDIT: it seems the distance calibration lines/patterns are probably still required, but flipping the 4x8 sheet 180 degrees might mean fewer patterns are needed, without disturbing the original pattern, since the vertical and horizontal shift moves the patterns to an area that wasn't covered by calibration before.
A box of plain imperial continuous-feed 9.5x11 inch 20# perforated paper just arrived from Amazon. The pin feed holes are pretty clean; a few here or there have a slight bit of grit remaining, but the circle is plainly visible and still gives you the center easily. I saw one dangling chad that was easily weeded. Repeatability and alignment are very high for a pair of eight-sheet banners laid one atop the other on a long countertop, whether laid exactly sheet-on-sheet, shifted half a sheet, or with only one pin feed ribbon overlapped and the sheets parallel to each other. Also, when I layer a sheet with its pin feed ribbons perpendicular to another sheet beneath, the holes in the intersecting pin feed ribbons align perfectly.
I am nearly done rebuilding my frame (just a minor snag where I got too creative; maybe fixed this weekend). Then I will mount the sheets on my spoil board and report back. If the sheets align well, everything is consistent corner to corner, and the dimensions hold over the large work surface, I will take a look at the sources and see if I can understand the optical recognizer code. I work in Java and have done C/C++, but I've never worked on image-processing code before.
If I can figure out how to change the pattern to rows and columns of small dark circles, and how the compensation logic works, then I will see how far I can get with a local mod for tractor-feed paper calibration. For anyone who works regularly on the source code, I am happy to mail a manila envelope with enough paper sheets to work on the same. The pattern I propose is five overlapped horizontal rows, eight sheets per row. That's 242 holes, spaced every half inch, each 0.15 inch in diameter. Then, perhaps, add hole-aligned vertically hung sheets in a second pass, superimposed exactly over the horizontal rows. If you want more resolution, shift the vertical sheets over by a few columns and scan again. Add whatever is required in the corners, center, etc. to reach the target accuracy over the entire area.
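To make the comparison step concrete, here is a hypothetical helper that generates the nominal hole-center grid for one horizontal row; the compensation logic would presumably compare camera-measured centers against ideal coordinates like these. The landscape orientation, single-ribbon layout, and zero origin are my assumptions, not anything in the current code:

```python
# Hypothetical sketch of the ideal hole grid for one horizontal row of
# eight 9.5x11 sheets, pin feed pitch 0.5 inch, holes 0.15 inch diameter.
SHEET_LEN = 11.0        # inches of pin feed holes per sheet edge (assumed landscape)
PITCH = 0.5             # inches between hole centers
SHEETS_PER_ROW = 8
HOLE_DIAMETER = 0.15    # inches

def nominal_centers(row_y):
    """Ideal (x, y) hole centers along one pin feed ribbon, in inches."""
    n_holes = int(SHEET_LEN * SHEETS_PER_ROW / PITCH)
    return [(i * PITCH, row_y) for i in range(n_holes)]

print(nominal_centers(0.0)[:3])   # [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
```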
Are my posts too long? It takes forever for people to reply |-:
There are only two people (me and @johnboiles) with any experience with the code. The current code works by recognizing a shape (a square) within the camera's view and determining where the sled is in relation to that square. The code currently works only when a single square is in view and it's entirely in view (not cut off). It requires that the Maslow be roughly calibrated, so that when it's told to go to the position where one of the squares is located, it gets close enough that the camera can capture it. A "next-generation" idea I have is to use AR codes, which would allow many squares to be within view, and do a "pose estimation" to determine where the sled is. That would be much more robust than the current system.
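For anyone who wants to see roughly what the pose-estimation idea looks like, here's a minimal sketch using OpenCV's ArUco module (the pre-4.7 opencv-contrib API). None of this is in webcontrol today; the camera intrinsics, marker size, and file name are placeholder assumptions:

```python
import cv2
import numpy as np

# Placeholder intrinsics -- a real setup would come from a one-time
# cv2.calibrateCamera run against a known pattern.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)   # assume negligible lens distortion

MARKER_SIZE = 0.05          # assumed marker edge length, in meters

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("sled_camera_frame.png")   # hypothetical camera frame
corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict,
                                          parameters=params)

if ids is not None:
    # One rotation/translation vector per marker: each marker's pose
    # relative to the camera, and therefore the sled's position
    # relative to the stock once the marker locations are known.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE, camera_matrix, dist_coeffs)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(f"marker {marker_id}: offset from camera {tvec.ravel()} m")
```

Because many markers can be in view at once, the code would no longer depend on one specific square being fully visible.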
Using 'rows and columns of small dark circles' will require an extensive rewrite of the code. The code could conceivably treat the intersections of the rows of circles and the columns of circles as the known registration points, but it would have to be written to do that.
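That said, OpenCV already ships a circle-grid detector (normally used for camera calibration) that might cut the rewrite down. A rough sketch, assuming the pin feed holes read as dark blobs and that a 19x10 patch of holes is in view; both the grid size and the image source are assumptions:

```python
import cv2

# Assumed: (columns, rows) of hole centers visible in one camera frame.
PATTERN_SIZE = (19, 10)

img = cv2.imread("paper_scan.png", cv2.IMREAD_GRAYSCALE)

# findCirclesGrid uses a blob detector that looks for dark circles by
# default, which matches dark holes (or holes over a dark spoil board).
found, centers = cv2.findCirclesGrid(img, PATTERN_SIZE,
                                     flags=cv2.CALIB_CB_SYMMETRIC_GRID)

if found:
    # centers is an Nx1x2 array of sub-pixel hole centers, ordered row
    # by row -- candidate registration points for the compensation logic.
    print(centers.reshape(-1, 2))
```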
The current square-recognition approach is based on this methodology, though it's been greatly modified:
@madgrizzle is the code in Python? I write protocol parsers in that as well; it'd be an easy start for me.
Yes, webcontrol is entirely written in Python.