If we ever did what I’m discussing, the “curve fitting” only needs to be done once, and could be done on a computer or on one of the curve fitting web sites; there are a few of them. All the Maslow would need to do is calculate the basic chain payout, then add or subtract the amount calculated by the derived equation, and send that length to the “whatever” that makes the motors turn.
As far as the Goliath is concerned, I just bought one online. I’ll let you know how it works when it gets here.
> If we ever did what I’m discussing, the “curve fitting” only needs to be done once… All the Maslow would need to do is calculate the basic chain payout, then add or subtract the amount calculated by the derived equation, and send that length to the “whatever” that makes the motors turn.
The thing that sends the length is the Arduino; the computer sends ‘go to x y’ commands. The Arduino then figures out where things actually are, what the lengths need to be, and how fast the motors should be turning, and it does this length calculation every 50 ms or so. It would have to have the results of all this curve fitting on hand, and since the errors are not consistent across the entire area, there would need to be a table of “in this area use these correction factors, in that area use these other ones…” plus interpolation based on that table for every calculation.
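Just to make that concrete, a correction table plus interpolation could look something like the sketch below. This is only an illustration: the grid spacing, table layout, and the assumption that coordinates are in mm from one corner of the sheet are mine, and the real calculation would live in the Arduino firmware rather than Python.

```python
import numpy as np

# Correction offsets (mm) measured at calibration grid points; the zeros are
# placeholders for values that would come from an actual calibration run.
# Assumes x, y are in mm measured from one corner of a ~2400 x 1200 mm sheet.
GRID_SPACING = 300.0
dx_table = np.zeros((5, 9))   # rows = y direction, columns = x direction
dy_table = np.zeros((5, 9))

def corrected_target(x, y):
    """Return (x, y) nudged by a bilinearly interpolated correction."""
    gx, gy = x / GRID_SPACING, y / GRID_SPACING
    j = min(int(gx), dx_table.shape[1] - 2)   # lower-left grid node of the cell
    i = min(int(gy), dx_table.shape[0] - 2)
    fx, fy = gx - j, gy - i                   # fractional position inside the cell
    def interp(table):
        return (table[i, j] * (1 - fx) * (1 - fy)
                + table[i, j + 1] * fx * (1 - fy)
                + table[i + 1, j] * (1 - fx) * fy
                + table[i + 1, j + 1] * fx * fy)
    return x + interp(dx_table), y + interp(dy_table)
```

Every one of those 50 ms position calculations would have to do a lookup like this on top of the math it already does.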
Yea, I saw that. I waited even longer for my Shaper Origin. Backing startups, Kickstarters, etc. is always a risk, but I enjoy doing it. I have several great products, and some failures as well. I use the products that do come through, and wish the other ones had.
Well, I’d guess in that case, when the computer sends the goto X,Y, that would be the time to put that X,Y into the derived equation, which could calculate the “offset” X’ and Y’ values, add them to the X, Y, and send that value on to the Arduino as the “modified place to go,” which would actually place the bit at X, Y. I don’t know why tables and interpolations are needed; the equation would take care of all of that. Like I said before, I haven’t programmed in decades, so maybe it’s done differently now.
The Maslow is clearly an “open loop” system, and the best that we can get is to tell it where to go and hope that it gets there. I was simply thinking that instead of trying to calculate every possible thing that could throw it off course, an end-around might be simpler and, net net, take into account all of the things we are currently trying to model, plus all of the other issues we are not calculating, such as the dynamic forces.
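For concreteness, here is roughly what that computer-side “derived equation” could look like: fit a low-order polynomial to the errors measured at a handful of calibration points, then shift each commanded target by the predicted error. The fit order, the sample numbers, and names like `modified_place_to_go` are illustrative assumptions, not anything that exists today, and the reply below explains why correcting only the commanded endpoint isn’t enough.

```python
import numpy as np

def terms(x, y):
    # Terms of a 2nd-order polynomial in x and y (the "derived equation").
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Invented calibration data: commanded positions (mm) and the measured error
# (actual minus commanded) at each one.
cmd = np.array([[0, 0], [600, 0], [1200, 0], [0, 400], [600, 400], [1200, 400]], float)
err = np.array([[0.0, 0.2], [0.3, 0.5], [0.1, 0.9], [-0.2, 0.1], [0.0, 0.4], [0.2, 0.8]])

A = terms(cmd[:, 0], cmd[:, 1])
coeff_x, *_ = np.linalg.lstsq(A, err[:, 0], rcond=None)   # the fit is done once
coeff_y, *_ = np.linalg.lstsq(A, err[:, 1], rcond=None)

def modified_place_to_go(x, y):
    """Shift the commanded point so the bit should land on the requested X, Y."""
    a = terms(np.array([x], float), np.array([y], float))
    return x - float(a @ coeff_x), y - float(a @ coeff_y)
```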
> Well, I’d guess in that case, when the computer sends the goto X,Y, that would be the time to put that X,Y into the derived equation… I don’t know why tables and interpolations are needed; the equation would take care of all of that.
That doesn’t work. Say you are at location x,y and you tell the machine to go to x2,y2. The machine then calculates where it needs to be every 50 ms; if it just tries to go straight without compensating all the intermediate points, you will get a curve, not a straight line.
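A small sketch of that geometry point: if the chain lengths are only computed (and corrected) at the endpoints and simply ramped linearly in between, the sled sags below the straight line it was asked to cut. The motor spacing and coordinate convention below are assumptions picked just to show the shape of the problem.

```python
import numpy as np

D = 3000.0  # assumed distance between the two motor sprockets, mm

def chain_lengths(x, y):
    """Inverse kinematics: left motor at (0, 0), right motor at (D, 0), sled below (y < 0)."""
    return np.hypot(x, y), np.hypot(D - x, y)

def sled_position(l1, l2):
    """Forward kinematics: intersect the two chain-length circles."""
    x = (l1**2 - l2**2 + D**2) / (2 * D)
    y = -np.sqrt(max(l1**2 - x**2, 0.0))
    return x, y

start, end = (500.0, -1500.0), (2500.0, -1500.0)   # a long horizontal move
l1a, l2a = chain_lengths(*start)
l1b, l2b = chain_lengths(*end)

# Ramp the chain lengths linearly (no intermediate compensation) and watch
# how far the sled drifts from the straight line it was asked to cut.
for t in np.linspace(0, 1, 5):
    l1, l2 = l1a + t * (l1b - l1a), l2a + t * (l2b - l2a)
    x, y = sled_position(l1, l2)
    print(f"t={t:.2f}  sled y = {y:8.1f} mm  (requested line stays at -1500.0)")
```

With these made-up dimensions the midpoint of the move comes out well over 100 mm below the requested line, which is exactly the curve being described.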
> The Maslow is clearly an “open loop” system… I was simply thinking that instead of trying to calculate every possible thing that could throw it off course, an end-around might be simpler…
There are multiple levels of control. The computer runs Ground Control, which sends g-code (“go to x y,” or “go to x y relative to my current location”). The g-code interpreter running on the Arduino figures out that to get from where it is to x y, it needs to first go to x1 y1 (somewhere it can get to in 50 ms) and starts the motors moving in that direction. It then checks where it is every 50 ms and adjusts the motor speeds to keep it where it should be along the line that it is cutting.
So it’s the Arduino that would have to do the compensation for all the possible error sources in its every-50-ms position checks.
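Putting those layers together, the per-move logic on the Arduino looks roughly like the loop below. It is written in Python only for readability (the actual firmware is Arduino C/C++), and `get_chain_lengths` / `set_motor_speeds` are hypothetical placeholders for the kinematics and motor control code.

```python
STEP_MS = 50  # the roughly-50 ms control tick described above

def move_to(current, target, feed_mm_per_s, get_chain_lengths, set_motor_speeds):
    """Walk from `current` to `target` along a straight X/Y line, one 50 ms sub-target at a time."""
    cx, cy = current
    tx, ty = target
    dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
    steps = max(1, int(dist / (feed_mm_per_s * STEP_MS / 1000.0)))
    for n in range(1, steps + 1):
        # Intermediate point on the requested line for this tick.
        x = cx + (tx - cx) * n / steps
        y = cy + (ty - cy) * n / steps
        # Any compensation (sag, stretch, a fitted table/equation, ...) has to
        # be applied here, for every intermediate point, not just the endpoint.
        l1, l2 = get_chain_lengths(x, y)
        set_motor_speeds(l1, l2, STEP_MS)
```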
Thanks for that explanation. Now I’m curious what it currently does with all of the “corrections” for sag, stretch, etc. Does it do those calculations each time within the 50 ms?
You mentioned that the position/angle at which the chain feeds off the sprocket needs to be compensated for. What if we put a small sprocket or nylon roller a couple inches in from the sprocket for the chain to ride on? That would create a more precise point to measure/calculate from, possibly eliminating a variable from the calculations.
It does steal a couple mm off the total length, though, say from when the chain is at 45% compared to when it’s at 15%. I made a model in Fusion 360 which showed this.
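For anyone following along, here is a rough geometric sketch of the effect being discussed: the chain leaves the sprocket at a tangent point that moves as the chain angle changes, so the effective length is a wrapped arc plus a tangent segment rather than a straight run from one fixed point. The sprocket radius, distance, and angles below are assumptions for illustration, not numbers from the Fusion 360 model.

```python
import math

R = 10.0  # assumed effective sprocket radius, mm

def effective_chain_length(d, angle_below_horizontal_deg):
    """Chain length from the top of the sprocket to an attachment point a
    distance d (mm) from the sprocket axis, hanging at the given angle."""
    phi = math.radians(angle_below_horizontal_deg)
    straight = math.sqrt(d**2 - R**2)               # tangent segment
    wrap = math.pi / 2 + phi - math.acos(R / d)     # arc from top of sprocket to tangent point
    return R * wrap + straight

d = 2000.0
for angle in (15, 30, 45):
    naive = d  # treating the chain as if it left one fixed point
    print(angle, "deg:", round(effective_chain_length(d, angle) - naive, 2), "mm difference")
```

The difference comes out to a few millimeters and changes by a few more as the chain angle steepens, which is the kind of shift a fixed takeoff roller would remove.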
Yep. Take a look at the thread and let us know what you think. It took a long time for me to figure out how it was done (I didn’t have a drawing)… so I documented it in case I forget it.
I agree. We can’t assume we know what the causes of inaccuracy are, so a system that just gets to the answer, without having to care about why, seems like it would be best. (Though perhaps deeply unsatisfying to an engineer’s mind?)
This is why I keep coming back to the idea of making reference cuts, taking a picture, feeding it to the software (or to another piece of software that generates a calibration file), and going.
I’m not sure you can get the accuracy you want by making a series of cuts around the board and taking a photo of it. Even with high resolution cameras, I just think detecting the cut points really accurately will be difficult. Trying to hit 0.5 mm accuracy in the results is challenging. That’s why I’ve taken to using a printed calibration pattern. I got three ANSI E blueprints printed at Staples for $21 to cover my board… seems cheap enough. The only other thing that’s needed is a cheap USB endoscope and something to get the endoscope perfectly centered in the router body (I’m using a turned block of wood at the moment but need to improve upon it).
A 25 MP camera is theoretically able to have one pixel per 0.5 mm; since you need at least double that resolution, you would need a 100 MP camera. And that assumes a perfect lens (which none are), because you could not tell the difference between lens distortion and cut distortion (unless you have mapped out that particular lens at that particular distance).
What @madgrizzle is doing, looking at a smaller area and grafting those results together, has a better chance, but he’s reinventing the way an optical mouse works (at a slightly larger scale), so I think he is going to end up running into the same issue.
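Back-of-the-envelope version of that pixel math, assuming the camera has to image a field somewhat larger than the 4 × 8 ft sheet (the field-of-view numbers here are my assumption):

```python
# Assumed field of view: the 4 x 8 ft sheet plus some margin.
fov_mm = (3000, 2000)
mm_per_pixel = 0.5     # one pixel per 0.5 mm
nyquist = 2            # need at least two pixels per smallest feature

px_w, px_h = fov_mm[0] / mm_per_pixel, fov_mm[1] / mm_per_pixel
print("one pixel per 0.5 mm:", px_w * px_h / 1e6, "MP")                       # ~24 MP
print("with 2x sampling:", (px_w * nyquist) * (px_h * nyquist) / 1e6, "MP")   # ~96 MP
```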
The challenge in my method is accurately determining the center of the square in the picture. I use a series of filters/edge detection to do it, and it’s not perfect. I’m sure it can be improved upon. I have 0.1 mm per pixel resolution at the distance the camera is at, so I think that’s fine. If I better positioned the frame/top beam and had the center closer to the center, I could lower the camera and gain more resolution… and/or use a better camera. This one is pretty low res.
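This is not the author’s actual pipeline, just a minimal sketch of one common way to get a sub-pixel center estimate for a dark square marker with OpenCV (4.x API): blur, Otsu threshold, take the largest contour, and use its image moments.

```python
import cv2

def square_center(frame_bgr):
    """Estimate the (sub-pixel) centroid of the largest dark blob in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    square = max(contours, key=cv2.contourArea)
    m = cv2.moments(square)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```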
The issue is less the camera resolution (a bigger camera lets you see more area, which is a speed upgrade more than anything else). The bigger issue is what error there is in joining the various pictures together (or rather their results).
If each picture covers 2″ (not counting overlap), there are at least 48 pictures to logically join together across the 8 ft span, and an error margin of just 0.01 mm per join (48 × 0.01 mm ≈ 0.5 mm) adds up to our entire desired 0.5 mm error budget.