Triangular Linkage Evaluation Criteria and Measurements

@MeticulousMaynard your kit went in the mail today! Depending on what Christmas is doing to the poor folks at the USPS you should get it in just a couple days!

I am excited to see how this shakes down!

1 Like

I programmed the above nc file with a true 1/4" (6.35mm) bit, using tool compensation in the post itself. Therefore, the file assumes the bit is 6.35mm in diameter. Honestly, based on the sheet I created, it shouldn’t matter what the actual bit size is. It compares the differences between the X and Y measurements, the diagonal measurements (which determine how square it is), and the positional error. The only thing that would be affected by the bit size is the positional error. So yes, we should probably have a cut width recorded on the spreadsheet to account for positional error. That way, if someone with a true 6mm bit cuts out the pattern we don’t immediately assume their setup has terrible positional accuracy.
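To make that concrete, here's a rough Python sketch of the idea (the function and argument names are just illustrative, not the actual spreadsheet columns):

```python
def square_criteria(x_meas, y_meas, diag1, diag2):
    """Bit-size-independent criteria for one test square (all mm).

    Comparing X against Y (and diagonal against diagonal) cancels any
    offset from an incorrect bit diameter, because both measurements
    shift by the same amount.
    """
    xy_difference = x_meas - y_meas      # aspect error
    out_of_square = abs(diag1 - diag2)   # equal diagonals = square corners
    return xy_difference, out_of_square


def edge_shift_mm(assumed_bit_mm=6.35, actual_bit_mm=6.0):
    """Positional error IS bit-dependent when measured to a cut edge:
    each wall lands off by half the diameter difference."""
    return (assumed_bit_mm - actual_bit_mm) / 2.0
```

So a true 6mm bit running a program posted for 6.35mm would shift each cut wall by roughly 0.175mm, which is why recording the cut width matters.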

Order of cutting doesn’t matter in the slightest, as @dlang suggested. I generated toolpaths in a manner that (hopefully) should provide the fastest cut pattern. I’m not sure if there is a method for outputting SVG files from Fusion, but I didn’t see one in a cursory search through the features.

This is the current state of the linkage test header:

image

@pillageTHENburn Thanks! I saw the notification this morning. I’ve read the assembly guide a couple of times now, so I’m excited to put it together and test it out.

@All I should have some time this weekend to at the very least test the stock kinematics and possibly also the top mount version. Depending on how fast I can get Logan’s kit together I might be able to test it out next weekend, but we’ll see how my schedule pans out. I look forward to seeing other people’s tests come in as they are able to get them done. I also get that we’re all humans with jobs and families so there is no rush on getting the results back.

4 Likes

Hey all, I’m finally getting some good weather so I am beginning to make headway on this comparative test. I just finished installing the stock (quadrilateral) kinematics on the new sled and just got it dialed in with the new calibration process. First off, the new process is great! I got it dialed in with just 3 rounds. Thanks so much to everyone that worked on it!

Now, the bigger question. There have been a lot of big changes to the firmware and ground control since we last discussed this. How has this affected our test? Is the calibration benchmark test going to be the standard that we want to use? Should we still use the above .nc file I posted?

If it’s the calibration benchmark, that could be good, because it looks like it should run faster than my more meticulous test. I worry it may not give us enough comprehensive data across the entire bed, but I may be overthinking this. If we do use it, I will need to figure out where the g code file went, because ground control is having trouble locating it right now.

If you could do both and see if the calibration test misses anything that your other test showed, it would be great.

I expect that the quadrilateral kinematics is a worst-case test for this as there are so many variables that show up in different ways.

1 Like

On it, I’ll start up the full-sheet one right now because I already have it on the laptop.

Same here. I see it as the “Control” for this experiment. At the very least it should give us concrete numbers as to how much better the triangular kinematics are.

1 Like

I spent much of last night consolidating the control data from the stock kinematics. The first charts here are the data summary. I wrote in the calibration values for the sled at the top to show my work. On the side are the average, range, and median results for each of the criteria. The color-coded charts are the errors I recorded in relation to where each test square is on the bed.

image

And here’s the raw data:

image

As to be expected, the stock kinematics is kinda all over the place. Along the top, there is significant distortion. Across the rest of the bed, the X-Y tolerances aren’t bad, though. The X-Y positional errors are pretty small until you get to the top and sides. It has a lot of trouble cutting square, however. I think because there are two fixed mounting points it is very easy for the sled to “drift” off the intended cut path. I believe that is partly due to the cutting forces, as the squares were consistently off. Around square 15, where there was a 1.7mm error in square, I saw the sled getting pulled a bit by the vacuum hose. I think this caused that large error there. This is another disadvantage of the stock kinematics. If there are any forces pulling on the sled, like the vacuum hose, it cannot adapt as well as the linkages can, and distortion results. So the machine needs a more attentive operator.
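For anyone who wants to reproduce the summary rows (average, range, median) from their own measurements, this is roughly what I'm computing per criterion (a sketch, with my own illustrative names):

```python
import statistics

def summarize(errors_mm):
    """Summary of one criterion across all test squares, matching the
    average / range / median columns in the charts (illustrative only).

    errors_mm: list of the measured error (mm) for that criterion.
    """
    return {
        "average": statistics.mean(errors_mm),
        "range": max(errors_mm) - min(errors_mm),
        "median": statistics.median(errors_mm),
    }
```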

So small parts will probably come out alright, but I would expect to see some distortion on larger parts. This lines up pretty well with our current observations.

My next goal is to run the benchmark test with the stock kinematics. I had a little trouble loading them from the advanced menu yesterday. I got this error message any time I tried to load it from the menu:

[image: BenchmarkError]

I’m running Windows 7 using the portable version of Ground Control (v1.03). I went to github and found the .nc file for the benchmark test, so I copied that to my C:\Post folder where all my G-Code is. I plan to run it like a normal program, because I can’t figure out which directory the file should go in so I can call it from the advanced menu button.

On a side note, I’m not sure that I will score very well with the benchmark test. Based on the program preview in Ground Control, it looks like the test squares are way out in the bad-tolerance areas of my last test. I will still run it, however, as it will help to compare all the data points. I don’t think that’s happening tonight, but I may be able to do it as early as tomorrow night.

I’ve now used up one side of my “testing” CDX; I can get 2 more tests out of the other side, but I may need to buy more plywood to do all the tests. Good thing sheathing is pretty cheap.

3 Likes

You’ve already put in so much time running your 27 square test pattern, I hesitate to suggest that the 5 square test pattern would probably be adequate, (and much faster!) – Also, you could put the results in the form that @bar and @blurfl developed – Total large dimension error/6 and total error from the 5 small squares (100mm) /10, so we can easily compare results – Thanks for the effort you’re putting in to improve Maslow accuracy!
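If I'm reading that formula right, the score works out to something like the sketch below (my own interpretation and argument names; I'm assuming "total error from the 5 small squares" means the ten X and Y measurements, two per square):

```python
def benchmark_score(large_errors_mm, small_square_dims_mm):
    """Two-number benchmark as described above.

    large_errors_mm: errors of the six large dimensions (mm).
    small_square_dims_mm: the ten measured X/Y dimensions of the
    five 100 mm squares (mm).
    """
    large = sum(abs(e) for e in large_errors_mm) / 6.0
    small = sum(abs(d - 100.0) for d in small_square_dims_mm) / 10.0
    return large, small
```

That gives the two comparable numbers, e.g. "3 : 0.66", quoted elsewhere in this thread.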

1 Like

The missing file would make a good issue in the GC repository :wink:, it looks like the Windows packager doesn’t include the ‘gcodeForTesting’ directory where the file lives. FWIW, the file in the repository doesn’t cut the center square.

1 Like

Totally makes sense to me. Don’t worry too much about it, I’m not going to take it personally. If my username wasn’t Meticulous Maynard I probably would have just run the 5 square one :stuck_out_tongue: but I’m somewhat of a perfectionist.

But this was something I was debating yesterday before running the 27 square test. I know that my test is probably overkill. However, I wasn’t sure if the 5 square one would give us enough of an indication of where the accuracy is good and where it falls off. I feel like the longer test can pinpoint areas where accuracy might be an issue. In fact, I developed the test as a way to determine where on my bed I have the best accuracy, where I have acceptable accuracy, and where the accuracy is just straight up off.

Thank you for posting the formula for determining the accuracy index, though! I was going to dig through the forum and Github to find where it was located so I could add it as another “calibration” value in the spreadsheet.

This is a very good point. I need to be better about reporting issues on GitHub. I will continue this discussion in the issues section so as not to divert this thread.

EDIT: Issue 588 opened, we’ll talk more about windows portable there.

1 Like

I think the ‘Calibration Benchmark Test’ file only cuts the outer four squares, but 5 or 4, the comparison should work as it’s an average.

I think this is somewhat an apples to oranges comparison. The calibration test now built in to GC was designed to provide a lightweight quantitative benchmark regarding the overall machine accuracy, allowing us to track our progress as we improved the kinematics and calibration processes. And it serves that purpose very well! But as we loop back around to try to further improve the kinematics, the data from your 27 square test would be extremely valuable in helping us understand the mechanics and physics involved in the machine, and how we can continue to progress.

3 Likes

Not sure where I got it from, but our test includes a 5th small square in the center of the pattern

1 Like

So I have some numbers but I think I messed up the long dimensions. I had another blasted chain jump :face_with_symbols_over_mouth: while the sled was traveling to the last two squares, and I didn’t realize until I sat down just now that I needed the top and bottom long measurements. I need to make an independent top beam for my machine so I can move the motor plane outwards. I’m using the 1st hole on the stock brackets right now to keep the chain from jumping again. I will have to re-calibrate and try the test again tomorrow.

The center long dimension is 1898, and the center short dimension is 896. I did not know until now that I needed to measure to the outsides of the squares as well. Is there a wiki tutorial for taking measurements and calculating the calibration score?

Much better successes today. I’ve determined the calibration benchmark for the system I used in the above test to be 3 : 0.66. Not great by any stretch of the imagination, but I think this setup will always perform poorly with the long dimensions. There is so much distortion that happens with the stock kinematics, and we saw how quickly accuracy drops with them. If you refer to the X position error chart in my earlier post, you’ll see that by 350mm (~13 3/4") off center we lose 1 mm in length.

Now this is most certainly affected by the distance between motors. I have a short frame myself and only get 4’x6’ of usable bed. A longer frame, speculatively, would score better on the benchmark test and my own (tedious) bed accuracy test.

Anyways, here’s the results of my test:

The chain skipped again (and in the same place!) but this time around I was able to catch it and reset without losing my place in the program. I was able to use my painted links at 0,0 to determine where the chain should be, and shifted the links. I think it worked, because the long dimension was about in the same range. The right squares were considerably more off than I would have thought they would be. Even if the numbers were closer to what I expected, the benchmark value wouldn’t be vastly different (0.675 instead of 0.66).

I’m going to edit my version of the program to optimize the order of cuts better for my machine. I think the rapid movement across the top center is what caused the chain skip. I should probably swap my motors to the fronts of the brackets, but I’m not sure if changing the distance between motors will skew the results of the next couple of tests. I would like to try to make everything identical for every test where possible.

Next up is @pillageTHENburn’s 45 degree linkage set. Hoping I can do both the 27 and 4 square tests this weekend. The temperature is supposed to be in the low 50’s on Saturday, and I’d like to take advantage of the weather :wink:

2 Likes

it’s probably a good idea to post a picture of the machine so that we can understand the differences between machines

Great test! Thank you for taking the time to do this! Looking forward to seeing your next set of results.

Based on your measurements here I calculated your error to be 4.0 : 0.83. Although I may be doing it wrong. :hushed:

2 Likes

I did the math wrong. Sorry, this benchmark process is new to me and I’m still figuring it out. If it hasn’t already been done, there should be a wiki page written up with instructions on what dimensions are important and how to calculate the benchmark value.

I just redid the math and got exactly your result. This means that it’s even further off than I had thought in my last post. O.O

Gonna be sooo glad to go back to the triangular kinematics. I’m thinking that they’re going to score much better on these tests.

3 Likes

Now it really feels like I’m making progress! I have thoroughly tested Logan’s 45 Degree linkage system :smiley:

[image: Calibration Values 45 Degree]

I really like the new calibration routine for triangular kinematics. I was easily able to establish quite tight tolerances across the entire bed. Scored 1.58 : 0.5 for my calibration benchmark. Below are the results of the benchmark test:

So I dove right into the more tortuous test. I was quite pleased that the sled stayed stable during the entire program. I didn’t have to babysit the chains and make sure that they didn’t skip. Here’s the results of the test:

One thing that I found interesting is how I lost a lot of X-Y tolerance at the center of the bed. My guess is that I’ve calibrated it for the long dimensions, not the tight tolerances in the middle. I’ll have to spend some time messing around with the simulator to see how I can tighten my tolerances a little.

As far as assembly goes, I thought this kit was a little more involved than the top mount version. With the top mount, you bolt together a handful of components and mount them to the sled. The 45 degree linkage kit required much more prep work: sanding and assembling each of the components took more time. However, the assembly is solid. The parts move smoothly, and there is very little slop in the system.

The other design consideration is that the lower right mounting point really crowds the Z-Axis. It’s not impossible to fit both the standoff and the axis’s brake, but it’s something to be aware of:

All in all, I like the 45 degree system from the testing I’ve done today. The above tables would suggest that it has better accuracy than what I was seeing with the top mount. However, the last time I ran the top mount was before the chain sag calculations and the new calibration system, so the jury’s still out on that one.

Next weekend I’m planning on testing that :grin:

4 Likes

I had my mount the same as you, and when I tried to extend the router bit close to its limit, the router head would get caught up on the edge of the Z-Axis motor mount and the lead-screw thingy would pop out of the router’s tab.

having to wait an entire week :frowning:

Looking forward to the test. Which version of my top mount kit do you have? (The horizontal links are different: 3/4" wide for the first run, 1/2" for the second, and 1/4" for the third.) If you have the first run kits, those have the most slop in them and will generate the worst possible results. I think that with chain sag compensation, they will be comparable with the 45 kit, but it will be very good to find out.

I may just send you a new kit if you have the earlier one, so we can compare them and see how much improvement/degradation there is between the versions.

1 Like