I love this community for its helpfulness, responsiveness, positive tone, and deep knowledge. I want to cautiously offer this idea for consideration, within the context that nothing we do should compromise the quality of the community. The stewards of this community do an excellent job of nurturing it, so perhaps after consideration by the group the idea will get the public death it deserves, with no offense taken on my part.
What if we have an annual accuracy contest?
Or we could create an ongoing running list and make a to-do of it when someone moves up the ranks.
- Measurements taken from the accuracy benchmark. Should there be a repeatability element? Average of 3 runs?
- To be eligible, the build must be documented according to a standard template (in the wiki?), including bits, speeds and feeds (or should this be standardized to focus on the machine rather than technique?)
- Standardized material, readily available wherever Maslows may be found.
- 3 Classes of competition: Stock (<$500), Standard (<$1000 US), and Unlimited?
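On the repeatability question above, averaging several runs could be scored with something as simple as this sketch. The function name and the numbers are my own illustration, not an agreed format:

```python
# Minimal sketch of a "repeatability element" for the contest:
# average the benchmark error over several runs and report the spread.
# The numbers below are made up for illustration only.

def repeatability(run_errors_mm):
    """run_errors_mm: one overall error figure (in mm) per benchmark run."""
    mean = sum(run_errors_mm) / len(run_errors_mm)
    spread = max(run_errors_mm) - min(run_errors_mm)  # best-to-worst variation
    return mean, spread

# Three hypothetical runs on the same machine:
mean_mm, spread_mm = repeatability([0.5, 0.7, 0.6])
print(f"mean error {mean_mm:.2f} mm, spread {spread_mm:.2f} mm")
```

Reporting the spread alongside the mean would keep a lucky single run from masking an inconsistent machine.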
Bragging rights (or maybe sponsors would pony up some goodies). The rank holder as of some annual date could make that year’s trophy with their super accurate Maslow, to be passed on to the next 1st place holder that comes along.
Problems / Concerns / Risks
- Does the competition create negative energy in the community?
- Is a relentless focus on accuracy as opposed to precision or other factors a net negative?
Benefits
- Drives the accuracy of the Maslow up over time, in a well documented way.
- It might be fun?
So that’s what I could not get out of my head at 4:48am…
Being a Top Gear fan, I love the idea.
Although I think a less ambitious start, where a simple table of existing accuracies and basic build details (frame design, top beam size, etc.) is reported, would go a long way toward establishing a baseline of current accuracy. Then, as new ideas and changes are explored, new tests can be performed on the various configurations to help capture whether the changes yielded meaningful results or not.
How do you do an apples-to-apples comparison of DIY machines? I actually like this idea, since it is fun. But for actual data collection and research, as with a census, making claims that one approach works better than another requires an apples-to-apples comparison. That will be a challenge for Maslow CNC due to the variability in frame design. The only way to filter out "noise" like that is to have a lot of sample data, which is going to be difficult to come by.
Also, there are many sources of error, so trying to understand the metrics for each of them, and how each unique machine fares against these known sources of error, is another challenge. Am I thinking about this incorrectly? I definitely don't claim to know everything.
Sorry for coming off as negative.
Also, there is this: The Zipper Tree Challenge... because
I don’t hear negative at all. Just constructive, critical thinking. Keep it coming.
At this point I would call this more “data gathering”. The comparisons - and hopefully constructive discussion - will come when you see your $5k carbon fiber custom Maslow being bested by a newbie with pallet wood. My hope is that it all leads to focus on areas of work (and investment) that contribute to the desired outcome for a given user, which are some combination of accuracy, repeatability, and cost.
Ah, carbon fiber. Last night I was out walking in my neighborhood and ran into an inventor who had a rather odd surfboard-like thing with him. We got chatting. It's an aerofoil, and it was made of custom carbon fiber (!). He has them made in Slovenia. I haven't found his North American site yet, but this is the Slovenian one if you're curious: http://flyingrodeoefoil.com/experience/
This is what the accuracy benchmark was made for: a box in each corner, with measurements of how square each box is (taken with a cheap digital caliper) and how far the boxes are from each other, top, bottom, left, and right (taken with a tape measure).
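The benchmark arithmetic is simple enough to script. Here is a hedged sketch, assuming each corner box's squareness is checked by comparing its two caliper-measured diagonals, and each tape-measured span is compared to its nominal value; all function names and numbers are my own invention, not part of the actual benchmark:

```python
# Hypothetical scoring sketch for the corner-box accuracy benchmark.
# All nominal dimensions and measured values below are illustrative
# examples, not real benchmark data.

def squareness_error(diag1_mm, diag2_mm):
    """A rectangle is square exactly when its two diagonals are equal;
    the absolute difference between caliper readings is the error."""
    return abs(diag1_mm - diag2_mm)

def distance_error(measured_mm, nominal_mm):
    """Error between a tape-measured box-to-box span and the nominal
    distance the machine was asked to cut."""
    return abs(measured_mm - nominal_mm)

def benchmark_score(diagonals, distances):
    """Combine all measurements into one number (lower is better):
    the worst single error across all squareness and distance checks.

    diagonals: (diag1_mm, diag2_mm) pairs, one per corner box.
    distances: (measured_mm, nominal_mm) pairs for the top, bottom,
               left, and right box-to-box spans.
    """
    errors = [squareness_error(a, b) for a, b in diagonals]
    errors += [distance_error(m, n) for m, n in distances]
    return max(errors)

# Made-up example: four corner boxes and four spans.
diagonals = [(141.3, 141.5), (141.4, 141.4), (141.2, 141.6), (141.5, 141.3)]
distances = [(2438.5, 2438.0), (2437.6, 2438.0),
             (1219.4, 1219.0), (1218.8, 1219.0)]
print(f"worst-case error: {benchmark_score(diagonals, distances):.2f} mm")
```

Worst-case error is just one possible way to collapse the measurements; mean error, or separate squareness and distance figures, would work too and might rank machines differently.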