I have been working on Holey Triangular Calibration for a while. Because there are still a few missing elements, it is not quite ready for everyone to start using it.
I would like to create a calibration benchmark database so there is some baseline for Holey Calibration. I am thinking it could be just a spreadsheet with everyone’s benchmark results, plus some metadata like process (e.g. Triangular, Holey, or Optical) and attempt (e.g. the first was mediocre vs. the second try was better).
I can review the forums to find where everyone has posted their calibration benchmark results, but if everyone could attach any historical calibration benchmark data to this topic, it would help quite a bit.
At this point, the spreadsheet is a little rough, and there are likely some formatting changes that would be improvements. Currently, each calibration is one line in one of the worksheets. I haven’t put in any statistics calculations yet; that isn’t a deliberate omission, just a reflection of how early this is.
You appear as “Anonymous Bat”. I had to do a Google search to figure out what was going on.
The “Anonymous Bat” shows up as an icon on the top-right. I clicked on it, and now one of the cells is highlighted as “Anonymous Bat”, and I haven’t figured out how to fix it.
OK. I get it now. The cell labeled “Anonymous Bat” is the one you have selected. When you select a different cell, the “Anonymous Bat” moves to that cell.
This was done with a 15 kg sled, and the weight is why I could go to almost 8°.
I will go to 15° with my frame and a new lightweight sled, to allow a comparison with ‘standard’ frame angles.
OK, another edit. It just occurred to me that the frame angle should be recorded as a fixed value if the benchmark is to be analysed.
When you publicly share a Google Sheet, any user currently viewing or editing it other than the owner is anonymized: you’ll see “Anonymous [insert animal name here]” with little animal icons as their profile pictures.
Edit: I added some formatting to the sheet. If you need any help with formulas or formatting, let me know.
I think these are both very good links. I may not have sufficiently done my homework prior to creating this topic.
I think both of the links have the intention of identifying the accuracy/precision of one machine and one calibration. The intent of the Calibration Benchmark Database is to identify the accuracy/precision across machines and calibrations. It could help identify which machine features result in better accuracy. It could provide data to inform the direction for building/calibrating machines in the future.
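To illustrate the kind of cross-machine analysis the database could enable, here is a minimal sketch in Python. It assumes a hypothetical row layout where each calibration is one record with `machine`, `process`, `attempt`, and `max_error_mm` columns; all column names and values below are illustrative, not the actual sheet schema.

```python
# Sketch: summarize benchmark accuracy per calibration process across machines.
# The rows and column names are hypothetical placeholders for the real sheet data.
from statistics import mean, stdev

rows = [
    {"machine": "A", "process": "Holey",      "attempt": 1, "max_error_mm": 1.9},
    {"machine": "A", "process": "Holey",      "attempt": 2, "max_error_mm": 0.8},
    {"machine": "B", "process": "Triangular", "attempt": 1, "max_error_mm": 2.4},
    {"machine": "C", "process": "Holey",      "attempt": 1, "max_error_mm": 1.1},
]

# Group the reported error by calibration process.
by_process = {}
for row in rows:
    by_process.setdefault(row["process"], []).append(row["max_error_mm"])

# Report sample size, mean, and spread for each process.
for process, errors in sorted(by_process.items()):
    spread = stdev(errors) if len(errors) > 1 else 0.0
    print(f"{process}: n={len(errors)}, mean={mean(errors):.2f} mm, stdev={spread:.2f} mm")
```

Something equivalent could live as formulas in the sheet itself; the point is just that one-row-per-calibration with process/attempt metadata makes this kind of grouping straightforward.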