Loose belts during calibration

Lee H wrote:

For my vertical-ish M4 setup, I ended up doing the following:

  • Modifying my frame, changing it from 17 degrees to 23 degrees (where I use the M4, the floor results in an effective angle of 22 degrees). This yielded a massive improvement in not having the sled tilt away from the frame while calibrating.
  • Having one modified anchor point that is higher than the others; the uppermost arm (relative to the sled) is attached to this anchor point.
  • Changing the arm layout to (ordered by which is closest to the sled): BL → TR → TL → BR
  • Making the corresponding changes to the Z values in the maslow.yaml to match the new arm layout

The result was a much smoother calibration process with no ‘weird’ behaviours.

I just posted a PR for the “new source of error” thread (the anchors and arms
don’t flex in the Z direction, so the Z distance from the anchors to the arms
is taken up over a shorter distance, and therefore at a steeper angle, than the old math
assumed, resulting in an incorrect belt length being calculated).
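
To make that concrete, here is a minimal sketch of the effect (my own illustrative numbers and a made-up rigidLen, not the PR’s actual math): if the belt can only change Z over part of the anchor-to-arm run, the sloped segment is steeper and the total length comes out slightly different from the naive straight-line value.

```js
// Sketch only: assume the belt stays in-plane for `rigidLen` mm at each end
// (because the anchor and arm don't flex in Z), so the Z offset is taken up
// over a shorter horizontal run and therefore at a steeper angle.

function naiveBeltLength(dx, dy, dz) {
  // Old approach: straight 3D line from anchor to arm attachment.
  return Math.hypot(dx, dy, dz);
}

function rigidEndBeltLength(dx, dy, dz, rigidLen = 25) {
  const horizontal = Math.hypot(dx, dy);     // in-plane distance
  const flexRun = horizontal - 2 * rigidLen; // run over which Z can change
  if (flexRun <= 0) return naiveBeltLength(dx, dy, dz);
  // Two flat end segments plus one steeper sloped segment in the middle.
  return 2 * rigidLen + Math.hypot(flexRun, dz);
}

// Example: 1800 mm in-plane, 100 mm Z offset between anchor and arm.
console.log(naiveBeltLength(1800, 0, 100).toFixed(2));    // ≈ 1802.78
console.log(rigidEndBeltLength(1800, 0, 100).toFixed(2)); // ≈ 1802.85
```

The difference per belt is small here, but it is systematic, which is exactly the kind of error that shows up in calibration.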

David Lang

After my changes above I ran the results through @Ulmair’s work to get this result

I’m gonna call that a win

@bar I also reckon that there should be a few changes to some of the recommendations:

  • For the vertical-ish frame the minimum angle should be 20 degrees - in fact I’d say the goal should be somewhere between 22.5 and 30 degrees - probably needs some testing to find out what the ideal angle is
  • For ‘on-the-ground’ M4s I still think that someone needs to try the arm layout of (starting topmost ‘down’ to the sled) BL → TL → BR → TR and see what impact it has on their calibration
  • For vertical-ish frames I believe the ideal arm layout is different from ‘on-the-floor’ - and that this should be clearly called out.
  • For vertical-ish frames I’d go with the layout of (starting topmost ‘down’ to the sled) BL → TR → TL → BR
  • For vertical-ish frames the principles guiding arm layout are (see the sketch after this list):
    • Opposite corners should be next to each other in the stack of arms, i.e. the pairs are TR<->BL and BR<->TL
    • The ‘highest’ and ‘lowest’ arms in the stack should be separated by a long edge, not a short edge
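
To make those two principles concrete, here is a quick throwaway sketch of my own (not anything from the firmware) that checks a proposed stack against them:

```js
// A stack is listed in order from one end to the other,
// e.g. ['BL', 'TR', 'TL', 'BR'] for the vertical-ish layout above.

const OPPOSITE = { TL: 'BR', BR: 'TL', TR: 'BL', BL: 'TR' };

// The left and right sides (TL-BL, TR-BR) are the short edges of the frame;
// the top and bottom are the long edges.
const SHORT_EDGES = [['TL', 'BL'], ['TR', 'BR']];

function checkLayout(stack) {
  // Principle 1: each arm's opposite corner sits directly next to it in the stack.
  const oppositesAdjacent = stack.every((arm, i) =>
    [stack[i - 1], stack[i + 1]].includes(OPPOSITE[arm])
  );

  // Principle 2: the 'highest' and 'lowest' arms are the two ends of the stack;
  // they must not share a short edge, i.e. they are separated by a long edge.
  const ends = [stack[0], stack[stack.length - 1]];
  const sharesShortEdge = SHORT_EDGES.some(
    (edge) => edge.includes(ends[0]) && edge.includes(ends[1])
  );

  return { oppositesAdjacent, endsSeparatedByLongEdge: !sharesShortEdge };
}

console.log(checkLayout(['BL', 'TR', 'TL', 'BR']));
// { oppositesAdjacent: true, endsSeparatedByLongEdge: true }
console.log(checkLayout(['BL', 'BR', 'TR', 'TL'])); // a layout that breaks both rules
// { oppositesAdjacent: false, endsSeparatedByLongEdge: false }
```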

I think that this is a SUPER interesting idea. Can you explain more about the reasoning behind these?

In particular:

Opposite corners should be next to each other in the stack of arms, i.e. the pairs are TR<->BL and BR<->TL

and

The ‘highest’ and ‘lowest’ arms in the stack should be separated by a long edge, not a short edge

Bar wrote:

In particular:

One thing is to not have the lowest arms on the bottom, so that you have more clearance for the dust collector. We’ve had quite a few reports of people having the lowest belt catch on the dust collector hose/fitting.

and

I think this is also making it less likely to impact the dust collector.

David Lang

Which is why I did my own dust collector design

This layout of the arms is what I’m calling ‘planar’, as per this representation of the ends of the arms viewed from above in this thread:


It doesn’t work for vertical-ish because the height difference between BL and TR results in the sled being lifted off the frame towards the TR anchor point.
However, I think gravity and the ‘on-the-floor’ orientation may be able to sort that out.
With a ‘planar’ arrangement of the arms, the maths and the forces between them should be more uniform, resulting in a better calibration result.

The first bullet point comes from what I discovered trying the ‘planar’ arrangement with my vertical-ish frame: seeing how the sled tipped from a simple ‘apply tension’ told me that the arrangement wasn’t going to work for vertical-ish. So that principle effectively goes fully in the opposite direction from what I was trying to achieve with the ‘planar’ arm arrangement.
The second point really comes from an observation about sled tipping on a vertical-ish frame with the current arm arrangement: the ‘highest’ arm is TL, the ‘lowest’ is BL, and TL ↔ BL is a short edge. For a vertical-ish arrangement there is noticeably more sled tipping on the left side than the right. The ‘planar’ arrangement avoids ‘highest’ to ‘lowest’ being along any edge, but that doesn’t work for vertical-ish. So the next compromise back from that is to ensure that ‘highest’ to ‘lowest’ is along a long edge.

I’ve updated the pull request to merge the changes here into the calibration simulator. Overall it seems like it’s working really well, but one bit of strange behavior I’m seeing is that at the end of convergence the result seems to start to oscillate, which is a bit concerning:

[animated GIF: the calibration result oscillating at the end of convergence]

Edit: This result is with data var measurements = [{bl:1805.88, br:1794.81, tr:1799.43, tl:1799.21},{bl:1742.67, br:1729.06, tr:1867.51, tl:1867.38},{bl:1549.52, br:1929.60, tr:2055.74, tl:1692.36},{bl:1620.86, br:1990.97, tr:1994.95, tl:1614.83},{bl:1698.86, br:2051.95, tr:1937.04, tl:1542.45},{bl:1875.45, br:1863.81, tr:1735.10, tl:1733.83},{bl:2063.84, br:1687.06, tr:1542.02, tl:1935.67},{bl:2002.37, br:1609.47, tr:1613.31, tl:1992.56},{bl:1944.52, br:1536.92, tr:1691.54, tl:2055.14},{bl:1892.13, br:1465.85, tr:1774.07, tl:2127.40},{bl:1683.87, br:1668.33, tr:1943.22, tl:1947.59},{bl:1483.11, br:1874.46, tr:2125.07, tl:1778.54},{bl:1293.29, br:2086.40, tr:2314.98, tl:1625.56},{bl:1369.49, br:2133.37, tr:2253.70, tl:1535.45},{bl:1451.10, br:2185.75, tr:2196.19, tl:1449.29},{bl:1535.14, br:2242.88, tr:2143.58, tl:1367.70},{bl:1626.79, br:2304.56, tr:2095.99, tl:1291.44},{bl:1778.81, br:2115.50, tr:1882.86, tl:1481.08},{bl:1951.07, br:1934.29, tr:1675.00, tl:1681.48},{bl:2132.74, br:1765.35, tr:1474.94, tl:1886.73},{bl:2323.11, br:1612.43, tr:1286.07, tl:2101.44},{bl:2262.45, br:1522.29, tr:1364.73, tl:2145.13},{bl:2205.26, br:1435.94, tr:1446.49, tl:2195.94},{bl:2152.85, br:1354.09, tr:1531.26, tl:2254.54},{bl:2105.64, br:1277.80, tr:1621.77, tl:2315.70},{bl:2064.02, br:1208.86, tr:1711.01, tl:2382.42},{bl:1845.61, br:1410.02, tr:1858.19, tl:2199.83},{bl:1631.47, br:1616.67, tr:2019.26, tl:2025.07},{bl:1423.24, br:1829.38, tr:2196.79, tl:1863.25},{bl:1224.30, br:2045.86, tr:2381.44, tl:1717.86},{bl:1039.61, br:2263.16, tr:2572.74, tl:1593.70},{bl:1120.59, br:2299.57, tr:2514.42, tl:1492.59},{bl:1207.11, br:2343.20, tr:2457.31, tl:1394.06},{bl:1298.07, br:2390.20, tr:2404.60, tl:1298.60},{bl:1393.28, br:2443.73, tr:2356.49, tl:1206.77},{bl:1490.88, br:2500.56, tr:2313.30, tl:1119.70},{bl:1593.60, br:2563.92, tr:2275.30, tl:1038.21},{bl:1717.64, br:2373.39, tr:2054.27, tl:1225.46},{bl:1864.91, br:2188.47, tr:1836.25, tl:1422.45},{bl:2027.54, br:2013.64, tr:1622.64, tl:1628.86},{bl:2204.15, br:1851.88, tr:1415.16, tl:1841.11},{bl:2389.71, br:1706.56, tr:1217.18, tl:2058.23},{bl:2579.98, br:1582.57, tr:1033.90, tl:2274.34},{bl:2522.70, br:1481.42, tr:1115.83, tl:2309.31},{bl:2466.00, br:1382.63, tr:1202.87, tl:2353.65},{bl:2413.65, br:1286.91, tr:1292.60, tl:2401.45},{bl:2365.88, br:1195.00, tr:1385.82, tl:2454.53},{bl:2322.96, br:1107.63, tr:1484.64, tl:2512.13},{bl:2285.22, br:1026.00, tr:1585.05, tl:2574.39},];

Oscillation in gradient descent is usually a result of the learning rate being too high.

I think the 0.9 term in var scalor = -1. * Math.pow(0.9, retryCounter + 1); is acting as the learning rate? Although I could be misreading the code.
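
To illustrate the general point about step size, here is a toy 1-D example (not the simulator’s code): a fixed step that is too large overshoots and oscillates around the minimum, while a step that shrinks each retry, the way that 0.9 factor would, damps the oscillation out.

```js
// Toy example: minimise f(x) = x^2 with gradient descent.
function descend(stepForIteration, steps = 30, x0 = 10) {
  let x = x0;
  for (let i = 0; i < steps; i++) {
    const gradient = 2 * x;              // derivative of x^2
    x -= stepForIteration(i) * gradient; // gradient descent update
  }
  return x;
}

// Fixed step of 1.05 is too large: each update overshoots the minimum,
// so x flips sign and grows (oscillating divergence).
console.log(descend(() => 1.05));

// Same start, but the step decays like 0.9^(i + 1): x still flips sign at
// first, but the shrinking step damps it and x settles near the minimum at 0.
console.log(descend((i) => 1.05 * Math.pow(0.9, i + 1)));
```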


Running 0.85.1 with the same configuration of arm heights as I did above*, here’s the result

It tells me that the M4 fitness is 1.039 - although I haven’t transferred over its suggested values to the maslow.yaml

The original calibration result came in at a fitness of 0.82427

The calibration tweaks from dlang’s new calc code seem to have made a noticeable difference to how calibration progresses.

For example, the transition through the grids seems more ‘in line’, by which I actually mean ‘not as lumpy’:

  • 3x3 - 1.73107

  • 5x5 - 1.3869

  • 7x7 - 0.99358

  • 9x9 - 0.82276

  • Final - 0.82427

  • For vertical-ish frames I’d still go with the layout of (starting topmost ‘down’ to the sled) BL → TR → TL → BR


Actually, that is the technique to overcome a local minimum and find the global one. Therefore, it is not a bug, it is a feature :wink:

You are right. That is the learning rate, aka step size. But the oscillation comes from an additional piece of code. The gradient descent technique finds only local minima. To overcome this, the currently best found result is randomly disturbed and the gradient descent technique is applied again. This is done a couple of times to ensure that the minimum found is really the global minimum.
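
To illustrate the idea, here is a toy 1-D sketch (not the actual calibration code): plain gradient descent settles into whichever minimum it starts near, and the random disturbance plus re-descent is what gives it a chance to hop into a better basin.

```js
// Toy double-well: a local minimum near x ≈ 0.96 and the global one near x ≈ -1.04.
const f = (x) => (x * x - 1) ** 2 + 0.3 * x;
const grad = (x) => 4 * x * (x * x - 1) + 0.3; // derivative of f

function descend(x0, step = 0.05, iters = 200) {
  let x = x0;
  for (let i = 0; i < iters; i++) x -= step * grad(x); // plain gradient descent
  return x;
}

let best = descend(2); // starting at x = 2 this settles into the local minimum (~0.96)

for (let retry = 0; retry < 10; retry++) {
  const disturbed = best + (Math.random() - 0.5) * 4; // randomly disturb the best result
  const candidate = descend(disturbed);               // descend again from there
  if (f(candidate) < f(best)) best = candidate;       // keep it only if it is better
}

console.log(best.toFixed(2), f(best).toFixed(3)); // usually ends near -1.04, the global minimum
```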
