I had a conversation with a robotics friend: he was describing iterative time parameterization, which I knew nothing about, and he knew nothing about gcode interpretation… apparently I have much to learn on that front too. In the Robot Operating System (ROS), they use this technique as an entry-level, simplified version of more complicated path-planning techniques: it effectively chops movement segments into time slices based on acceleration constraints, so sharp curves move slower than broad curves or straight lines… or so the explanation goes. Does anyone reading this know about this, and could you teach me? I’d like to add acceleration to the Maslow firmware. Since I want Maslow to go faster, and the luxury of telling someone else to do it isn’t available, I’m volunteering to learn and make it happen. Please help me if you can.
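To make the "sharp curves move slower" idea concrete, here is a minimal sketch (not ROS or Maslow code; all names and constants are assumptions for illustration) of how a planner can cap speed at each waypoint: estimate the local turn radius from three points, then limit velocity so centripetal acceleration v²/r stays under a maximum.

```python
import math

A_MAX = 50.0   # max acceleration, mm/s^2 (assumed)
V_MAX = 40.0   # max feed rate, mm/s (assumed)

def corner_radius(p0, p1, p2):
    """Approximate the turn radius at waypoint p1 from three points
    using the circumradius of the triangle (p0, p1, p2)."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    cross = ax * by - ay * bx
    if abs(cross) < 1e-9:          # collinear -> straight line
        return float('inf')
    a = math.hypot(ax, ay)
    b = math.hypot(bx, by)
    c = math.hypot(p2[0] - p0[0], p2[1] - p0[1])
    return (a * b * c) / (2.0 * abs(cross))

def corner_speed(p0, p1, p2):
    """Max speed through p1 without exceeding centripetal A_MAX."""
    r = corner_radius(p0, p1, p2)
    if r == float('inf'):
        return V_MAX
    return min(V_MAX, math.sqrt(A_MAX * r))

path = [(0, 0), (10, 0), (10, 1), (20, 1)]   # sharp jogs
for i in range(1, len(path) - 1):
    v = corner_speed(path[i - 1], path[i], path[i + 1])
    print(f"waypoint {path[i]}: limit {v:.1f} mm/s")
```

A real planner then slices each segment into time steps that respect these per-waypoint limits; this sketch only shows the velocity-limit half of the problem.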
I would suggest that you read through the grbl source code where they have a
large section detailing the problem and how they address it.
It may actually be easier to add maslow support functionality to grbl than to
add acceleration to the maslow firmware.
If you take this approach, there are three sets of things to work on in the grbl code:
- make grbl support DC motor/encoder as an option instead of steppers (includes adding the PID loops)
- add the maslow kinematics as an option (grbl already supports 2 kinematics options)
- add the B codes (or equivalent) for maslow/DC motor specific commands
David Lang
- add stepper support for z axis?
Is there a grbl fork you think is most advanced to look at first? I discovered that hobby servos are angular devices that move 180 degrees. Industrial servos are dc motors with encoders… I found these:
any preference on either one? If the encoder code works…
if it’s possible to keep the existing stepper support you just need to be able
to select between stepper and motor/encoder per axis
two options here:
- stick with the main tree
- look at the various 32-bit/ARM trees and see if there’s one that multiple people are working on and have updated recently
the main tree has the advantage that if you can get it to work, you may be able
to use existing Arduinos and motor controllers.
the 32bit option has the advantage that you won’t be limited by the performance
of the board.
and if you get it working on the main tree, I expect that you could get help
porting it to the other boards.
David Lang
thanks @madgrizzle for that link. That looks like what I’d like to get through and beyond. It appears that will work with the hardware we have unless we want to go faster, which would mean motor upgrades and possibly a board upgrade for the H bridge. The documentation on that page is quite well done also.
It looks like August was the last Due board development… or is it done? Is the esp32 board attempting to use this spline methodology for acceleration planning?
I’m working on a new firmware version (still in the early stages) which will be built on the esp32 version of GRBL which will include acceleration planning. I’m still a couple months away from having anything to show, but I absolutely agree that it’s a great idea.
I’m trying to figure out where best to dig in on this and it looks like going from GRBL1 with the Due code to ESP32 would be the most direct route after establishing functionality of the ESP 32. This sidesteps the time parameterization approach and goes directly to the more intense spline fitting version of path planning.
A second look at the gcode my system is running shows it is anything but intuitive in its movements. It would make a ton of sense to optimize the gcode sequence first. Would it make sense to optimize the path in WebControl or on the controller as a speed-optimization approach?
Will it be compatible with the current Maslow?
This would probably give you a bigger speed increase than acceleration planning would, and I would love to see this. When I first started with CNC a couple of years ago, I was able to shave a lot of time off a cut just by optimizing the gcode. I had found a shareware program online and it worked pretty well, but I think it fails if you try to do a tool change… so you have to break each tool into individual files.
There likely are better optimizers out there and if you can find python code for one, that would be awesome.
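For anyone curious what such an optimizer actually does, the core of it is usually just a greedy reordering: pick the next cut whose start point is nearest the current tool position, so the rapid (non-cutting) travel between cuts shrinks. A rough Python sketch, with cuts simplified to (start, end) XY pairs (names and data are illustrative, not from any real optimizer):

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def reorder(cuts, start=(0.0, 0.0)):
    """Greedy nearest-neighbor ordering of cuts to reduce rapid travel."""
    remaining = list(cuts)
    ordered, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda c: dist(pos, c[0]))
        remaining.remove(nxt)
        ordered.append(nxt)
        pos = nxt[1]          # the tool ends at the cut's end point
    return ordered

cuts = [((50, 50), (60, 50)), ((1, 0), (1, 10)), ((2, 10), (2, 0))]
print(reorder(cuts))
# the two cuts near the origin come before the far one at (50, 50)
```

Greedy nearest-neighbor isn’t optimal (it’s a traveling-salesman-style problem), but it’s simple and typically captures most of the savings, which is why it would fit well in something like WebControl rather than the firmware.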
optimizing the g-code should not be done in the firmware (it would be an
interesting option to add to GC/WC).
you want the firmware to do what it’s told, not change it.
acceleration planning will enable higher speeds than the current motors can
manage today, because currently if you try to run at max speed you end up
tripping the ‘not keeping up’ error, since neither the movement nor the PID
loops can go from a dead stop to full speed instantly.
so both are valuable, but if you are looking at the firmware, acceleration
planning is what’s needed.
David Lang
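The "don't go from a dead stop to full speed instantly" fix is, in GRBL-style planners, a trapezoidal velocity profile per move: ramp up at a_max, cruise, ramp down. Here is a hedged sketch of that calculation (not GRBL's actual code, which works in its own units and data structures; this just shows the geometry):

```python
import math

def trapezoid(length, v_entry, v_exit, v_nominal, a_max):
    """Split one move of the given length into (accel, cruise, decel)
    distances, ramping from v_entry to v_nominal to v_exit at a_max."""
    d_acc = (v_nominal**2 - v_entry**2) / (2 * a_max)
    d_dec = (v_nominal**2 - v_exit**2) / (2 * a_max)
    if d_acc + d_dec > length:
        # Segment too short to reach v_nominal: the profile becomes a
        # triangle; find the peak speed where the two ramps intersect.
        v_peak = math.sqrt(a_max * length + (v_entry**2 + v_exit**2) / 2)
        d_acc = (v_peak**2 - v_entry**2) / (2 * a_max)
        d_dec = length - d_acc
        return d_acc, 0.0, d_dec
    return d_acc, length - d_acc - d_dec, d_dec

# 100 mm move from rest to rest at 20 mm/s nominal, 50 mm/s^2 accel:
print(trapezoid(100.0, 0.0, 0.0, 20.0, 50.0))   # (4.0, 92.0, 4.0)
```

GRBL additionally looks ahead across a buffer of queued moves so the exit speed of one segment becomes the entry speed of the next, which is what lets it carry speed through gentle curves while still braking for sharp corners.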
I just downloaded that gcode optimizer this morning and I was ready to firebomb my laptop because I couldn’t get visual studio code to find its own stdio.h path so it would compile!! so I went out to cut rather than mess with that for too long. I need to start a thread to discuss what happened during the cut - it was very strange.
@Orob Yeah, The controller really does not have much processing power. Fusion360 does a lot of planning and optimization of cuts. Sounds like you would be reinventing the wheel.
Sounds like a nice addition to webcontrol.
reinventing isn’t my thing, but the thought occurred to me. Good thing I can bounce my dumb ideas off you guys before wasting time on it. Thanks for the feedback!
I tried that optimizer on my little router, worked well in my test.
Unfortunately, I really wanted to be able to minimize Z movements for my diamond drag bit, which has very slow Z moves (like a Maslow). It seems my CAM software is pretty stupid about combining adjacent cuts without a safe-height retract, which a little intelligent planning would eliminate. I can’t find an optimizer that fixes that.
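That missing optimization is conceptually simple: when one cut ends where the next begins, the retract/plunge pair between them can be dropped and the passes joined. A hypothetical sketch (cuts simplified to (start, end) XY pairs; the tolerance and helper names are assumptions, not any CAM package's API):

```python
TOL = 0.01  # mm, assumed join tolerance

def close(a, b):
    """True if two XY points coincide within TOL."""
    return abs(a[0] - b[0]) <= TOL and abs(a[1] - b[1]) <= TOL

def merge_adjacent(cuts):
    """Collapse chains of cuts that share endpoints into single passes,
    eliminating the safe-height retract between them."""
    if not cuts:
        return []
    merged = [list(cuts[0])]
    for start, end in cuts[1:]:
        if close(merged[-1][1], start):
            merged[-1][1] = end      # continue the previous pass
        else:
            merged.append([start, end])
    return [tuple(c) for c in merged]

cuts = [((0, 0), (10, 0)), ((10, 0), (10, 10)), ((30, 0), (40, 0))]
print(merge_adjacent(cuts))
# first two cuts join into one pass; only one retract remains
```

A real version would also have to respect feed rates and check that the joined cuts are at the same depth, but the endpoint-matching pass above is the heart of it.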