This release adds the merge command, which is the complement to split.
merge takes the output of split, which is a folder containing individual files for each cutting action, calculates a ‘better’ way to order those cutting actions to reduce the travelling distance, and then merges all of those files back into one. Reordering the cuts to minimise travel is a version of the ‘Travelling Salesman’ problem.
For my main test file this reduced the total travelling (G0) distance from just over 20 meters down to just over 15 meters.
It’s not necessarily the ‘most’ optimised path possible, but the objective here is that it should be ‘better’. And yes, you can take the resulting file and run it through the whole process (clean, split, merge) again, and it may yield a ‘better’ result.
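If you’re curious what ‘better but not best’ looks like in practice, here’s a minimal TypeScript sketch of a greedy nearest-neighbour ordering, one of the classic heuristics for this class of problem. To be clear, this is not GCodeClean’s actual algorithm, and the Point / CuttingPath shapes are made up for illustration.

```typescript
// A minimal sketch (not GCodeClean's actual algorithm): repeatedly pick
// whichever remaining cutting path starts closest to where the tool
// currently is, so each G0 travel move is locally as short as possible.
type Point = { x: number; y: number };
type CuttingPath = { start: Point; end: Point; file: string };

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function orderPaths(paths: CuttingPath[], origin: Point): CuttingPath[] {
  const remaining = [...paths];
  const ordered: CuttingPath[] = [];
  let current = origin;
  while (remaining.length > 0) {
    let best = 0;
    for (let i = 1; i < remaining.length; i++) {
      if (distance(current, remaining[i].start) < distance(current, remaining[best].start)) {
        best = i;
      }
    }
    const next = remaining.splice(best, 1)[0];
    ordered.push(next);
    current = next.end; // the next travel move starts wherever this cut finished
  }
  return ordered;
}
```

Greedy heuristics like this are fast but short-sighted, which is why the result is ‘better’ rather than ‘best’, and why a rerun from a different starting order can sometimes improve on it.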
Also changed is the License, from MIT to AGPL. This shouldn’t affect anyone, but please let me know if you have any questions.
The library I use to handle the command line inputs has also changed. Which leads to …
Inject blank lines before significant actions - Also low priority.
File splitting - Done!
More support for other GCode syntaxes - A BIG piece of work this one, parked for now, but I still think this is well worth it - first focus would be the RepRap syntax for 3D printers…
Unit conversion - Change from inches to millimeters - on the back burner
Path optimisation - DONE! - WHAT A MONSTER
Path inversion - swap the start and end of a cut - super interesting problem, but parked for now.
New New Features
GCodeClean is now a single executable. The zip file it comes in also includes two pdb files, if you really wanna try debugging the thing sometime (or you can just delete them), and the tokenDefinitions.json file, which you really, really need - especially if you wanna use --annotate. So yeah, just 2 files that you actually need, that’s it.
Pain Points
#1 is definitely the issue that lots of Windows (or Mac) based users have never used command line / terminal based software before. I’m still gonna look at adding a UI, but that’s now priority #2, I think.
Next Priority
Porting the main library to TypeScript, then maybe it can be included with the new Maslow 4 code.
I’ve been watching my machines make ridiculous travel moves for years, and no matter how loud I yell at them they never improve. There don’t seem to be any affordable gcode optimizers available, so I’ve been eagerly watching your project evolve.
I’m currently working on CNC simulated chip carving; the Amazon truck dropped off a beginner book from a big-name actual chip carver yesterday, and it’s on today’s project list. It’ll be interesting to see what GCodeClean does to Vectric’s VCarve gcode.
Here are a few web control pics from gcode I cleaned up, split and merged. The original gcode was created with Carbide Create v6 build 652. According to GCC my original traveling distance was 12,192 and new traveling distance is 1,165! I am cutting it now.
Good quality cut with no visible issues. Took 33 minutes with a .15" pocket that took 2 passes. Pretty happy with the quickness; this would have been near an hour with the old gcode.
Those numbers are in the units of whatever you used in your GCode. I’m going to guess metric (so millimeters), in which case travelling was reduced from about 12.2m down to about 1.2m.
And that also tells me that Carbide Create has a terrible algorithm for planning this out.
I’m not sure zClamp is working right. I have Carbide Create set up for 5 mm. I use zClamp to set it to 3 mm, and it shows up as .5 in the created GCC file, with both commands listed below.
./GCC clean --filename /home/tim3/Pictures/1-8.nc --minimise medium --zClamp 3
or
./GCC clean --filename /home/tim3/Pictures/1-8.nc --zClamp 3
Also, is there a default zClamp that is applied through clean --minimise, split, or merge? It seems to be defaulting to .5 with --minimise medium (which I think is intended).
I didn’t know that there were GCode optimizers at all. Not that I looked; I was really solving my own problems with what ESTLCam spat out at me.
I’ve just revisited the description of the Travelling salesman problem - Wikipedia to see if I could identify what it was that I came up with, because I’m assuming that I didn’t come up with something new. And … nup, it didn’t help, but it was still good for a quick review.
Some caveats with this, and things that I’ll tackle sometime in the nebulous future, depending on who may need them sorted out.
Only supports GCode files that use a single tool. I’ve got certain features to get around this already in place. And a first fix for it ‘should be’ reasonably simple to implement. But for now, it just says ‘nope’.
Doesn’t care about multiple passes with increasing depth of cut for each pass. Watch out for this one. I did not bother trying to figure out how to handle this. The algorithm does respect the original order of the individual cutting paths, so in most cases it will be fine … but not always.
Sometimes the algorithm will come up with a worse result, this is especially the case where you have a small number of individual cutting paths. I’m not going to try and fix this one.
Thanks @TimS - that was a bug in the CLI code, in the end it would just ignore whatever you typed and give you the default (minimum) for whatever units your GCode was using.
re: multi-tool use. you already have the split/combine logic to split at different segments, can you extend that to make a directory of directories with the tools being a layer above the movement files?
I always create separate gcode for each tool even if it’s the same project. It gives me good stopping points through the project, so personally not having that feature won’t affect me.
I think I did run into a case where the merged code may have cut deeper on the first pass instead of the shallower cut. I’ll have to examine the code to make sure.
Validated. My pass at -1.27 was after my pass at -3.81 for the same path.
-3.81 was line 71 and -1.27 was at line 3951 and the pass at -2.54 was at line 5403.
Luckily I did not break a bit. I’m only going to use it for single-pass gcode, for now.
With regard to multiple tools my idea was something similar to what @dlang suggested, which is to partition all of the individual cutting actions according to whenever a tool change occurs, and then treat each one of those ‘partitions’ as a specific collection of cutting paths to optimise on their own.
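As a sketch of that partitioning idea (the Action shape is hypothetical, and this is not the actual GCodeClean code):

```typescript
// Hypothetical sketch: split a flat list of cutting actions into per-tool
// partitions at every tool change, so each partition can be optimised alone.
type Action = { tool: string; gcode: string };

function partitionByTool(actions: Action[]): Action[][] {
  const partitions: Action[][] = [];
  let current: Action[] = [];
  let lastTool: string | null = null;
  for (const action of actions) {
    if (action.tool !== lastTool && current.length > 0) {
      partitions.push(current); // a tool change closes off the current partition
      current = [];
    }
    current.push(action);
    lastTool = action.tool;
  }
  if (current.length > 0) partitions.push(current);
  return partitions;
}
```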
For the differing depths of cut, I’m thinking of recording the mean and standard deviation of the depth of cut for any given cutting path (ignoring anything zero or above) as part of the file’s name. Then a grouping / clustering algorithm over that should identify any ‘layers’ to the depths of cut, and each layer can be processed in turn, with each layer after the first being optimised starting from the end of the ‘final’ cutting path of the layer above.
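Something like this simple 1-D clustering could identify the layers. The PathDepth record and the 0.5 depth-gap threshold are illustrative assumptions only:

```typescript
// Hypothetical sketch of the 'layers' idea: sort cutting paths from
// shallowest to deepest mean depth, and start a new layer wherever there
// is a big jump in depth between neighbouring paths.
type PathDepth = { file: string; meanDepth: number; stdDev: number };

function clusterIntoLayers(paths: PathDepth[], gap = 0.5): PathDepth[][] {
  // Depths are negative, so descending order puts the shallowest cut first
  // (e.g. -1.27 before -2.54 before -3.81).
  const sorted = [...paths].sort((a, b) => b.meanDepth - a.meanDepth);
  const layers: PathDepth[][] = [];
  let layer: PathDepth[] = [];
  for (const p of sorted) {
    if (layer.length > 0 && layer[layer.length - 1].meanDepth - p.meanDepth > gap) {
      layers.push(layer); // a big jump in depth starts a new layer
      layer = [];
    }
    layer.push(p);
  }
  if (layer.length > 0) layers.push(layer);
  return layers;
}
```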
I think the big question is if the person wants to do depth-first cutting or not.
I think usually, depth first will make the most sense, and will minimize movement (especially when you can alternate directions with each layer), but not always.
I was laser cutting some box joints and I would have much preferred to have the laser cut the entire perimeter and then go back again for a second pass than to cut a couple lines, jump back to the start of the lines and cut them again (was trying for multiple passes to reduce charring due to too much heat in a small area at a time).
re: direction reversal, climb vs conventional cutting may come into play (not if you are cutting slots, but for a finish pass where you only have material on one side it could matter)
I have thought through the various options. At least as far as my head at this time of year will allow…
The very best would be to understand that one cutting path is ‘dependent’ on another. So, the ‘dependent’ cutting path is not ‘selectable’ for inclusion in the list of cutting paths, until the ‘predecessor’ path has been included. This is neither depth-first nor layer-first.
While interpreting the cutting paths to understand that one overlays another would help in determining this, there are issues such as finishing passes etc., like you wrote. And I’m not doing cutting path reversal/inversion yet.
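A rough sketch of that selection rule (hypothetical types, and a trivial ‘take the first eligible candidate’ chooser standing in for a real optimiser):

```typescript
// Hypothetical sketch of dependency-gated selection: a path only becomes
// selectable once its predecessor (if any) has already been emitted.
type DepPath = { id: string; predecessor?: string };

function orderWithDependencies(paths: DepPath[]): DepPath[] {
  const done = new Set<string>();
  const pending = [...paths];
  const ordered: DepPath[] = [];
  while (pending.length > 0) {
    // A real optimiser would pick the nearest eligible path, not the first.
    const idx = pending.findIndex(
      (p) => p.predecessor === undefined || done.has(p.predecessor)
    );
    if (idx < 0) throw new Error('circular dependency between cutting paths');
    const next = pending.splice(idx, 1)[0];
    ordered.push(next);
    done.add(next.id);
  }
  return ordered;
}
```

Because eligibility is checked per path rather than per layer, this is neither depth-first nor layer-first; the optimiser stays free to pick any path whose predecessor is already done.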
I wasn’t impressed with Carbide Create V6’s travel moves either; they seemed to get carried away retracting and plunging with feed rate moves, along with a lot of bopping back and forth. Easy to use and generally competent, but that was one area they screwed up. The previous versions of GCC helped a lot with the ups and downs, but now you’ve tackled the big issue of horizontal moves.