I’ve been playing with a bit of classic Monte Carlo sampling (technically I think it counts as MCMC, but with a binary accept/reject rather than a weighted step), and it seems to be working well in my small experiments.
If we’re in the right ballpark for the answer, just sampling around what we’ve got and keeping any candidate with higher fitness gets me better fitness than I was seeing from the algorithm alone.
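To make that accept rule concrete, here’s a rough sketch (both function names are mine and purely illustrative): the binary version only ever keeps strict improvements, where a full Metropolis step would sometimes accept worse candidates too.

// binary accept: keep the candidate only if it strictly improves the score
function acceptBinary(candidateScore, currentScore) {
    return candidateScore > currentScore;
}

// for contrast, a weighted (Metropolis) step also accepts worse candidates sometimes
function acceptMetropolis(candidateScore, currentScore, temperature) {
    if (candidateScore > currentScore) { return true; }
    return Math.random() < Math.exp((candidateScore - currentScore) / temperature);
}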
We can also sample from a shrinking radius to focus those samples in once we’re close.
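Roughly this shape, as a sketch (startRadius, step and totalSteps are illustrative names, not from the code below):

// shrink the sampling radius linearly towards zero over the run
const radius = startRadius * (1 - step / totalSteps);
candidate.tl.x = current.tl.x + (Math.random() * radius) - radius / 2;
candidate.tl.y = current.tl.y + (Math.random() * radius) - radius / 2;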
That said, we need to be careful we don’t fall into the same trap of getting stuck on a non-optimal minimum. And we need to get fairly close to a minimum to start. What seems to work quite well is to marry a short run of the original algorithm (up to 1000 steps) with some MC, and loop that, keeping the current best but kicking off each new descent a decent distance away.
It’s scrappy experiment code, but this is working quite well for me:
const compute1kButton = document.getElementById('compute-1k-button');
compute1kButton.addEventListener('click', () => {
    let currentBest = JSON.parse(JSON.stringify(initialGuess));
    for (let outer = 0; outer < 10; outer++) {
        clearCanvas();
        // kick off a fresh descent a decent distance (up to ±5 per coordinate) from the best so far
        let guessThisTime = JSON.parse(JSON.stringify(currentBest));
        guessThisTime.tl.x += (Math.random() * 10) - 5;
        guessThisTime.tl.y += (Math.random() * 10) - 5;
        guessThisTime.tr.x += (Math.random() * 10) - 5;
        guessThisTime.tr.y += (Math.random() * 10) - 5;
        guessThisTime.br.x += (Math.random() * 10) - 5;
        guessThisTime = computeLinesFitness(measurements, guessThisTime);
        // short run of the original algorithm to descend towards the nearest minimum
        guessThisTime = findMaxFitness(guessThisTime, measurements);
        printResults(guessThisTime);
        if (1 / guessThisTime.fitness > 1 / currentBest.fitness) {
            console.log("Fitness: " + 1 / guessThisTime.fitness);
            currentBest = JSON.parse(JSON.stringify(guessThisTime));
        }
        // MC refinement: resample around the current guess, keeping any improvement
        for (let i = 0; i < 1000; i++) {
            const iterations = 10;
            for (let focuser = 0; focuser < iterations; focuser++) {
                // the sampling radius ramps up within each pass, so we try small
                // variations on the current guess before medium-sized ones
                let range = 10 * (focuser / iterations);
                let halfRange = range / 2;
                clearCanvas();
                let refineThisTime = JSON.parse(JSON.stringify(guessThisTime));
                // randomise only some of the coordinates each time (see the note below)
                if (Math.random() < 0.3) { refineThisTime.tl.x += (Math.random() * range) - halfRange; }
                if (Math.random() < 0.3) { refineThisTime.tl.y += (Math.random() * range) - halfRange; }
                if (Math.random() < 0.3) { refineThisTime.tr.x += (Math.random() * range) - halfRange; }
                if (Math.random() < 0.3) { refineThisTime.tr.y += (Math.random() * range) - halfRange; }
                if (Math.random() < 0.3) { refineThisTime.br.x += (Math.random() * range) - halfRange; }
                refineThisTime = computeLinesFitness(measurements, refineThisTime);
                // binary accept: keep the candidate only if it improves on the current guess
                if (1 / refineThisTime.fitness > 1 / guessThisTime.fitness) {
                    console.log("Fitness: " + 1 / refineThisTime.fitness);
                    guessThisTime = JSON.parse(JSON.stringify(refineThisTime));
                }
                if (1 / guessThisTime.fitness > 1 / currentBest.fitness) {
                    console.log("Fitness: " + 1 / guessThisTime.fitness);
                    currentBest = JSON.parse(JSON.stringify(guessThisTime));
                }
            }
        }
        printResults(currentBest);
    }
    printResults(currentBest);
});
(There are a few other small modifications elsewhere in the code needed to make this work too.)
One of the things to note is this block:
if (Math.random() < 0.3) { refineThisTime.tl.x += (Math.random() * range) - halfRange; }
if (Math.random() < 0.3) { refineThisTime.tl.y += (Math.random() * range) - halfRange; }
if (Math.random() < 0.3) { refineThisTime.tr.x += (Math.random() * range) - halfRange; }
if (Math.random() < 0.3) { refineThisTime.tr.y += (Math.random() * range) - halfRange; }
if (Math.random() < 0.3) { refineThisTime.br.x += (Math.random() * range) - halfRange; }
It’s important that we’re only potentially randomising some of the values: for resampling, we want to try small variations on the current guess as well as medium-sized ones. The 0.3 threshold is arbitrary but seems reasonable; it’s something that could do with testing against a proper data set.
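If this gets tidied up, that pattern could be pulled out into a helper along these lines (a sketch only; perturbSome and the coords list are names I’ve made up, not from the code above):

function perturbSome(guess, coords, range, probability) {
    // each listed coordinate is nudged independently with the given probability,
    // so many candidates differ from the current guess in only one or two values
    const candidate = JSON.parse(JSON.stringify(guess));
    for (const [point, axis] of coords) {
        if (Math.random() < probability) {
            candidate[point][axis] += (Math.random() * range) - (range / 2);
        }
    }
    return candidate;
}

// e.g. refineThisTime = perturbSome(guessThisTime,
//     [['tl','x'], ['tl','y'], ['tr','x'], ['tr','y'], ['br','x']], range, 0.3);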
When I find some time I’ll tidy it up into some commits other people can try. I suspect it might well work best with a round of weighted mean first, as an early way of getting ballpark measurements to save a bit of time, and with a bit more emphasis on robustness.
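I haven’t written that weighted-mean pass yet, but the shape of it would be something like this (assuming each measurement carries a value and a confidence weight; both field names are hypothetical):

// weighted mean of the measurements, as a cheap ballpark starting point
function weightedMean(measurements) {
    let sum = 0;
    let weightSum = 0;
    for (const m of measurements) {
        sum += m.value * m.weight;
        weightSum += m.weight;
    }
    return sum / weightSum;
}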