Here, the algorithm script acts as a scout: it runs the marathon of searching for possible paths, and if it succeeds, it tells the path animation officer to start the animation.
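The scout/officer handoff could be sketched roughly like this. To be clear, this is a hypothetical sketch and not the demo's actual code: `scout_search`, `animation_officer`, and the 0/1 grid encoding are assumptions of mine, and the search shown is a plain breadth-first walk rather than whatever the demo does.

```python
from collections import deque

def scout_search(grid, start, goal):
    """Breadth-first 'scout': returns a start-to-goal path, or None.
    grid is a 2D list where 0 = walkable and 1 = wall (assumed layout)."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # Walk back through came_from to rebuild the path.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = node
                frontier.append(nxt)
    return None  # no path: nothing to report to the animation officer

def animation_officer(path):
    """Stand-in for the animator: just prints each step of the path."""
    for step in path:
        print("animate", step)

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = scout_search(grid, (0, 0), (2, 0))
if path:  # the scout only reports on success
    animation_officer(path)
```

The key point is the contract, not the search: the scout either hands the officer a full path or stays silent.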
I didn't actually implement the "complete" algorithm; the missing pieces include:
Complete scoring for closed/open nodes. There are times when the "scout" has to choose between open pathways, and it still picks the one "nearest" to the end point even when the chosen node(s) actually lead nowhere.
I "facilitate" it using cardinals shuffle. It sorta works, sometimes. But it can't be controlled/measured. Like, free will.
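The nearest-open-node choice plus the cardinals shuffle might look something like the sketch below. Again, this is my guess at the shape of the idea, not the demo's code: `greedy_walk`, the Manhattan heuristic, and the grid encoding (0 = walkable, 1 = wall) are all assumptions.

```python
import random

def manhattan(a, b):
    """Manhattan distance: the 'nearest to the end point' score."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_walk(grid, start, goal, max_steps=50):
    """Greedy 'scout' step: always move to the unvisited open neighbor
    nearest to the goal. The cardinal order is shuffled each step, so
    ties break at random, which is why runs can't be controlled."""
    pos, visited = start, {start}
    for _ in range(max_steps):
        if pos == goal:
            return True
        cardinals = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        random.shuffle(cardinals)  # the brute-force shuffle
        candidates = []
        for dr, dc in cardinals:
            nxt = (pos[0] + dr, pos[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in visited):
                candidates.append(nxt)
        if not candidates:
            return False  # greedily walked into a dead end
        pos = min(candidates, key=lambda n: manhattan(n, goal))
        visited.add(pos)
    return False
```

On a maze whose nearest-looking corridor is a dead end, a walker like this sometimes succeeds and sometimes fails depending on how the shuffle falls, which matches the "can't be controlled/measured" complaint.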
This half-assed artificial "intelligence" demo doesn't "learn" or anything. It's "just" random.
There should be at least two solid stages before the scout reports anything to the animation officer; that's what the heuristic filter is for. This demo uses "one stage", kinda, plus the brute-force shuffle.
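A single-stage heuristic filter could be as small as the sketch below: prune candidate nodes before the shuffle stage ever sees them. The function name, the Manhattan heuristic, and the `budget` parameter (a step allowance) are all hypothetical; the demo's actual filter may differ.

```python
def heuristic_filter(candidates, goal, budget):
    """One-stage filter sketch: keep only candidate nodes whose
    Manhattan distance to the goal fits within the remaining budget."""
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return [n for n in candidates if manhattan(n, goal) <= budget]

print(heuristic_filter([(0, 1), (3, 3), (1, 0)], goal=(0, 0), budget=2))
# → [(0, 1), (1, 0)]
```

A second stage would then score the survivors (e.g. actual path cost so far) instead of trusting the straight-line estimate alone.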
Modifying the main algorithm I typed would be wasted effort, because it's stupid and full of inconsistencies. I didn't document the flowchart because I'm pretty much lazy. Seen from Pluto, it shares "quite similar basic ideas, maybe" with the references above.
As time goes by, I suppose I'll learn better ways to structure these ideas.