An AI That Teaches Itself to Improve
We treat a geometric algorithm not as static code, but as a living entity that can be evolved and refined by a Very Large Language Model (VLLM). This creates a powerful, automated feedback loop for continuous self-improvement, moving beyond manual debugging.
A "seed" algorithm generates 3D supervoxels, which are then rigorously tested for geometric properties like watertightness and star-convexity.
If validation fails, an error report and the faulty code are sent to the VLLM. The AI analyzes the failure and generates a new, corrected algorithm.
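Conceptually, the repair step is little more than formatting the failure into a prompt and asking the model for a replacement. The sketch below assumes a hypothetical `llm` callable that maps a prompt string to a completion:

```python
from typing import Callable

REPAIR_TEMPLATE = """The following supervoxel-generation function failed validation.

Error report:
{report}

Current source code:
{source}

Rewrite the function so that it passes the checks. Return only Python code."""

def repair_algorithm(source: str, report: str, llm: Callable[[str], str]) -> str:
    """Ask the language model for a corrected version of a failing algorithm.

    `llm` is any callable that takes a prompt string and returns the model's
    completion, e.g. a thin wrapper around a chat-completion API.
    """
    prompt = REPAIR_TEMPLATE.format(report=report, source=source)
    return llm(prompt)
```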
The new algorithm becomes the next candidate. This "survival of the fittest" cycle repeats, autonomously discovering robust and sophisticated solutions.
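Put together, the loop is a small driver that keeps validating and repairing until a candidate passes or a generation budget runs out. The helper arguments below (`run_candidate`, `validate`, `repair`) are stand-ins for the project's own components:

```python
def evolve(seed_source: str,
           run_candidate,   # executes candidate code, returns (vertices, faces) meshes
           validate,        # returns a pass/fail report dict for one mesh
           repair,          # asks the VLLM for corrected source, given source + report
           max_generations: int = 10) -> str:
    """Evolve the algorithm until every generated supervoxel passes validation."""
    source = seed_source
    for _ in range(max_generations):
        meshes = run_candidate(source)
        reports = [validate(vertices, faces) for vertices, faces in meshes]
        failures = [r for r in reports if not all(r.values())]
        if not failures:
            return source  # this candidate survives
        source = repair(source, str(failures))
    raise RuntimeError("no valid candidate found within the generation budget")
```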
A simple domain-specific language helps the VLLM generate valid geometric logic, bridging the gap between natural language and precise code.
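One way to realize such a DSL is to expose only a small set of whitelisted geometric primitives and interpret the model's program as an ordered list of named operations. The primitives below are illustrative, not the project's actual vocabulary:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

# Whitelisted primitives the model is allowed to compose.  Each one maps a
# boolean occupancy volume to a new boolean occupancy volume.
def dilate(volume, steps=1):
    return binary_dilation(volume, iterations=steps)

def erode(volume, steps=1):
    return binary_erosion(volume, iterations=steps)

PRIMITIVES = {"dilate": dilate, "erode": erode}

def run_program(volume: np.ndarray, program: list) -> np.ndarray:
    """Interpret a DSL program given as an ordered list of (operation, kwargs) steps."""
    for op, kwargs in program:
        if op not in PRIMITIVES:
            raise ValueError(f"unknown operation: {op}")
        volume = PRIMITIVES[op](volume, **kwargs)
    return volume

if __name__ == "__main__":
    seed = np.zeros((16, 16, 16), dtype=bool)
    seed[8, 8, 8] = True
    # A program the VLLM might emit: grow the seed, then tighten the boundary.
    result = run_program(seed, [("dilate", {"steps": 3}), ("erode", {"steps": 1})])
    print(int(result.sum()), "voxels in the supervoxel")
```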
An execution engine runs candidate algorithms in parallel, leveraging Python’s multiprocessing to handle large medical imaging datasets.
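A minimal sketch of that pattern, assuming the volume can be split into independent chunks that worker processes handle separately; the per-chunk "algorithm" here is a trivial placeholder:

```python
import multiprocessing as mp
import numpy as np

def process_chunk(chunk: np.ndarray) -> int:
    """Run the candidate supervoxel algorithm on one sub-volume.

    The body is a stand-in that just counts foreground voxels; in the real
    engine this is where the evolved candidate code would execute.
    """
    return int((chunk > 0.5).sum())

def run_parallel(volume: np.ndarray, n_chunks: int = 8, workers: int = 4):
    chunks = np.array_split(volume, n_chunks, axis=0)
    with mp.Pool(processes=workers) as pool:
        return pool.map(process_chunk, chunks)

if __name__ == "__main__":
    volume = np.random.rand(64, 256, 256).astype(np.float32)
    print(run_parallel(volume))
```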
Dedicated validation modules automatically check these complex geometric properties, providing the structured feedback that drives the evolutionary process.
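A small registry pattern keeps the checks pluggable and turns their results into the structured report fed back to the VLLM; the individual checks below are simplified placeholders:

```python
from typing import Callable, Dict

CHECKS: Dict[str, Callable] = {}

def register_check(name: str):
    """Decorator that adds a named geometric check to the registry."""
    def wrap(fn):
        CHECKS[name] = fn
        return fn
    return wrap

@register_check("watertight")
def check_watertight(mesh) -> bool:
    return bool(getattr(mesh, "is_watertight", False))

@register_check("non_empty")
def check_non_empty(mesh) -> bool:
    return len(mesh.vertices) > 0

def build_report(mesh) -> dict:
    """Run every registered check and collect pass/fail results."""
    return {name: check(mesh) for name, check in CHECKS.items()}
```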
Careful prompt engineering enables the VLLM to understand, debug, and creatively rewrite its own code.
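In practice this means giving the model an explicit role, hard constraints on the output format, and the concrete failure evidence. The message structure below is illustrative rather than the project's actual prompt:

```python
def build_repair_messages(source: str, report: str) -> list:
    """Assemble chat messages that frame the model as its own code reviewer."""
    system = (
        "You are maintaining a 3D supervoxel-generation algorithm. "
        "You may only use the approved geometric primitives. "
        "Respond with a single Python function and no explanatory text."
    )
    user = (
        "The current implementation failed these geometric checks:\n"
        f"{report}\n\n"
        "Source code:\n"
        f"{source}\n\n"
        "Diagnose the failure and return a corrected implementation."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```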