This paper introduces AdaEvolve, a novel framework designed to enhance how Large Language Models (LLMs) solve complex optimization and programming tasks through evolutionary search. Unlike existing methods that rely on rigid, pre-set schedules, the system implements hierarchical adaptivity to manage computational resources and search strategies dynamically. It operates across three levels: local adaptation to adjust exploration intensity, global adaptation to allocate the budget toward promising solution populations, and meta-guidance to generate new tactics when progress stalls. This approach mirrors the efficiency of adaptive gradient methods in continuous optimization but applies the same principle to discrete, zeroth-order problems. Experimental results across 185 benchmarks show that AdaEvolve consistently outperforms standard baselines and human-designed solutions in areas such as combinatorial geometry and systems optimization. By replacing brittle manual tuning with a unified improvement signal, the framework demonstrates a more robust and autonomous path for AI-driven discovery.
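The three-level structure described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy bitstring objective, the island populations, and all function names are hypothetical stand-ins (a real system would call an LLM to propose candidate programs), but the sketch shows how local adaptation (per-population mutation intensity), global adaptation (budget routed to the most promising population), and meta-guidance (a reset tactic when progress stalls) can share one improvement signal.

```python
import random

def fitness(bits):
    # Hypothetical toy objective standing in for a benchmark task:
    # count of ones, with a small deceptive penalty.
    return sum(bits) - 3 * (bits[0] == 0)

def mutate(bits, rate):
    # Flip each bit independently with probability `rate`
    # (a stand-in for an LLM proposing a modified candidate).
    return [b ^ (random.random() < rate) for b in bits]

def adaevolve_sketch(n_bits=32, islands=3, budget=600, seed=0):
    random.seed(seed)
    pops = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(islands)]
    rates = [0.05] * islands                 # local state: per-island exploration intensity
    scores = [fitness(p) for p in pops]
    best, stall = max(scores), 0
    while budget > 0:
        # Global adaptation: spend the next evaluation on the best-scoring island.
        i = max(range(islands), key=lambda k: scores[k])
        child = mutate(pops[i], rates[i])
        budget -= 1
        s = fitness(child)
        if s > scores[i]:                    # unified improvement signal
            pops[i], scores[i] = child, s
            rates[i] = max(rates[i] * 0.9, 0.01)   # local: improving, so exploit
            stall = 0
        else:
            rates[i] = min(rates[i] * 1.05, 0.5)   # local: stuck, so explore harder
            stall += 1
        if stall > 50:
            # Meta-guidance: progress has stalled, so inject a fresh tactic
            # by reinitializing the weakest population.
            j = min(range(islands), key=lambda k: scores[k])
            pops[j] = [random.randint(0, 1) for _ in range(n_bits)]
            scores[j] = fitness(pops[j])
            stall = 0
        best = max(best, max(scores))
    return best
```

In this sketch all three levels react to the same signal (whether a candidate improved), which is the contrast the summary draws with fixed, hand-tuned schedules.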
Information
- Podcast
- Frequency: Daily
- Published: 14 March 2026 at 22:02 UTC
- Duration: 20 min
- Rating: All ages
