
Google DeepMind has introduced AlphaEvolve, a generative AI agent designed to advance algorithms used in mathematics and computing. The system develops new and more complex algorithms for “mathematical analysis, geometry, combinatorics and number theory.”
Academic users can apply for the AlphaEvolve Early Access Program. Google intends to make AlphaEvolve “more broadly available” but has not specified a timeline.
AlphaEvolve is an ‘evolutionary’ coding agent built on large language models
AlphaEvolve leverages Google’s other advanced models: both Gemini Flash and Gemini Pro are integrated. Google says Flash provides breadth, while Pro offers depth. Automated evaluators verify and score the programs AlphaEvolve generates from prompts, and the scored programs are stored. What Google calls an evolutionary algorithm then sorts through those programs and chooses which ones seed future prompts.
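The generate-evaluate-select loop described above can be sketched in a few lines. This is a toy illustration, not Google's system: `mutate` stands in for an LLM proposing a code change, `evaluate` for the automated evaluator that scores a candidate program, and the task (maximize a small arithmetic expression) is an invented placeholder.

```python
import random

def mutate(program: str) -> str:
    """Stand-in for an LLM call that rewrites a candidate program."""
    return program + random.choice(["+1", "-1", "*2"])

def evaluate(program: str) -> float:
    """Stand-in evaluator: score a candidate on the toy task.

    Uses eval() only because the "programs" here are arithmetic
    strings; a real evaluator would compile and benchmark code.
    """
    try:
        return float(eval(program))
    except (SyntaxError, ZeroDivisionError, OverflowError):
        return float("-inf")  # broken candidates lose at selection time

def evolve(seed: str, generations: int = 20, population: int = 8) -> str:
    """Each round: generate candidates, score them, keep the best."""
    best = seed
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(population)] + [best]
        best = max(candidates, key=evaluate)  # selection step
    return best

if __name__ == "__main__":
    winner = evolve("1")
    print(winner, evaluate(winner))
```

Because the previous best program is always kept among the candidates, the score never decreases across generations, which is the essential property of the evolutionary selection step.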
With AlphaEvolve, Google says it “enhanced the efficiency of Google’s data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself.” The agent has developed new, more efficient algorithms for instances of long-standing mathematical challenges, such as the kissing number problem, a geometry puzzle that has remained unsolved in its general form for 300 years.
Google used AlphaEvolve in TPU design and more
Internally, Google has deployed algorithms created by AlphaEvolve in data center scheduling, hardware design, and software.
In data centers, Google applied AlphaEvolve to the Borg cluster manager to improve scheduling and efficiency. The resulting heuristic has been in production for over a year and recovers, on average, 0.7% of Google’s worldwide compute resources.
In hardware, AlphaEvolve proposed a rewrite in the Verilog hardware description language that removed unnecessary bits from a circuit in an upcoming version of Google’s Tensor Processing Unit for AI acceleration.
In software, AlphaEvolve identified a way to cut Gemini’s training time by 1%. The percentage may seem small, but Google points out that training generative AI consumes so much computing that any efficiency gain yields significant savings. Lastly, AlphaEvolve optimized low-level GPU instructions, achieving up to a 32.5% speedup in the FlashAttention kernel implementation used in Transformer-based AI models.