Google's AI solves silver medal-level Math Olympiad problems

Google DeepMind has unveiled two new AI systems that together solved four of six problems from this year's International Mathematical Olympiad (IMO), performing at the level of a silver medalist. According to an official blog post by Google, this is the first time an AI system has reached medal-level performance in the world's most prestigious high school math competition.
The systems, AlphaProof for algebra and number theory and AlphaGeometry 2 for geometry, tackled problems that typically demand exceptional human reasoning. According to Google, AlphaProof solved three problems, including the competition's hardest question, which only five human contestants answered correctly. AlphaGeometry 2 solved one geometry problem in just 19 seconds. Each IMO problem is worth seven points, so the four complete solutions earned 28 of 42 points, exactly the silver medal threshold at this year's IMO, where 58 of 609 competitors achieved gold.
"The fact that the program can come up with a non-obvious construction like this is very impressive, and well beyond what I thought was state of the art," said Timothy Gowers, Fields Medalist and IMO gold medalist, who evaluated the solutions.
According to Google, AlphaProof combines a language model with game-playing AI techniques to generate candidate mathematical proofs and formally verify them. AlphaGeometry 2, meanwhile, uses enhanced symbolic reasoning and was trained on significantly more synthetic data than its predecessor. While impressive, the systems could not solve either of the two combinatorics problems, underscoring their remaining limitations.
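To make "verify" concrete: AlphaProof reportedly works in the Lean proof assistant, whose kernel accepts a proof only if every step is mechanically checkable. The toy theorem below is a minimal illustrative sketch assuming Lean 4 with the Mathlib library; it is not taken from Google's system.

```lean
import Mathlib

-- Toy example of a machine-checkable statement: the sum of two real
-- squares is nonnegative. Lean's kernel accepts this proof term only
-- because each lemma application is valid; an invalid proof is rejected.
theorem sum_of_squares_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```

In a generate-and-verify loop of the kind the blog post describes, a language model proposes candidate proofs like this one, and only proofs the checker accepts count as solutions.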
The achievement comes as AI researchers increasingly use math competitions to benchmark reasoning capabilities. Google DeepMind plans to release more technical details soon while continuing to develop multiple AI approaches for advanced mathematical problem-solving.