# FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI
## Abstract
We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics, from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper-end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
Teams interested in evaluating models on the benchmark should reach out to math_evals@epochai.org. We thank Terence Tao, Timothy Gowers, and Richard Borcherds for their insightful interviews, and we thank Terence Tao for contributing several problems to this benchmark. We also thank Nat Friedman, Fedor Petrov, Feiyang Lin, Eli Lifland, Marc Pickett, and the Constellation team. We gratefully acknowledge OpenAI for their support in creating the benchmark.
## 1 Introduction
Recent AI systems have demonstrated remarkable proficiency in tackling challenging mathematical tasks, from achieving olympiad-level performance in geometry [trinh2024solving] to improving upon existing research results in combinatorics [romera2024mathematical]. However, existing benchmarks face some limitations:
Saturation of existing benchmarks
Current standard mathematics benchmarks such as the MATH dataset [hendrycks2021measuring] and GSM8K [cobbe2021training] primarily assess competency at the high-school and early undergraduate level. As state-of-the-art models achieve near-perfect performance on these benchmarks, we lack rigorous ways to evaluate their capabilities in advanced mathematical domains that require deeper theoretical understanding, creative insight, and specialized expertise. Furthermore, to assess AI's potential contributions to mathematics research, we require benchmarks that better reflect the challenges faced by working mathematicians.
Benchmark contamination in training data
A significant challenge in evaluating large language models (LLMs) is data contamination: the inadvertent inclusion of benchmark problems in training data. This issue leads to artificially inflated performance metrics that mask models' true reasoning capabilities [deng2023investigating, xu2024benchmark]. While competitions like the International Mathematics Olympiad (IMO) or American Invitational Mathematics Examination (AIME) offer fresh test problems created after model training, making them immune to contamination, these sources provide only a small number of problems and often require substantial manual effort to grade.
Sample problem 1: Testing Artin's primitive root conjecture
Definitions. For a positive integer $n$, let $v_{p}(n)$ denote the largest integer $v$ such that $p^{v}\mid n$. For $p$ a prime and $a\not\equiv 0\pmod{p}$, we let $\textup{ord}_{p}(a)$ denote the smallest positive integer $o$ such that $a^{o}\equiv 1\pmod{p}$. For $x>0$, we let
$$\textup{ord}_{p,x}(a)=\prod_{\begin{subarray}{c}q\leq x\\ q\text{ prime}\end{subarray}}q^{v_{q}(\textup{ord}_{p}(a))}\prod_{\begin{subarray}{c}q>x\\ q\text{ prime}\end{subarray}}q^{v_{q}(p-1)}.$$

Problem. Let $S_{x}$ denote the set of primes $p$ for which $\textup{ord}_{p,x}(2)>\textup{ord}_{p,x}(3)$, and let $d_{x}$ denote the density $d_{x}=\frac{|S_{x}|}{|\{p\leq x:p\text{ is prime}\}|}$ of $S_{x}$ in the primes. Let $d_{\infty}=\lim_{x\to\infty}d_{x}$. Compute $\lfloor 10^{6}d_{\infty}\rfloor$.

Answer: 367707

MSC classification: 11 Number theory
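The density $d_{x}$ can be estimated empirically for small cutoffs. The sketch below is an illustration only (it is not part of the benchmark tooling): it uses SymPy to compute $\textup{ord}_{p,x}$ directly from the definitions above. Convergence to $d_{\infty}$ is far too slow for such sampling to recover the answer, which is precisely what makes the problem hard.

```python
from sympy import primerange, n_order, factorint

def ord_px(a, p, x):
    """ord_{p,x}(a): primes q <= x contribute their share of ord_p(a),
    primes q > x contribute their share of p - 1."""
    result = 1
    for q, v in factorint(n_order(a, p)).items():  # multiplicative order of a mod p
        if q <= x:
            result *= q ** v
    for q, v in factorint(p - 1).items():
        if q > x:
            result *= q ** v
    return result

def empirical_density(x):
    """Fraction of primes 5 <= p <= x with ord_{p,x}(2) > ord_{p,x}(3)."""
    primes = list(primerange(5, x + 1))  # skip 2 and 3, which divide the bases
    hits = sum(1 for p in primes if ord_px(2, p, x) > ord_px(3, p, x))
    return hits / len(primes)

print(empirical_density(2000))
```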
Sample problem 2: Find the degree 19 polynomial
Construct a degree 19 polynomial $p(x)\in\mathbb{C}[x]$ such that $X:=\{p(x)=p(y)\}\subset\mathbb{P}^{1}\times\mathbb{P}^{1}$ has at least 3 (but not all linear) irreducible components over $\mathbb{C}$. Choose $p(x)$ to be odd, monic, have real coefficients and linear coefficient $-19$, and calculate $p(19)$.

Answer: 1876572071974094803391179

MSC classification: 14 Algebraic geometry; 20 Group theory and generalizations; 11 Number theory
Sample problem 3: Prime field continuous extensions
Let $a_{n}$ for $n\in\mathbb{Z}$ be the sequence of integers satisfying the recurrence formula
$$a_{n}=(981\times 10^{11})a_{n-1}+(549\times 10^{11})a_{n-2}-(277\times 10^{11})a_{n-3}+(706\times 10^{8})a_{n-4}$$
(see Appendix A.3 for the precise coefficients) with initial conditions $a_{i}=i$ for $0\leq i\leq 3$. Find the smallest prime $p\equiv 4\bmod 7$ for which the function $\mathbb{Z}\to\mathbb{Z}$ given by $n\mapsto a_{n}$ can be extended to a continuous function on $\mathbb{Z}_{p}$.

Answer: 9811

MSC classification: 11 Number theory
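For problems of this flavor, one can probe $p$-adic continuity numerically. The sketch below is illustrative only (the function, parameters, and sampling strategy are assumptions, not the intended solution method): if $n\mapsto a_{n}$ extends continuously to $\mathbb{Z}_{p}$, then for every $m$ there is a $k$ such that $n\equiv n'\pmod{p^{k}}$ implies $a_{n}\equiv a_{n'}\pmod{p^{m}}$. The check tests one $(m,k)$ pair; passing is evidence for continuity, while failing only shows that this particular $k$ is insufficient.

```python
def passes_continuity_check(coeffs, init, p, m=2, k=3, samples=200):
    """Test whether a_n == a_{n + p^k} (mod p^m) for many n, a sampled
    necessary-style condition for n -> a_n extending continuously to Z_p.
    coeffs[i] multiplies a_{n-1-i}; init gives a_0, ..., a_{d-1}."""
    mod = p ** m
    step = p ** k
    n_terms = samples + step + 1
    a = [x % mod for x in init]
    for n in range(len(init), n_terms):
        a.append(sum(c * a[n - 1 - i] for i, c in enumerate(coeffs)) % mod)
    return all(a[n] == a[n + step] for n in range(samples))
```

For example, the constant sequence $a_{n}=5$ (recurrence $a_{n}=a_{n-1}$) passes, while $a_{n}=2^{n}$ (recurrence $a_{n}=2a_{n-1}$) fails for $p=3$, matching the fact that $n\mapsto 2^{n}$ has no continuous extension to $\mathbb{Z}_{3}$.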
Figure 1: FrontierMath features hundreds of advanced mathematics problems that require hours for expert mathematicians to solve. We release representative samples from the benchmark. Statements and solutions for released problems are provided in Appendix A.
To address these limitations, we introduce FrontierMath, a benchmark of original, exceptionally challenging mathematical problems created in collaboration with over 60 mathematicians from leading institutions. FrontierMath addresses data contamination concerns by featuring exclusively new, previously unpublished problems. It enables scalable evaluation through automated verification that provides rapid, reproducible assessment. The benchmark spans the full spectrum of modern mathematics, from challenging competition-style problems to problems drawn directly from contemporary research, covering most branches of mathematics in the 2020 Mathematics Subject Classification (MSC2020). In particular, FrontierMath contains problems across 70% of the top-level subjects in the MSC2020 classification, excluding "00 General and overarching topics; collections", "01 History and biography", and "97 Mathematics education".
<details>
<summary>x1.png Details</summary>

### Visual Description
## Bar Chart: Problems Not Solved by Leading AI Models
### Overview
This is a vertical bar chart illustrating the percentage of problems not solved by leading AI models across various datasets. The x-axis represents different problem sets, and the y-axis represents the percentage of unsolved problems. The chart uses a teal color for the bars. An arrow on the left indicates "Less saturated" corresponds to higher percentages, and an arrow on the right indicates "More saturated" corresponds to lower percentages.
### Components/Axes
* **Title:** "Problems not solved by leading AI models" (positioned at the top-center)
* **X-axis:** Problem Sets: FrontierMath, Omni-Math, MathVista, AIME, MATH, GSM-8k, MMLU (positioned at the bottom)
* **Y-axis:** Percentage of Unsolved Problems (0% to 100%, positioned on the left)
* **Color Scheme:** Teal bars representing the percentage of unsolved problems.
* **Saturation Arrows:** "Less saturated" pointing upwards, "More saturated" pointing downwards.
### Detailed Analysis
The chart displays the following approximate data points:
* **FrontierMath:** Approximately 95% of problems are not solved. (Tallest bar)
* **Omni-Math:** Approximately 40% of problems are not solved.
* **MathVista:** Approximately 25% of problems are not solved.
* **AIME:** Approximately 20% of problems are not solved.
* **MATH:** Approximately 5% of problems are not solved. (Shortest bar)
* **GSM-8k:** Approximately 3% of problems are not solved.
* **MMLU:** Approximately 2% of problems are not solved.
The bars generally decrease in height from left to right, indicating a decreasing percentage of unsolved problems as you move across the datasets.
### Key Observations
* FrontierMath has the highest percentage of unsolved problems by a significant margin.
* MATH, GSM-8k, and MMLU have very low percentages of unsolved problems, indicating high performance on these datasets.
* There is a steep drop in unsolved problems between Omni-Math and MathVista.
* The difference between AIME and MATH is noticeable.
### Interpretation
The data indicate that leading AI models struggle significantly with FrontierMath while performing relatively well on datasets like MATH, GSM-8k, and MMLU, suggesting that FrontierMath demands reasoning skills or knowledge that current models lack. In the chart's terms, a high unsolved percentage marks a benchmark that is less "saturated" by current models, and a low percentage marks one that is nearly saturated.
</details>
Figure 2: Comparison of unsolved problems across seven mathematics benchmarks. While existing benchmarks are approaching saturation (see Appendix B.3), FrontierMath maintains a >98% unsolved rate.
The problems in FrontierMath demand deep theoretical understanding, creative insight, and specialized expertise, often requiring multiple hours of effort from expert mathematicians to solve. We provide a set of representative example problems and their solutions (see Appendix A and Figure 1). Notably, current state-of-the-art AI models are unable to solve more than 2% of the problems in FrontierMath, even with multiple attempts, highlighting a significant gap between human and AI capabilities in advanced mathematics (see Figure 2).
To understand expert perspectives on FrontierMath's difficulty and relevance, we interviewed several prominent mathematicians, including Fields Medalists Terence Tao (2006), Timothy Gowers (1998), and Richard Borcherds (1998), and IMO coach Evan Chen. They unanimously characterized the problems as exceptionally challenging, requiring deep domain expertise and significant time investment to solve. While they noted that the requirement of numerical answers differs from typical mathematical research practice, they emphasized that the problems nonetheless demand substantial expertise and time even from experienced mathematicians.
## 2 Data collection
### 2.1 The collection pipeline
We developed the FrontierMath benchmark through collaboration with over 60 mathematicians from universities across more than a dozen countries. Our contributors formed a diverse group spanning academic ranks from graduate students to faculty members. Many are distinguished participants in prestigious mathematical competitions, collectively holding 14 IMO gold medals. One contributing mathematician also holds a Fields Medal.
Their areas of expertise collectively span a vast expanse of modern mathematics, including but not limited to algebraic geometry, number theory, point set and algebraic topology, combinatorics, category theory, representation theory, partial differential equations, probability theory, differential geometry, logic, and theoretical computer science. Since many of our contributors are active researchers, the problems naturally incorporate sophisticated techniques and insights found in contemporary mathematical research.
Each mathematician created new problems following guidelines designed to ensure clarity, verifiability, and definitive answers across various difficulty levels and mathematical domains. Four requirements guided problem creation:
- Originality: While problems could build on existing mathematical ideas, they needed to do so in novel and non-obvious ways: either through clever adaptations that substantially transform the original concepts, or through innovative combinations of multiple ideas that obscure their origins. This ensures that solving them requires genuine mathematical insight rather than pattern matching against known problems.
- Automated verifiability: Problems had to possess definitive, computable answers that could be automatically verified. To facilitate this, we often structured problems to have integer solutions, which are straightforward to check programmatically. We also accepted solutions that could be represented as arbitrary SymPy objects, including symbolic expressions, matrices, and more. Using SymPy enabled a wide range of mathematical outputs, including complex symbolic solutions, to be represented and verified programmatically, allowing us to support diverse answer types while maintaining verification standards.
- Guessproofness: Problems were designed to resist guessing; solving the problem had to be necessary to find the correct answer. As can be seen from Figure 1, problems often have numerical answers that are large and non-obvious, which reduces the vulnerability to guesswork. As a rule of thumb, we require that there be less than a 1% chance of guessing the correct answer without doing most of the work that one would need to do to "correctly" find the solution. This ensures that the challenges assess genuine mathematical understanding and reasoning.
- Computational tractability: The solution of a computationally intensive problem must include scripts demonstrating how to find the answer, starting only from standard knowledge of the field. For example, if a problem requires factoring a large integer, this must be achievable with a standard algorithm. These scripts must cumulatively run in less than a minute on standard hardware, ensuring efficient and manageable evaluation times.
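The runtime requirement above can be checked with a simple timing harness. The sketch below is hypothetical (it is not part of our submission infrastructure, and the script names in the usage note are illustrative):

```python
import subprocess
import sys
import time

def cumulative_runtime(script_paths):
    """Run each solution script in sequence and return the total
    wall-clock time in seconds; the tractability requirement asks
    that this total stay under 60 seconds on standard hardware."""
    start = time.perf_counter()
    for path in script_paths:
        subprocess.run([sys.executable, path], check=True)
    return time.perf_counter() - start
```

Usage would look like `cumulative_runtime(["solve_step1.py", "solve_step2.py"]) < 60.0`, with `check=True` ensuring that a crashing script fails the check rather than silently passing.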
For each contributed problem, mathematicians provided a detailed solution and multiple forms of metadata. Each submission included both a verification script for automated answer checking (see Section 2.2) and comprehensive metadata tags. These tags included subject and technique classifications, indicating both the mathematical domains and specific methodologies required for solution. Each submission then underwent peer review by at least one mathematician with relevant domain expertise, who evaluated all components: the problem statement, solution, verification code, difficulty ratings, and classification tags (see Section 2.3). Additionally, the problems were securely produced and handled to prevent data contamination, and reviewed for originality (see Section 2.4). Finally, the submissions included structured difficulty ratings that assessed three key dimensions: the required level of background knowledge, the estimated time needed to identify key insights, and the time required to work through technical details (see Section 2.5).
### 2.2 Automated verification
FrontierMath focuses exclusively on problems with automatically verifiable solutions: primarily numerical answers or mathematical objects that can be expressed as SymPy objects (including symbolic expressions, matrices, sets, and other mathematical structures). For each problem, the evaluated model submits code that computes and saves its answer as a Python object. A script then automatically verifies this answer. If the problem has a unique integer solution, it checks for an exact match. If the problem has a unique symbolic real answer, it uses SymPy evaluation to check that the difference between the submitted answer and the actual answer simplifies to 0. In all other cases, a custom verification script is necessary to check the validity of a submitted answer (see Figure 3).
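The two default checks can be sketched as follows. This is a minimal illustration of the logic just described, not the benchmark's actual verification code:

```python
import sympy

def default_verify(submitted, expected):
    """Default checks: exact comparison for a unique integer answer,
    SymPy simplification of the difference for a symbolic answer."""
    if isinstance(expected, int):
        return submitted == expected
    diff = sympy.sympify(submitted) - sympy.sympify(expected)
    return sympy.simplify(diff) == 0

print(default_verify(367707, 367707))          # True
print(default_verify("sqrt(8)", "2*sqrt(2)"))  # True
```

The symbolic branch accepts any representation of the correct value, so a model that answers `sqrt(8)` is not penalized for failing to write it as `2*sqrt(2)`.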
<details>
<summary>x2.png Details</summary>

### Visual Description
## Diagram: LLM Submission and Verification Process
### Overview
This diagram illustrates the process of an LLM (Large Language Model) submitting code, saving the answer, and a verification script checking the correctness of the answer. It shows two separate submission/verification cycles, one resulting in a correct answer and the other in an incorrect answer.
### Components/Axes
The diagram consists of two main sections: "LLM submits code that computes and saves the answer" and "Verification script loads the results and marks them as either correct or incorrect". Each section contains a code block labeled `submitted_code.py` and `verification_script.py` respectively. Connections between the code blocks are represented by dashed arrows. Each verification cycle is marked with a green checkmark for correct and a red 'X' for incorrect.
### Detailed Analysis or Content Details
**First Submission/Verification Cycle (Top):**

```python
# submitted_code.py
import pickle

# This is the final answer
final_answer = (8, 3)

# Pickle the final result to file
with open("final_answer.p", "wb") as f:
    pickle.dump(final_answer, f)
```

```python
# verification_script.py
import pickle

def verify(answer):
    return answer[0]**2 - 7*answer[1]**2 == 1

answer = pickle.load(open("final_answer.p", "rb"))
verify(answer)
```

* **Result:** Green checkmark indicating a correct answer.

**Second Submission/Verification Cycle (Bottom):** identical code, except the submitted answer is `final_answer = (2, 2)`.

* **Result:** Red 'X' indicating an incorrect answer.
### Key Observations
The diagram highlights the flow of data from the LLM's code submission to the verification script. The verification script uses a specific mathematical formula (`answer[0]**2 - 7*answer[1]**2 == 1`) to determine the correctness of the answer. The first submission, (8, 3), satisfies this equation (8^2 - 7*3^2 = 64 - 63 = 1), while the second submission, (2, 2), does not (2^2 - 7*2^2 = 4 - 28 = -24).
### Interpretation
This diagram illustrates the evaluation loop: the LLM's code computes an answer, serializes it with `pickle` (preserving its structure, here a tuple), and saves it to a file; the verification script loads the serialized answer and applies a predefined verification function, with the outcome (correct or incorrect) indicated visually. Because the underlying equation has infinitely many solutions, the check applies the defining property of a solution rather than comparing against a single expected value.
</details>
Figure 3: Example verification script for an instance of Pell's equation: Find a tuple of integers $(x,y)$ such that $x^{2}-7y^{2}=1$. The left boxes show the code that produces and saves answers, the right box shows the verification script that evaluates them. A custom verification script is necessary here since the problem has infinitely many solutions.
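The caption's point, that $x^{2}-7y^{2}=1$ has infinitely many solutions, is easy to see concretely: all positive solutions arise from the fundamental solution $(8, 3)$ by repeated multiplication by $8+3\sqrt{7}$. A short illustrative sketch (not benchmark code):

```python
def pell_solutions(k):
    """First k positive solutions of x^2 - 7y^2 = 1, generated from the
    fundamental solution (8, 3) via (x, y) -> (8x + 21y, 3x + 8y)."""
    x, y = 1, 0  # trivial solution
    out = []
    for _ in range(k):
        x, y = 8 * x + 21 * y, 3 * x + 8 * y
        out.append((x, y))
    return out

sols = pell_solutions(4)
assert all(x * x - 7 * y * y == 1 for x, y in sols)
print(sols[:2])  # [(8, 3), (127, 48)]
```

Any exact-match check against a single expected tuple would therefore wrongly reject infinitely many valid answers, which is why a property-based verification script is used instead.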
This design choice enables rapid, objective evaluation of AI models without requiring them to output solutions in formal proof languages like Lean or Coq. The automated verification framework substantially reduces the cost and complexity of evaluation, as verification can be performed programmatically without human intervention. Furthermore, it eliminates potential human bias and inconsistency in the evaluation process, as the verification criteria are explicitly defined and uniformly applied.
While this approach enables efficient and scalable evaluation, it does impose certain limitations. Most notably, we cannot include problems that require mathematical proofs or formal reasoning steps, as these would demand human evaluation to assess correctness and clarity. In theory, we could have accepted formalized proofs in Lean or other formal languages as a solution format. However, this presents two challenges. First, current AI models are often not trained to write formalized proofs in specialized languages, and we wanted the problems to measure genuine reasoning skills rather than skill at writing formal proofs. Second, Lean's mathematical library, mathlib, doesn't fully cover the undergraduate math curriculum [mathlib], let alone the scope of research mathematics, which limits the fields such a benchmark could measure. Even without formal proofs, however, these problems span a substantial breadth of modern mathematics and include problems that eminent mathematicians have characterized as exceptionally difficult (see Section 6).
### 2.3 Validation and quality assurance
To maintain high quality standards and minimize errors in the benchmark, we established a rigorous multi-stage review process. Each submitted problem underwent blind peer review by at least one expert mathematician from our team, who evaluated the following criteria:
- Verifying the correctness of the problem statement and the provided solution.
- Checking for ambiguities or potential misunderstandings in the problem wording.
- Assessing the problem for guessproofness, ensuring that solutions cannot be easily obtained without proper mathematical reasoning.
- Confirming the appropriateness of the assigned difficulty ratings and metadata.
Verifying correctness is the most important of these tasks. For problems with unique solutions (the majority of our benchmark), verifying correctness requires checking the mathematical argumentation, computations, and, if applicable, programming scripts the writer provided to justify their answer. For problems with non-unique solutions, e.g. those which ask for a solution to some system of Diophantine equations or for a Hamiltonian path in a given graph, the reviewer only checks that the provided verification script matches the problem requirements and tests it on the provided answer.
All problems in the benchmark have undergone at least one blind peer review. Through this review process, we identify problems with incorrect answers as well as those that fail to meet our guessproofness criterion. Such problems are only included in the benchmark after authors successfully address these issues through revision.
During first reviews, our reviewers commonly identify several types of issues with submitted problems. These include answers that aren't easily verifiable, problems where guessing is easier than proving, and cases where simple brute-force methods circumvent the intended difficulty. Such issues occur predominantly with authors who are new to submitting questions to the benchmark and have not yet fully internalized the guidelines. Beyond these concerns, straightforward errors in question statements or computational mistakes in solutions leading to incorrect answers are also not uncommon.
We commissioned blind reviews from second reviewers on 30 randomly chosen questions to estimate the error rate we should expect in the benchmark, and we also randomly selected 5 questions to be removed from the dataset and serve as a public sample of problems (see Appendix A). In total, 35 of the accepted problems have received scrutiny beyond the first round of peer review.
Two out of 35 questions had an incorrect answer provided by the author, undetected in the first review. Assuming that a question being incorrect is a Bernoulli trial with probability $p$ and that the second reviewers catch all errors, using a Jeffreys prior on $p$ yields a posterior error rate of $\frac{2+0.5}{35+1}\approx 6.9\%$ for this benchmark. On one hand, the Jeffreys prior is conservative since our prior belief about $p$ is likely lower due to the careful construction of the benchmark. On the other hand, we must account for potential undetected errors that even the second review might have missed. Therefore, estimating a critical error rate of approximately $10\%$ is reasonable. This estimate aligns with error rates typical in machine learning benchmarks; for instance, [northcutt2021pervasivelabelerrorstest] estimate a $>6\%$ label error rate in the ImageNet validation set, while [gema2024we] find that over $9\%$ of questions in the MMLU benchmark contain errors based on expert review of 3,000 randomly sampled questions.
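The posterior figure is a standard Beta-Binomial update: the Jeffreys prior Beta(1/2, 1/2) combined with 2 errors in 35 doubly-reviewed problems gives posterior mean (2 + 0.5)/(35 + 1). As a check of the arithmetic:

```python
# Beta-Binomial update with a Jeffreys prior Beta(1/2, 1/2):
# 2 errors observed among 35 doubly-reviewed problems.
alpha, beta = 0.5, 0.5
errors, reviewed = 2, 35
posterior_mean = (alpha + errors) / (alpha + beta + reviewed)
print(round(posterior_mean, 3))  # 0.069
```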
The second reviews also identified less critical errors. Six out of 35 questions had missing hypotheses in their statements which technically made them not fully rigorous. While domain experts could reasonably impute this missing context, it might still pose difficulties for models attempting to solve these problems. For two of the 35 questions, reviewers proposed strategies for guessing the solution with substantially less effort or computation than was necessary for a rigorous mathematical justification of the answer, violating the guessproofness criterion. In all cases, they proposed adjustments to the problem which corrected the error while preserving the original mathematical intent of the writer.
Although the error rate remains low, the mistakes we have spotted highlight the issues of a passive review system, in which the reviewer sees the full solution document and simply has to state their approval in order for a problem to be accepted into the benchmark. Going forward, we are adopting a more active review process, and will henceforth require concrete confirmation of a problem's soundness before it is accepted. For problems which seek a solution with some easily verified characteristic, such as a tuple of integers satisfying a Diophantine equation, it is enough to test that criterion directly. If a problem seeks a symbolic real number or some efficiently computed value, then it would be strong evidence if a heuristically determined guess approximates the given solution. For more abstract problems, it may be necessary to have a reviewer solve the problem themselves, given only the key steps and ideas of the solution rather than the full write-up.
Additionally, we observed inconsistent difficulty ratings between first and second reviewers; due to the subjective nature of this task, ratings rarely matched and often showed substantial differences. See Section 2.5 for discussion on our ongoing effort to normalize these ratings. In several cases, second reviewers found no fundamental problems with the questions but suggested ways to increase their difficulty, indicating potential room to improve the difficulty of the remaining questions if reviewers were to spend more time on refining them.
### 2.4 Originality and data contamination prevention
Contributors were explicitly instructed to develop novel mathematical problems. While they could build upon existing mathematical ideas, they were required to do so in non-obvious ways: either through clever adaptations that substantially transformed the original concepts, or through novel and innovative combinations of multiple ideas. This approach ensured that solving the problems required genuine mathematical insight rather than pattern matching against known problems.
To minimize the risk of problems and solutions being disseminated online, we encouraged all submissions to be conducted through secure, encrypted channels. We employed encrypted communication platforms to coordinate with contributors and requested that any written materials stored online be encrypted, for instance by placing documents in password-protected archives shared only via secure methods. While we endeavored to adhere strictly to these protocols, we acknowledge that, in some cases, standard email clients were used when communicating about a subset of the problems outside our immediate team.
Our primary method for validating originality relied on expert review by our core team of mathematicians. Their extensive familiarity with competition and research-level problems enabled them to identify potential similarities with existing problems that automated systems might miss. The team conducted thorough manual checks against popular mathematics websites, online repositories, and academic publications. This expert review process was supplemented by verification using plagiarism detection software.
To provide further confidence in our problems' originality, we ran them through the plagiarism detection tools Quetext and Copyscape. Across the entire dataset, the verification process revealed no significant matches to existing content, with minimal flags attributed only to standard mathematical terminology or software oversensitivity. We limited our plagiarism checking to problem statements only, excluding solutions to minimize online exposure. We specifically chose Quetext and Copyscape over more widely used tools like Turnitin because they posed the lowest risk of our problems being stored in databases that might later be used to train AI models.
### 2.5 Problem difficulty
Estimates of mathematical problem difficulty are useful for FrontierMath's goal of evaluating AI mathematical capabilities, as they provide more fine-grained information about the performance of AI models. To assess problem difficulty, each problem author provided ratings along three key dimensions:
1. Background: This rating ranges from 1 to 5 and indicates the level of mathematical background required to approach the problem. A rating of 1 corresponds to high school level, 2 to early undergraduate level, 3 to late undergraduate level, 4 to graduate level, and 5 to research level.
2. Creativity: Estimated as the number of hours an expert would need to identify the key ideas for solving the problem. This measure has no upper limit.
3. Execution: Similarly estimated as the number of hours required to compute the final answer once the key ideas are found, including time writing a script if applicable. This measure also has no upper limit.
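The three dimensions above could be recorded in a schema like the following. The class and field names are hypothetical illustrations, not the benchmark's actual submission format:

```python
from dataclasses import dataclass

@dataclass
class DifficultyRating:
    """Illustrative record of the three rating dimensions."""
    background: int          # 1 = high school ... 5 = research level
    creativity_hours: float  # hours to identify the key ideas (no upper limit)
    execution_hours: float   # hours to work out details once ideas are found

    def __post_init__(self):
        if not 1 <= self.background <= 5:
            raise ValueError("background rating must be in 1..5")

r = DifficultyRating(background=4, creativity_hours=3.0, execution_hours=1.5)
print(r.background)  # 4
```

Only the background score is bounded; the two time estimates are left open-ended, matching the descriptions above.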
To verify and refine the initial difficulty assessments, each difficulty assessment underwent a peer-review process. Reviewers assessed the accuracy of the initial difficulty ratings. Any discrepancies between the authors' ratings and the reviewers' assessments were discussed, and adjustments were made as needed to ensure that the difficulty ratings accurately reflected the problem's demands in terms of background, creativity, and execution.
Accurately assessing the difficulty of mathematics problems presents significant challenges [Chen2024, Chen2024Mistake]. Problems that seem impossible may become trivial after exposure to certain techniques, and multiple solution paths of varying complexity often exist. Moreover, while we designed our problems to require substantial mathematical work rather than allow solutions through guessing or pattern matching, the possibility of models finding unexpected shortcuts could undermine such difficulty estimates.
Given these challenges, we view our difficulty ratings as providing rough guidance. More rigorous validation, such as through systematic data collection on human solution times, would be needed to make stronger claims about these difficulty assessments. In unpublished preliminary work, we found that problems rated as more difficult correlate with lower solution rates by GPT-4o, providing some support for our difficulty assessment system. However, more systematic validation would be needed to make strong claims about the reliability of these ratings.
## 3 Dataset composition
The FrontierMath benchmark covers a broad spectrum of contemporary mathematics, spanning both foundational areas and specialized research domains. This comprehensive coverage is crucial for effectively evaluating AI systems' mathematical capabilities across the landscape of modern mathematics. Working with over 60 mathematicians across different specializations, we captured most top-level MSC 2020 classification codes, reflecting the breadth of mathematics from foundational fields to emerging research areas.
| MSC Classification | Percentage | MSC Classification | Percentage |
| --- | --- | --- | --- |
| 11 Number theory | 17.8% | 57 Manifolds and cell complexes | 2.1% |
| 05 Combinatorics | 15.8% | 13 Commutative algebra | 2.1% |
| 20 Group theory and generalizations | 8.9% | 54 General topology | 1.4% |
| 60 Probability theory and stochastic processes | 5.1% | 35 Partial differential equations | 1.4% |
| 15 Linear and multilinear algebra; matrix theory | 4.8% | 53 Differential geometry | 1.4% |
| 14 Algebraic geometry | 4.8% | 42 Harmonic analysis on Euclidean spaces | 1.4% |
| 33 Special functions | 4.8% | 41 Approximations and expansions | 1.4% |
| 55 Algebraic topology | 3.1% | 52 Convex and discrete geometry | 1.4% |
| 12 Field theory and polynomials | 2.4% | 82 Statistical mechanics, structure of matter | 1.0% |
| 30 Functions of a complex variable | 2.4% | 44 Integral transforms, operational calculus | 1.0% |
| 68 Computer science | 2.4% | 17 Nonassociative rings and algebras | 1.0% |
| 18 Category theory; homological algebra | 2.4% | Other (< 3 problems each) | 9.9% |
Table 1: Percentage distribution of Mathematics Subject Classification (MSC) 2020 codes in the FrontierMath dataset, representing the proportion of each classification relative to the total MSC assignments.
Number theory and combinatorics are most prominently represented, collectively accounting for approximately 34% of all MSC 2020 tags. This prominence reflects both our contributing mathematicians' expertise and these domains' natural amenability to problems with numerical solutions that can be automatically verified. The next most represented fields are group theory and algebraic geometry (together about 14% of MSC tags), followed by linear algebra (5%), special functions (5%), and algebraic topology (approximately 3%). Problems involving probability theory and stochastic processes constitute about 5% of the MSC tags, with additional significant representation in complex analysis, category theory, and partial differential equations, each comprising between 1% and 3% of the MSC tags.
<details>
<summary>x3.png Details</summary>

### Visual Description
Network diagram of mathematical fields. Each node is a field, sized roughly by its prominence in the dataset, and edges connect related fields, forming clusters around algebra, analysis, and topology. Number theory, group theory and generalizations, and functions of a complex variable are among the largest and most connected nodes, illustrating that the represented fields are interconnected rather than isolated.
</details>
Figure 4: Mathematical subject interconnections in FrontierMath. Node sizes indicate the frequency of each subject's appearance in problems, while connections indicate when multiple mathematical subjects are combined within single problems, demonstrating the benchmark's integration of many mathematical domains.
The network visualization (Figure 4) reveals how mathematical subjects combine within individual problems. Number theory and combinatorics appear together most frequently, with 13% of problems requiring both subjects, followed by combinatorics-group theory (9%) and number theory-group theory (8%). These three fields, number theory (44% of all problems), combinatorics (39%), and group theory (22%), form the core of the benchmark, each combining with more than a dozen other mathematical domains in novel ways.
The problems in our dataset require a wide range of techniques: over 200 distinct techniques are listed as involved in their solutions. Although generating functions, recurrence relations, and special functions emerge as common techniques, each appears in less than 5% of problems, underscoring the benchmark's methodological diversity. Even the most frequently co-occurring techniques appear together in at most 3 problems, highlighting how problems typically require unique combinations of mathematical approaches.
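Tallies like those behind Figure 4 and the technique statistics can be computed directly from per-problem tag lists. A minimal sketch with made-up tags, since the dataset itself is private:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-problem subject tags; the real dataset is private.
problems = [
    {"number theory", "combinatorics"},
    {"number theory", "group theory"},
    {"combinatorics", "group theory"},
    {"number theory", "combinatorics", "group theory"},
    {"algebraic geometry"},
]

n = len(problems)
subject_counts = Counter(tag for tags in problems for tag in tags)
pair_counts = Counter(
    pair for tags in problems for pair in combinations(sorted(tags), 2)
)

# Fraction of problems tagged with each subject (node sizes in Figure 4):
for subject, count in sorted(subject_counts.items()):
    print(f"{subject}: {count / n:.0%}")
# Fraction of problems combining each pair of subjects (edges in Figure 4):
for pair, count in sorted(pair_counts.items()):
    print(f"{pair}: {count / n:.0%}")
```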
## 4 Evaluation
### 4.1 Experiment-enabled evaluation framework
To evaluate AI models on FrontierMath problems, we developed a framework that allows models to explore and verify potential solution approaches through code execution, mirroring how mathematicians might experiment with ideas when tackling challenging problems. This framework enables models to test hypotheses, receive feedback, and refine their approach based on experimental results, as illustrated in Figure 5.
The evaluation process follows an iterative cycle where the model analyzes the mathematical problem, proposes solution strategies, implements these strategies as executable Python code, receives feedback from code execution results, and refines its approach based on the experimental outcomes. For each problem, the model interacts with a Python environment where it can write code blocks that are automatically executed, with outputs and any error messages being fed back into the conversation. This allows the model to verify intermediate results, test conjectures, and catch potential errors in its reasoning.
<details>
<summary>x4.png Details</summary>

### Visual Description
Flowchart of the evaluation loop: START → prompt model with FrontierMath problem → model response → execute code from model response (Python) → decision "Did the model's code submit a final answer?". If yes, END; if no, append the results of the code blocks to the model prompt and repeat. The diagram captures the iterative refinement process: the model can run code, inspect the results, and revise its approach until it submits a final answer.
</details>
Figure 5: Experimental framework for mathematical problem solving. The model can either provide a final answer directly or experiment with code execution before reaching a solution.
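The loop in Figure 5 can be sketched as a simple harness. This is an illustrative reimplementation under stated assumptions, not the actual evaluation code: `query_model` stands in for a chat-API call, code is assumed to arrive in fenced Python blocks, and the real framework's token accounting and answer parsing are omitted.

```python
import re
import subprocess
import sys

FINAL_MARKER = "# This is the final answer"
FENCE = "`" * 3  # markdown code fence
CODE_RE = re.compile(FENCE + r"python\n(.*?)" + FENCE, re.DOTALL)

def run_block(code):
    """Execute one code block in a fresh subprocess; capture its output."""
    result = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, timeout=60
    )
    return result.stdout + result.stderr

def evaluate(problem, query_model, max_turns=10):
    """Alternate model turns and code execution until the model submits
    a final-answer block, or the turn budget runs out."""
    transcript = problem
    for _ in range(max_turns):
        response = query_model(transcript)
        transcript += "\n" + response
        for code in CODE_RE.findall(response):
            output = run_block(code)
            transcript += "\nOutput:\n" + output  # model sees its results
            if FINAL_MARKER in code:
                return output  # the final-answer block was executed
    return None  # no final answer within the budget
```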
When submitting a final answer, models must follow a standardized format by including a specific marker comment (# This is the final answer), saving the result using Python's pickle module, and ensuring the submission code is self-contained and independent of previous computations. The interaction continues until either the model submits a correctly formatted final answer or reaches the token limit, which we set to 10,000 tokens in our experiments. If a model reaches this token limit without having submitted a final answer, it receives a final prompt requesting an immediate final answer submission. If the model still fails to provide a properly formatted final answer after this prompt, the attempt is marked as incorrect.
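A submission in this format might look like the sketch below. The marker comment and the use of pickle follow the paper's description; the filename answer.pickle, the grader-side check, and the answer value are hypothetical placeholders.

```python
import pickle

# Sketch of a self-contained final-answer cell. The marker comment and
# pickle usage follow the required format; the filename and the value
# are placeholders for illustration.

# This is the final answer
answer = 85113  # recomputed here, independent of any earlier cells
with open("answer.pickle", "wb") as f:
    pickle.dump(answer, f)

# Grader side: load the submission and compare to the reference answer.
with open("answer.pickle", "rb") as f:
    submitted = pickle.load(f)
print(submitted == 85113)
```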
### 4.2 Results
#### 4.2.1 Accuracy on FrontierMath
We evaluated six leading language models on our existing subset of FrontierMath problems: o1-preview ([o1-preview]), o1-mini ([o1-mini]), GPT-4o (2024-08-06 version) ([GPT-4o]), Claude 3.5 Sonnet (2024-10-22 version) ([sonnet3-5]), Grok 2 Beta ([grokbeta]), and Google DeepMind's Gemini 1.5 Pro 002 ([gemini15pro002]). All models performed very poorly on FrontierMath, with no model achieving even a 2% success rate on the full benchmark (see Figure 6). This stands in stark contrast to other mathematics evaluations such as GSM8K ([cobbe2021training]), MATH ([hendrycks2021measuring]), AIME 2024 ([o1-learningtoreason]), or Omni-MATH ([gao2024omni]), which are closer to saturation (see Figure 2).
<details>
<summary>visuals/model_comparison_mean_accuracy.png Details</summary>

### Visual Description
Horizontal bar chart comparing the models' mean accuracy, with error bars on each bar and an x-axis running from 0% to 100%. Bars are ordered from highest to lowest mean accuracy: o1-preview, o1-mini, Gemini 1.5 Pro (002), Claude 3.5 Sonnet (2024-10-22), GPT-4o (2024-08-06), Grok 2 Beta.
</details>
Figure 6: Performance of leading language models on FrontierMath based on mean accuracy across 8 runs. All models show consistently poor performance, with even the best models solving less than 2% of problems in each run on average. When re-evaluating problems that were solved at least once by any model, o1-preview demonstrated the strongest performance across repeated trials (see Section B.2).
Based on a single evaluation of the full benchmark, we found that models solved very few problems (less than 2% success rate). Given this low success rate and the fact that we ran only one evaluation, the precise ordering of model performance should be interpreted with significant caution, as individual successes can have an outsized impact on rankings. To better understand model behavior on solved problems, we identified all problems that any model solved at least once (a total of four problems) and conducted repeated trials with five runs per model per problem (see Appendix B.2). We observed high variability across runs: only in one case did a model solve a question on all five runs (o1-preview on question 2); across these repeated trials, o1-preview demonstrated the strongest performance.
Moreover, even when a model obtained the correct answer, this does not mean that its reasoning was correct. For instance, on one of these problems, running a few simple simulations was sufficient to make an accurate guess without any deeper mathematical understanding. However, models' low overall accuracy shows that such guessing strategies do not work on the overwhelming majority of FrontierMath problems. We also ran each model five times per problem on our five public sample problems (see Appendix A) and made the transcripts publicly available; they can be downloaded at https://epochai.org/files/sample_question_transcripts.zip
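As an illustration of that guessing strategy (this is not one of the benchmark problems), a few lines of simulation can suggest a closed-form answer. Here, the classical probability that two random integers are coprime:

```python
import math
import random

random.seed(0)
N = 100_000
hits = sum(
    math.gcd(random.randrange(1, 10**9), random.randrange(1, 10**9)) == 1
    for _ in range(N)
)
estimate = hits / N
# The estimate clusters near 6/pi^2 ~ 0.6079, suggesting the exact answer
# without any deeper understanding of why zeta(2) appears.
print(estimate, 6 / math.pi**2)
```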
#### 4.2.2 Number of responses and token usage
Our evaluation framework allows models to run experiments and reflect on their results multiple times before submitting a final answer. We found significant differences in how models use this opportunity. For example, o1-preview averages just 1.29 responses per problem, whereas Grok 2 Beta averages 3.81 responses. o1-preview and Gemini 1.5 Pro 002 typically submit a final answer before seeing any experimental results, despite explicit instructions that they can take time to experiment and wait until they are confident to make a final guess. In contrast, the other models show more extended interactions.
This pattern is also reflected in token usage. Gemini 1.5 Pro 002 reaches the 10,000 token limit in only 16.8% of questions, while Claude 3.5 Sonnet, GPT-4o and Grok 2 Beta exceed this limit in more than 45% of their attempts. On average, Gemini 1.5 Pro uses the fewest tokens per question (around 6,000), while other models use between 12,000 and 17,000 tokens. These averages can exceed the 10,000 token conversation limit because models that reach the limit still get to see the results of their final experiment and complete their final answer.
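The token accounting above can be made concrete: the limit is enforced between turns, so the turn that crosses it still completes, and mean usage can exceed the nominal cap. A small sketch with hypothetical per-turn token counts:

```python
TOKEN_LIMIT = 10_000  # conversation limit used in the experiments

def tokens_used(turn_sizes):
    """Accumulate per-turn token counts; the limit is checked between
    turns, so the turn that crosses it is not truncated."""
    used = 0
    for tokens in turn_sizes:
        used += tokens
        if used >= TOKEN_LIMIT:
            break  # a final-answer prompt would follow in the real harness
    return used

# Hypothetical runs: two cross the limit mid-turn, one stops early.
runs = [
    tokens_used([4_000, 4_000, 5_000]),  # 13,000
    tokens_used([3_000, 2_000]),         # 5,000
    tokens_used([9_000, 8_000]),         # 17,000
]
print(sum(runs) / len(runs))  # mean exceeds the 10,000-token limit
```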
## 5 Related work
The evaluation of mathematical reasoning in AI systems has progressed through various benchmarks and competitions, each testing different aspects of mathematical capability. Here we review the major existing benchmarks and competitions, and discuss how FrontierMath advances this landscape.
Several established mathematics benchmarks focus on evaluating basic mathematical reasoning capabilities in AI systems. Datasets like GSM8K [cobbe2021training] and MATH [hendrycks2021measuring] provide structured collections of elementary to undergraduate-level problems. While these benchmarks provided useful measures of mathematical reasoning capabilities, state-of-the-art models now achieve near-perfect performance on many of them (see Figure 2), limiting their utility for discriminating between model capabilities.
Recent work has developed more challenging benchmarks for evaluating mathematical reasoning. The Advanced Reasoning Benchmark (ARB) [sawada2023arb] combines university-level mathematics with contest problems, spanning undergraduate to early graduate-level topics. The GHOSTS dataset [frieder2024mathematical] provides a comprehensive evaluation of language models' mathematical capabilities through 709 problems spanning graduate-level topics, with a focus on natural language interactions and human-expert evaluation. Several benchmarks have focused specifically on olympiad-level mathematics: OlympiadBench [he2024olympiadbench] compiled 8,476 problems from mathematics and physics olympiads, PutnamBench [tsoukalas2024putnambench] draws from the Putnam Competition, and Omni-MATH [gao2024omni] provides 4,428 problems sourced from international competitions (including the IMO), Art of Problem Solving forums, and verified competition archives, with detailed domain categorization and difficulty scaling. While these benchmarks present more challenging problems than earlier datasets, their reliance on historical competition materials creates significant data contamination risks, as models may have been trained on the source materials or highly similar problems.
The Putnam-AXIOM benchmark [gulati2024putnamaxiom] partially circumvents these contamination risks by generating problem variants: the authors programmatically generate new problems of equivalent difficulty by renaming variables and changing constants in existing Putnam problems. This results in significant performance drops; for example, o1-preview's accuracy falls from 50% to 34% on generated rather than unmodified problems. However, this technique adds only a limited degree of novelty to generated problems, and does not help much in designing problems of much higher difficulty.
The AI Mathematical Olympiad (AIMO) [xtx2024aimo], an initiative led by XTX Markets, aims to develop AI systems capable of achieving International Mathematical Olympiad gold medal performance. While AIMO creates novel national olympiad-level problems specifically designed to test advanced mathematical reasoning while avoiding data contamination risks, its requirement that runtime models be open-weight means that frontier AI models cannot be officially evaluated on the benchmark.
A parallel line of work explores formal mathematics benchmarks. The MiniF2F benchmark [zheng2021minif2f] provides 488 formally verified problems from mathematical olympiads and undergraduate courses, manually formalized across multiple theorem proving systems including Metamath, Lean, and Isabelle. While MiniF2F enables rigorous evaluation of neural theorem provers through machine-verified proofs, it faces similar data contamination concerns and focuses on competition-style problems rather than research mathematics.
FrontierMath addresses these fundamental limitations in three key ways. First, it eliminates data contamination concerns by featuring exclusively new, previously unpublished problems, ensuring models cannot leverage training on highly similar materials. Second, it raises the difficulty level beyond existing benchmarks by incorporating research-level mathematics across most branches of modern mathematics, moving beyond both elementary problems and competition-style mathematics. Third, its automated verification system enables efficient, reproducible evaluation of any model, including closed-source frontier AI systems, while maintaining high difficulty standards. This positions FrontierMath as a unique tool for evaluating progress toward research-level mathematical capabilities.
## 6 Interviews with mathematicians
We conducted interviews with four prominent mathematicians to gather expert perspectives on FrontierMath's difficulty, significance, and prospects: Terence Tao (2006 Fields Medalist), Timothy Gowers (1998 Fields Medalist), Richard Borcherds (1998 Fields Medalist), and Evan Chen, a renowned mathematics educator and IMO coach. Note that Evan Chen is also a co-author of this paper, and Terence Tao contributed several problems to the benchmark.
Assessment of FrontierMath difficulty
All four mathematicians characterized the research problems in the FrontierMath benchmark as exceptionally challenging, noting that the most difficult questions require deep domain expertise and significant time investment. For example, referring to a selection of several questions from the dataset, Tao remarked, "These are extremely challenging. I think that in the near term basically the only way to solve them, short of having a real domain expert in the area, is by a combination of a semi-expert like a graduate student in a related field, maybe paired with some combination of a modern AI and lots of other algebra packages…" However, some mathematicians pointed out that the numerical format of the questions feels somewhat contrived. Borcherds, in particular, mentioned that the benchmark problems "aren't quite the same as coming up with original proofs."
Expected timeline for FrontierMath progress
The mathematicians expressed significant uncertainty about the timeline for AI progress on FrontierMath-level problems, while generally agreeing these problems were well beyond current AI capabilities. Tao anticipated that the benchmark would "resist AIs for several years at least," noting that the problems require substantial domain expertise and that we currently lack sufficient relevant training data.
The interviewees anticipated that human-AI collaboration would precede full automation. Chen and Tao both suggested that human experts working with AI systems could potentially tackle FrontierMath problems within around three years, much sooner than fully autonomous AI solutions. As Tao explained, on advanced graduate-level questions, guiding current AI systems to correct solutions takes "about five times as much effort" as solving them directly, but he expected this ratio to fall below 1 for certain problems within a few years, given sufficient tooling and capability improvements.
Key barriers to FrontierMath progress
A major challenge the mathematicians identified was the extremely limited training data available for FrontierMath's specialized research problems. As Tao noted, for many FrontierMath problems the relevant training data is "almost nonexistent… you're talking like a dozen papers with relevant things." Gowers emphasized that solving these problems requires familiarity with "the tricks of the trade of some particular branch of maths", domain knowledge that is difficult to acquire without substantial training data. The mathematicians discussed potential approaches to address data limitations, including synthetic data generation and formal verification methods to validate solutions.
Potential uses of FrontierMath-capable systems
The mathematicians interviewed identified several potential contributions from AI systems capable of solving FrontierMath-level problems. As Gowers noted, such systems could serve as powerful assistants for "slightly boring bits of doing research where you, for example, make some conjecture that would be useful, but you're not quite sure if it's true… it could be a very, very nice time saving device." Tao similarly emphasized their value for routine but technically demanding calculations, suggesting that a system performing at FrontierMath's level could help mathematicians tackle problems that currently "take weeks to figure out." However, several interviewees cautioned that the practical impact would depend heavily on computational costs. As Tao observed regarding current systems like AlphaProof, "if your amazing tool takes three days of compute off of all of Google to solve each problem… then that's less of a useful tool." The mathematicians generally agreed that AI systems at FrontierMath's level would be most valuable as supplements to human mathematicians rather than replacements, helping to verify calculations, test conjectures, and handle routine technical work while leaving broader research direction and insight generation to humans.
## 7 Discussion
In this paper, we introduced FrontierMath, a new benchmark composed of a challenging set of mathematical problems spanning most branches of modern mathematics. FrontierMath emphasizes research-level problems whose solutions require multiple hours of concentrated effort by expert mathematicians. We found that leading AI models solve fewer than 2% of the problems in the benchmark. We further validated the difficulty of the problems qualitatively by sharing a sample of ten problems with four expert mathematicians, including three Fields Medal winners, who uniformly assessed the problems as exceptionally difficult.
FrontierMath addresses two key limitations of previous math benchmarks such as the MATH dataset [hendrycks2021measuring] and GSM8K [cobbe2021training]. First, it focuses on exceptionally challenging problems that require deep reasoning and creativity, including research-level mathematics, preventing saturation with current leading AI systems. Second, by using exclusively novel, unreleased problems, it helps prevent data contamination, reducing the risk that models succeed through pattern matching against training data.
Despite these strengths, FrontierMath has several important limitations. First, the practical focus on automatically verifiable and numerical answers excludes proof-writing and open-ended exploration, which are significant parts of modern math research. Second, while our problems are substantially more challenging than existing benchmarks, requiring hours of focused work from expert mathematicians, they still fall short of typical mathematical research, which often spans weeks, months or even years of sustained investigation. By limiting problems to hours instead of months, we make the benchmark practical but miss testing crucial long-term research skills.
Another limitation is that current AI models cannot solve even a small fraction of the problems in our benchmark. While this demonstrates the high difficulty level of our problems, it temporarily limits FrontierMath's usefulness in evaluating relative performance of models. However, we expect this limitation to resolve as AI systems improve.
Evaluating AI systems' mathematical capabilities will provide valuable insights into their potential to assist with complex mathematical research. Beyond mathematics, the precise chains of reasoning and creative synthesis required in these problems serve as an important proxy for broader scientific thinking. Through its high difficulty standards and rigorous verification framework, FrontierMath offers a systematic way to measure progress in advanced reasoning capabilities as AI systems evolve.
## 8 Future work
We will further develop the FrontierMath benchmark by introducing new, rigorously vetted problems. These new problems will follow our established data collection and review process, with enhanced quality assurance procedures to reduce answer errors and improve the accuracy of difficulty ratings. We will also release another set of representative sample problems to showcase the benchmark's depth and variety.
We will further experiment with different evaluation methodologies to better assess the mathematical proficiency of today's leading models. For example, we will test the effects of increasing the token limit, allowing models to reason for longer and run more experiments per problem. We also plan to conduct multiple runs for each model-problem pair, enabling us to report statistics and confidence intervals across attempts. These evaluations will help us better understand how different evaluation parameters affect model performance on advanced mathematical reasoning tasks.
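As an illustration of the kind of per-problem statistics that repeated runs enable, the sketch below computes a 95% Wilson score interval for a solve rate aggregated over attempts. This is our own illustrative sketch, not Epoch AI's actual evaluation tooling, and the counts are hypothetical.

```python
import math

def wilson_interval(successes: int, attempts: int, z: float = 1.96):
    """95% Wilson score interval for a solve rate estimated from repeated runs."""
    if attempts == 0:
        raise ValueError("need at least one attempt")
    phat = successes / attempts
    denom = 1 + z ** 2 / attempts
    centre = (phat + z ** 2 / (2 * attempts)) / denom
    half = (z / denom) * math.sqrt(
        phat * (1 - phat) / attempts + z ** 2 / (4 * attempts ** 2)
    )
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical example: a model solves 4 of 200 problems in a batch of runs.
lo, hi = wilson_interval(4, 200)
```

The Wilson interval behaves sensibly at very low solve rates (the regime current models are in), unlike the naive normal approximation.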
Through FrontierMath, we are setting a new standard of difficulty for evaluating complex reasoning capabilities in leading AI systems. At Epoch AI, we are committed to maintaining FrontierMath as a public resource and regularly conducting comprehensive evaluations of leading models. We will continue working with mathematicians to produce new problems, perform quality assurance, validate difficulty scores, and provide transparent and authoritative assessments of AI systems' mathematical reasoning abilities.
## Appendix A Sample problems and solutions
This section presents five problems from the FrontierMath benchmark, with one chosen at random from each quintile of difficulty. The problems are arranged in decreasing order of difficulty, from research-level number theory to undergraduate algebraic geometry. Background levels indicate the prerequisite knowledge required, while creativity and execution ratings reflect hours of innovative thinking and technical rigor needed respectively.
| Problem | Author | Background | Creativity | Execution | MSC | Key Techniques |
| --- | --- | --- | --- | --- | --- | --- |
| 1 (A.1) | O. JĂ€rviniemi | Research | 4 | 15 | 11 | Frobenius elts., Artin symbols |
| 2 (A.2) | A. Kite | Research | 3 | 4 | 14, 20, 11 | Monodromy, branch loci |
| 3 (A.3) | D. Chicharro | Graduate | 3 | 3 | 11 | p-adic analysis, recurrences |
| 4 (A.4) | P. Enugandla | Graduate | 2 | 3 | 20 | Coxeter groups, characters |
| 5 (A.5) | A. Gunning | HS/Undergr. | 2 | 2 | 14, 11 | Hasse-Weil Bound |
Figure 7: MSC2020 codes: 11 (Number Theory), 14 (Algebraic Geometry), 20 (Group Theory and Generalizations) We thank Piotr Śniady for pointing out a typo in the problem statement of Sample 4, and Will McLafferty, Timothy Smits, and Frederick Vu for identifying errors in the originally given solution for Sample 5 in an earlier draft of the paper.
### A.1 Sample problem 1 â high difficulty
#### A.1.1 Problem
Definitions. For a positive integer $n$ , let $v_{p}(n)$ denote the largest integer $v$ such that $p^{v}\mid n$ .
For $p$ a prime and $a\not\equiv 0\pmod{p}$ , we let $\textup{ord}_{p}(a)$ denote the smallest positive integer $o$ such that $a^{o}\equiv 1\pmod{p}$ . For $x>0$ , we let
$$
\textup{ord}_{p,x}(a)=\prod_{\begin{subarray}{c}q\leq x\\
q\text{ prime}\end{subarray}}q^{v_{q}(\textup{ord}_{p}(a))}\prod_{\begin{subarray}{c}q>x\\
q\text{ prime}\end{subarray}}q^{v_{q}(p-1)}.
$$
Problem. Let $S_{x}$ denote the set of primes $p$ for which
$$
\textup{ord}_{p,x}(2)>\textup{ord}_{p,x}(3), \tag{2}
$$
and let $d_{x}$ denote the density
$$
d_{x}=\frac{|S_{x}|}{|\{p\leq x:p\text{ is prime}\}|}
$$
of $S_{x}$ in the primes. Let
$$
d_{\infty}=\lim_{x\to\infty}d_{x}.
$$
Compute $\lfloor 10^{6}d_{\infty}\rfloor$ . (We have lowered the precision of this and several other problems, which were written before the one-minute computation rule was finalized.)
Answer:
367707
#### A.1.2 Solution
We will break this into multiple sections.
Background
First of all, the presence of the $x$ -parameter is a technicality; informally, we are just asking for the density of primes $p$ such that
$$
\textup{ord}_{p}(2)>\textup{ord}_{p}(3). \tag{2}
$$
The motivation of this problem comes from Artin's primitive root conjecture, which asks: for which integers $a$ are there infinitely many primes $p$ such that $a$ is a primitive root modulo $p$ ? This is equivalent to requiring $\textup{ord}_{p}(a)=p-1$ , i.e. that the index $(p-1)/\textup{ord}_{p}(a)$ equals $1$ .
Artin's conjecture is open. For our purposes, the relevant fact is that the conjecture has been solved on the assumption of a generalization of the Riemann hypothesis (for zeta functions of number fields). This was done by C. Hooley in 1967.
The key idea is that $a$ is a primitive root mod $p$ if and only if there is no $n>1$ with $n\mid p-1$ such that $a$ is an $n$ th power modulo $p$ . One can try to compute the number of such primes $p$ not exceeding $x$ with an inclusion-exclusion argument. The Chebotarev density theorem gives us asymptotics for
$$
\displaystyle|\{p\leq x:p\equiv 1\pmod{n},a\text{ is a perfect }n\text{th power modulo }p\}| \tag{1}
$$
for any fixed $n.$
Bounding the error terms is what we need the generalization of the Riemann hypothesis for (cf. how the standard RH bounds the error term in the prime number theorem); we don't know how to do it without.
For concreteness, if $a=2$ , then $a$ is a primitive root modulo $p$ for roughly $37\%$ of primes. See https://mathworld.wolfram.com/ArtinsConstant.html. In general, the density is strictly positive as long as $a$ is neither a perfect square nor $-1$ (in which cases there are no such primes $p>3$ ). The density is not quite $37\%$ in all cases; see below for more on this.
This argument is very general: it can be adapted to look at primes $p$ such that $\textup{ord}_{p}(2)=\textup{ord}_{p}(3)=p-1$ , or $\textup{ord}_{p}(2)=(p-1)/a,\textup{ord}_{p}(3)=(p-1)/b$ for fixed $a,b\in\mathbb{Z}_{+}$ , or $\textup{ord}_{p}(2)=\textup{ord}_{p}(3)$ , or, you guessed it, $\textup{ord}_{p}(2)>\textup{ord}_{p}(3)$ . The "bounding the error terms" part is no more difficult in these cases, and no non-trivial complications arise. But we of course still can't do it without GRH.
We replaced $\textup{ord}_{p}(a)$ with $\textup{ord}_{p,x}(a)$ purely to eliminate the error terms. As a result, when one does the inclusion-exclusion argument, only a finite number of terms is needed, and thus we only need asymptotic formulas for (1), rather than any sharp bounds for the error term.
So the question then is about computing the main terms. To do so, one needs to be able to compute asymptotics for (1). In the case $a=2$ , it turns out that (1) is asymptotically a fraction $1/(n\phi(n))$ of the primes for any square-free $n$ . In some cases, like for $a=4$ and $a=5$ , this is not the case: for $a=4$ , take $n=2$ ; for $a=5$ , this fails for $n=10$ , since $p\equiv 1\pmod{5}$ implies that $5$ is a square modulo $p$ , which is related to the fact that $\sqrt{5}\in\mathbb{Q}(\zeta_{5})$ . But it turns out that there is always only a small, finite number of revisions one needs to make to the argument, allowing for the computation of the main terms.
There are a few papers considering generalizations of Artin's conjecture. It is known that the density of primes $p$ for which $\textup{ord}_{p}(2)>\textup{ord}_{p}(3)$ is positive (as a special case of a more general result), again under GRH, but the task of computing that density has not been carried out in the literature, justifying the originality of this problem. There is a fair amount of legwork and understanding needed to do this.
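Before turning to the exact computation, the informal version of the question is easy to probe empirically. The sketch below (our illustration, not part of the official solution) compares $\textup{ord}_{p}(2)$ and $\textup{ord}_{p}(3)$ over primes up to $10^{5}$ ; the empirical fraction lands reasonably near $d_{\infty}\approx 0.3677$ .

```python
def primes_up_to(n: int):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def mult_order(a: int, p: int) -> int:
    """Multiplicative order of a modulo a prime p (a not divisible by p)."""
    # The order divides p - 1: factor p - 1, then strip each prime factor
    # from the candidate order as long as a**(order/q) is still 1 mod p.
    order, m, f, factors = p - 1, p - 1, 2, set()
    while f * f <= m:
        while m % f == 0:
            factors.add(f)
            m //= f
        f += 1
    if m > 1:
        factors.add(m)
    for q in factors:
        while order % q == 0 and pow(a, order // q, p) == 1:
            order //= q
    return order

# Empirical density of primes p with ord_p(2) > ord_p(3).
ps = [p for p in primes_up_to(10 ** 5) if p > 3]
frac = sum(mult_order(2, p) > mult_order(3, p) for p in ps) / len(ps)
```

With roughly 9,600 primes the sampling error is well under one percentage point, so the estimate already sits close to the answer $0.367707$ .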
Solution outline
Here are the key steps for verifying the correctness of the answer:
1. We have explicit (but cumbersome) formulas for the density $d(a,b)=d(a,b,x)$ of primes $p$ such that
$$
\displaystyle\textup{ord}_{p,x}(2)=(p-1)/a,\textup{ord}_{p,x}(3)=(p-1)/b \tag{2}
$$
(a) These formulas are "quasi-multiplicative": they can be written in the form
$$
d(a,b)=f(v_{2}(a),v_{3}(a),v_{2}(b),v_{3}(b))\prod_{5\leq q\leq x}g_{q}(v_{q}(a),v_{q}(b)).
$$
(b) $f$ and $g_{q}$ are explicit functions that can be evaluated in $O(1)$ time.
2. For any fixed $a$ , the sum
$$
\sum_{a<b}d(a,b,x)
$$
can be evaluated in $O(a)$ time.
- Idea: We can write $\sum_{a<b}d(a,b,x)=\sum_{b=1}^{\infty}d(a,b,x)-\sum_{b=1}^{a}d(a,b,x)$ . The first sum is just the density of $p:\textup{ord}_{p,x}(2)=(p-1)/a$ , which can be evaluated similarly to point 1 above. The second sum can be evaluated with brute force.
3. We have, for any $T\geq 100$ , roughly (this omits a lower-order term)
$$
\sum_{T<a<b}d(a,b,x)<114T^{-2\cdot 46/47},
$$
(think of the exponent as $-2+\epsilon$ ), and thus we have
$$
|\sum_{a<b}d(a,b,x)-\sum_{a<b,a\leq T}d(a,b,x)|<114T^{-2\cdot 46/47}.
$$
4. We have, for any $x_{\infty}>x\geq 10^{8}$ ,
$$
\sum_{a<b,a\leq T}|d(a,b,x_{\infty})-d(a,b,x)|\leq\frac{1}{9x}.
$$
Then, taking $T=10^{5}$ and $x=10^{8}$ , we get an answer that has error at most $2.5\cdot 10^{-8}$ . This then gives $7$ correct decimal places.
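The error budget here is a one-line arithmetic check. The sketch below (our illustration) combines the two tail bounds quoted in the outline, the $114T^{-2\cdot 46/47}$ truncation bound and the $\frac{1}{9x}$ finite-$x$ bound, at $T=10^{5}$ and $x=10^{8}$ .

```python
# Combine the two tail bounds from the solution outline at T = 10**5, x = 10**8:
# truncation error 114 * T**(-2*46/47) plus the finite-x error 1/(9*x).
T, x = 10 ** 5, 10 ** 8
truncation = 114 * T ** (-2 * 46 / 47)
finite_x = 1 / (9 * x)
total = truncation + finite_x
```

The total indeed comes out below $2.5\cdot 10^{-8}$ , enough for the required six (in fact seven) decimal places of $d_{\infty}$ .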
Step 1
In this part, we fix $a,b\in\mathbb{Z}_{+}$ , and weâll compute the density of primes $p$ satisfying (2).
For (2) to hold, we need $2$ to be a perfect $a$ th power (and $p\equiv 1\pmod{a}$ ), but "no more": $2$ should not be an $aq$ th power for any prime $q$ (such that $p\equiv 1\pmod{aq}$ ). Similarly for $b$ . It's convenient to frame these propositions in terms of algebraic number theory.
Let $K=\mathbb{Q}(\zeta_{\textup{lcm}(a,b)},2^{1/a},3^{1/b})$ , and for each prime $q$ let $K_{1,q}=\mathbb{Q}(\zeta_{aq},2^{1/aq})$ and $K_{2,q}=\mathbb{Q}(\zeta_{bq},3^{1/bq})$ . We want to compute the density of primes $p$ such that $p$ splits in $K$ , but doesn't split in $K_{i,q}$ for any $i\in\{1,2\},2\leq q\leq x$ .
There is an "obvious guess" for what that density should be, by naively assuming that different values of $q$ are independent (e.g. $2$ being a fifth power shouldn't provide evidence on whether $2$ is a seventh power), and that there are no "spurious correlations" (e.g. $2$ being a fifth power shouldn't provide evidence on whether $3$ is a fifth power or not). This intuition is almost correct: it makes a small, finite number of mistakes, namely at the primes $q=2$ and $q=3$ . There are two reasons for this:
- $\zeta_{3}\in\mathbb{Q}(\zeta_{4},\sqrt{3})$ , since $\zeta_{3}=(-1+\sqrt{-3})/2$ .
- $\sqrt{2}\in\mathbb{Q}(\zeta_{8})$ , since we have $(1+i)^{2}=2i$ , and thus $1+i=\zeta_{8}\cdot\sqrt{2}$ .
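Both exceptional relations are easy to spot-check with floating-point complex arithmetic; a quick check of our own:

```python
import cmath
import math

# Numerically check the two exceptional relations among roots of unity and radicals.
zeta3 = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
zeta8 = cmath.exp(2j * cmath.pi / 8)  # primitive 8th root of unity

# zeta_3 = (-1 + sqrt(-3)) / 2, so zeta_3 lies in Q(zeta_4, sqrt(3)) = Q(i, sqrt(3)).
assert abs(zeta3 - (-1 + 1j * math.sqrt(3)) / 2) < 1e-12

# (1 + i)^2 = 2i, hence sqrt(2) = (1 + i) / zeta_8 lies in Q(zeta_8).
assert abs((1 + 1j) ** 2 - 2j) < 1e-12
assert abs((1 + 1j) / zeta8 - math.sqrt(2)) < 1e-12
```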
Crucially, the only relations between $\zeta_{N},2^{1/N}$ and $3^{1/N}$ are these two. The following two well-known lemmas formalize this:
Lemma 1. Denote $K_{n}=\mathbb{Q}(\zeta_{n},2^{1/n},3^{1/n})$ . Let $N$ be an arbitrary positive integer. Picture $N$ as large, e.g. $N=100$ . Let $q_{1},q_{2},\ldots,q_{k}$ be pairwise distinct primes greater than three. The fields
$$
K_{6^{N}},K_{q_{1}^{N}},\ldots,K_{q_{k}^{N}}
$$
are linearly disjoint. (This is equivalent to the Galois group of the compositum, in this case $K_{(6q_{1}\cdots q_{k})^{N}}$ , factorizing as the product of the Galois groups of the individual fields: $\textup{Gal}(K_{(6q_{1}\cdots q_{k})^{N}}/\mathbb{Q})\cong\textup{Gal}(K_{6^{N}}/\mathbb{Q})\times\textup{Gal}(K_{q_{1}^{N}}/\mathbb{Q})\times\cdots\times\textup{Gal}(K_{q_{k}^{N}}/\mathbb{Q})$ . Another way of saying this is that the degree of the compositum is the product of the degrees of the individual fields; for the non-expert, it is just a precise way of saying "these things are independent of each other".) Furthermore, the degrees $[K_{q_{i}^{N}}:\mathbb{Q}]$ are equal to $\phi(q_{i}^{N})q_{i}^{2N}$ ("the maximum possible").
Proof. We apply several results of [olli]. Recall that $\mathbb{Q}(\zeta_{q_{1}^{N}}),\ldots,\mathbb{Q}(\zeta_{q_{k}^{N}})$ are linearly disjoint. We can append $K_{6^{N}}$ to the set while maintaining disjointness, since the largest cyclotomic subfield of $K_{6^{N}}$ lies in $\mathbb{Q}(\zeta_{2\cdot 6^{N}},\sqrt{2},\sqrt{3})\subset\mathbb{Q}(\zeta_{2\cdot 6^{N}},\zeta_{8},\zeta_{12})$ (see Proposition 3.14 of the paper). Then, throwing in elements of the form $2^{1/q_{i}^{N}}$ and $3^{1/q_{i}^{N}}$ maintains linear disjointness, because the degrees of the resulting field extensions are powers of $q_{i}$ , and those are coprime. For the claim about maximality of degrees, apply Proposition 3.9 with $n=q_{i}^{N},K=\mathbb{Q}(\zeta_{n}),k=n^{2}$ , with $a_{i}$ the numbers $2^{q_{i}^{x}}3^{q_{i}^{y}}$ for $0\leq x,y<N$ ; the quotients of such numbers are not $n$ th powers in $K$ by Corollary 3.6. QED.
Of course, the claim about linear disjointness and the degrees being the maximum possible now holds for subfields of $K_{6^{N}},K_{q_{1}^{N}},\ldots,K_{q_{k}^{N}}$ , too; we'll be using this fact throughout.
At the primes $q=2,q=3$ we lose a factor of 4, as promised earlier:
Lemma 2. For $N\geq 3$ we have
$$
[K_{6^{N}}:\mathbb{Q}]=\frac{\phi(6^{N})6^{2N}}{4}.
$$
Proof. Apply Proposition 3.9 with $n=6^{N}$ , $K=\mathbb{Q}(\zeta_{n}),k=n^{2}/4$ , with $a_{i}$ the numbers $2^{2^{x}3^{y}}3^{2^{z}3^{w}}$ , where $0\leq x,y,z,w<N$ and $x,z<N/2$ . The quotients of such numbers are not $n$ th powers, again by Corollary 3.6. QED
With these tools, we'll now attack the problem of computing $d(a,b)$ . Assume $x$ is large enough that $q\mid ab\implies q\leq x$ (otherwise just replace $a\rightarrow\prod_{q\leq x}q^{v_{q}(a)}$ ).
By the Chebotarev density theorem, the density of primes $p$ splitting in a (Galois) field $K$ is $1/[K:\mathbb{Q}]$ . (Reminder for the non-expert: splitting of primes in a number field is a somewhat technical concept, but for our purposes you only need to know that $p$ splitting in $\mathbb{Q}(\zeta_{n})$ is equivalent to $p\equiv 1\pmod{n}$ , and $p$ splitting in a field like $\mathbb{Q}(\zeta_{n},2^{1/a},3^{1/b})$ (where we always assume $\textup{lcm}(a,b)\mid n$ ) is equivalent to $p\equiv 1\pmod{n}$ together with $2$ being an $a$ th power and $3$ being a $b$ th power mod $p$ . So "splitting" is exactly the notion we need.) Thus, by inclusion-exclusion,
| | $\displaystyle d(a,b,x)=\sum_{n_{1},n_{2}:\,q\mid n_{i}\implies q\leq x}\frac{\mu(n_{1})\mu(n_{2})}{[\mathbb{Q}(\zeta_{\textup{lcm}(an_{1},bn_{2})},2^{1/an_{1}},3^{1/bn_{2}}):\mathbb{Q}]}.$ | |
| --- | --- | --- |
By the first lemma, this sum is "quasi-multiplicative" in the sense that prime powers $q^{n}$ with $q>3$ can be factored out. Explicitly, if we denote $a^{\prime}=2^{v_{2}(a)}3^{v_{3}(a)}$ and define $b^{\prime}$ similarly, we have
| | $\displaystyle d(a,b,x)=\sum_{n_{1},n_{2}\in\{1,2,3,6\}}\frac{\mu(n_{1})\mu(n_{2})}{[\mathbb{Q}(\zeta_{\textup{lcm}(a^{\prime}n_{1},b^{\prime}n_{2})},2^{1/a^{\prime}n_{1}},3^{1/b^{\prime}n_{2}}):\mathbb{Q}]}\times\prod_{\begin{subarray}{c}3<q\leq x\\ q\text{ prime}\end{subarray}}\sum_{n_{1},n_{2}\in\{1,q\}}\frac{\mu(n_{1})\mu(n_{2})}{[\mathbb{Q}(\zeta_{\textup{lcm}(q^{v_{q}(a)}n_{1},q^{v_{q}(b)}n_{2})},2^{1/q^{v_{q}(a)}n_{1}},3^{1/q^{v_{q}(b)}n_{2}}):\mathbb{Q}]}.$ | |
| --- | --- | --- |
The first sum over $\{1,2,3,6\}^{2}$ depends only on $v_{2}(a),v_{2}(b),v_{3}(a),v_{3}(b)$ , and the sums over $\{1,q\}^{2}$ only depend on $v_{q}(a),v_{q}(b)$ . This proves 1(a).
For claim 1(b), we first note that for $q\geq 5$ , the fields in the sum have "the maximum degree" (remember the remark about the conclusion of Lemma 1 applying to subfields, too): that is, we have
$$
[\mathbb{Q}(\zeta_{\textup{lcm}(q^{v_{q}(a)}n_{1},q^{v_{q}(b)}n_{2})},2^{1/q^{v_{q}(a)}n_{1}},3^{1/q^{v_{q}(b)}n_{2}}):\mathbb{Q}]=\phi(\textup{lcm}(q^{v_{q}(a)}n_{1},q^{v_{q}(b)}n_{2}))q^{v_{q}(a)+v_{q}(b)}n_{1}n_{2}.
$$
So these sums can be computed. (Writing the formulas down explicitly isn't necessary, since we can now write a program that computes the values.)
For the sum over $n_{1},n_{2}\in\{1,2,3,6\}$ , we need to be a bit more careful about the degrees to account for the two failures-in-maximality mentioned earlier. Denote the degree of
$$
\mathbb{Q}(\zeta_{\textup{lcm}(a^{\prime}n_{1},b^{\prime}n_{2})},2^{1/a^{\prime}n_{1}},3^{1/b^{\prime}n_{2}})
$$
by $D(a^{\prime}n_{1},b^{\prime}n_{2})$ . By Lemma 2 (and by knowing where the factor of $4$ there comes from), we can compute $D(x,y)$ . Indeed, we have
- $D(x,y)=\phi(\textup{lcm}(x,y))xy/4$ , if we have ( $8\mid\textup{lcm}(x,y)$ and $2\mid x$ ) and ( $12\mid\textup{lcm}(x,y)$ and $2\mid y$ ).
- $D(x,y)=\phi(\textup{lcm}(x,y))xy/2$ , if we have either ( $8\mid\textup{lcm}(x,y)$ and $2\mid x$ ) or ( $12\mid\textup{lcm}(x,y)$ and $2\mid y$ ), but not both
- $D(x,y)=\phi(\textup{lcm}(x,y))xy$ , if neither ( $8\mid\textup{lcm}(x,y)$ and $2\mid x$ ) nor ( $12\mid\textup{lcm}(x,y)$ and $2\mid y$ ) hold.
That is, we have $D(x,y)=\phi(\textup{lcm}(x,y))xy\cdot C(\gcd(x,24),\gcd(y,24))$ , where $C:\{1,2,\ldots,24\}^{2}\to\{1,1/2,1/4\}$ is explicitly given by the list above. Now, our sum can be written as
| | $\displaystyle\sum_{\begin{subarray}{c}n_{1},n_{2}\in\{1,2,3,6\}\end{subarray}}\frac{\mu(n_{1})\mu(n_{2})}{\phi(\textup{lcm}(a^{\prime}n_{1},b^{\prime}n_{2}))a^{\prime}n_{1}b^{\prime}n_{2}C(\gcd(a^{\prime}n_{1},24),\gcd(b^{\prime}n_{2},24))},$ | |
| --- | --- | --- |
which can be computed in $O(1)$ time.
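For illustration (our own sketch, mirroring the case analysis above), the correction factor $C$ and the degree $D(x,y)$ can be coded directly and checked against Lemma 2, which is tight at $x=y=6^{N}$ :

```python
from math import gcd

def phi(n: int) -> int:
    """Euler's totient via trial-division factorization."""
    result, d, m = n, 2, n
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

def lcm(x: int, y: int) -> int:
    return x * y // gcd(x, y)

def D(x: int, y: int) -> int:
    """Degree of Q(zeta_lcm(x,y), 2^(1/x), 3^(1/y)) per the case analysis above."""
    L = lcm(x, y)
    fail2 = L % 8 == 0 and x % 2 == 0   # sqrt(2) already lies in Q(zeta_8)
    fail3 = L % 12 == 0 and y % 2 == 0  # zeta_3 already lies in Q(zeta_4, sqrt(3))
    return phi(L) * x * y // (4 if fail2 and fail3 else 2 if fail2 or fail3 else 1)

# Lemma 2 with N = 3: [K_{6^3} : Q] = phi(216) * 216**2 / 4 = 839808.
assert D(216, 216) == phi(216) * 216 ** 2 // 4 == 839808
# No degeneracy away from 2 and 3: x = y = 5 gives the full phi(5) * 25 = 100.
assert D(5, 5) == 100
```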
Step 2
Now, we fix $a$ , and aim to evaluate the sum
$$
\sum_{a<b}d(a,b,x)=\sum_{b=1}^{\infty}d(a,b,x)-\sum_{b=1}^{a}d(a,b,x).
$$
The second sum can be evaluated with brute force, and is thus not of interest to us here. Consider then the first sum. We can just forget about $b$ altogether here: because we are summing over all of $b$ , we are imposing no constraint on $\textup{ord}_{p,x}(3)$ ; that is, we are only demanding $\textup{ord}_{p,x}(2)=(p-1)/a$ . Technical point: we're assuming here that $\lim_{N\to\infty}\sum_{a,b\leq N}d(a,b,x)=1$ ; see Step 3 for more on this.
Now, the density $d(a,x)$ of $p$ such that $\textup{ord}_{p,x}(2)=(p-1)/a$ can be computed similarly to the computations in Part 1. But it's much simpler: note that $[\mathbb{Q}(\zeta_{n},2^{1/n}):\mathbb{Q}]$ is always at least $n\phi(n)/2$ ; we lose only a factor of $2$ , not a factor of $4$ like in Lemma 2, since we only have the failure at $8\mid n$ coming from $\sqrt{2}\in\mathbb{Q}(\zeta_{8})$ . And so our formula is multiplicative:
| | $\displaystyle d(a,x)=\prod_{2\leq q\leq x}\sum_{n\in\{1,q\}}\frac{\mu(n)}{[\mathbb{Q}(\zeta_{q^{v_{q}(a)}n},2^{1/q^{v_{q}(a)}n}):\mathbb{Q}]}.$ | |
| --- | --- | --- |
Furthermore, the terms are easy to calculate: if $q>2$ and $v_{q}(a)\geq 1$ , they are
$$
\frac{1}{\phi(q^{v_{q}(a)})q^{v_{q}(a)}}\left(1-\frac{1}{q^{2}}\right),
$$
while for $q>2$ and $v_{q}(a)=0$ the term is $1-\frac{1}{q(q-1)}$ .
For $q=2$ , the values are (for $v_{q}(a)=0,1,2,\ldots$ )
$$
\frac{1}{2},\frac{3}{8},\frac{1}{16},\frac{3}{64},\frac{3}{256},\frac{3}{1024},\ldots
$$
Step 3
We now want to bound tail sums of $d(a,b,x)$ ; the basic idea is to give pointwise bounds for $d(a,b,x)$ and then sum those bounds.
Lemma 3. For any $a,b$ and $x$ we have
$$
d(a,b,x)\leq\frac{4}{\phi(\textup{lcm}(a,b))ab}.
$$
The proof is simple: the condition $\textup{ord}_{p,x}(2)=(p-1)/a,\textup{ord}_{p,x}(3)=(p-1)/b$ in particular implies that $p$ splits in $\mathbb{Q}(\zeta_{\textup{lcm}(a,b)},2^{1/a},3^{1/b})$ , which has degree at least $\phi(\textup{lcm}(a,b))ab/4$ ; see the previous two lemmas.
Here is a very basic tail sum bound.
Lemma 4. We have
$$
\lim_{T\to\infty}\sum_{a,b\leq T}d(a,b,x)=1.
$$
For the proof, note that
$$
\sum_{a,b:\max(a,b)>T}d(a,b,x)\leq\sum_{a,b:\max(a,b)>T}\frac{4}{\phi(\textup{lcm}(a,b))ab}\ll\sum_{a>T}\frac{1}{a\phi(a)}\sum_{1\leq b\leq T}\frac{1}{b}.
$$
The inner sum is $O(\log T)$ . For the outer sum, we have the well-known bound $\phi(a)\gg\sqrt{a}$ , and so $\sum_{a>T}1/(a\phi(a))\ll 1/\sqrt{T}$ , so the whole sum is $O(\log T/\sqrt{T})=o(1)$ .
Further note that $d(1,2,x)$ is strictly positive (roughly $12.5\%$ ). Together with the previous lemma this implies that the density of $p$ with $\textup{ord}_{p,x}(2)>\textup{ord}_{p,x}(3)$ equals
$$
\sum_{(a,b)\in\mathbb{Z}_{+}^{2}:a<b}d(a,b,x).
$$
(An intuitive but non-trivial, technical claim; we in fact haven't proven such results if $x$ is replaced with $\infty$ , and this is the reason Artin's primitive root conjecture is still open.)
Now, we want quantitative, strong bounds for
$$
\sum_{T<a<b}d(a,b,x).
$$
For this, we'll use the fact that
$$
\phi(n)\geq c_{1}\left(\frac{n}{c_{2}}\right)^{\delta},
$$
where $c_{1}=92160,c_{2}=510510,\delta=46/47$ . (See https://math.stackexchange.com/questions/301837/is-the-euler-phi-function-bounded-below.) We have
| | $\displaystyle\sum_{T<a<b}d(a,b,x)\leq 4\sum_{T<a<b}\frac{1}{\phi(\textup{lcm}(a,b))ab}<\frac{2c_{2}^{\delta}}{c_{1}}\sum_{T<a,b}\frac{1}{ab\,\textup{lcm}(a,b)^{\delta}}.$ | |
| --- | --- | --- |
Continuing, and writing $d=\gcd(a,b)$ , $a=dm$ , $b=dn$ (so that $m,n>T/d$ ):
| | $\displaystyle\frac{2c_{2}^{\delta}}{c_{1}}\sum_{T<a,b}\frac{1}{ab\,\textup{lcm}(a,b)^{\delta}}$ | $\displaystyle=\frac{2c_{2}^{\delta}}{c_{1}}\sum_{T<a,b}\frac{\gcd(a,b)^{\delta}}{(ab)^{1+\delta}}\leq\frac{2c_{2}^{\delta}}{c_{1}}\sum_{d=1}^{\infty}d^{-2-\delta}\left(\sum_{m>T/d}\frac{1}{m^{1+\delta}}\right)^{2}.$ | |
| --- | --- | --- | --- |
The inner sum is bounded by $\int_{T/d}^{\infty}1/t^{1+\delta}dt=\delta^{-1}(T/d)^{-\delta}$ (plus $1$ for $d>T$ ). And thus, the above is bounded as
$$
\frac{2c_{2}^{\delta}}{c_{1}\delta^{2}}\sum_{d=1}^{\infty}d^{-2-\delta}\left(\left(\frac{d}{T}\right)^{2\delta}+1_{d>T}\right).
$$
The sum over $d$ without the $1_{d>T}$ part can be bounded numerically for $\delta=46/47$ by $13$ , and the $1_{d>T}$ part introduces an additional (small) contribution of $T^{-1-\delta}/(1+\delta)$ , and so we arrive at
$$
\frac{2c_{2}^{\delta}}{c_{1}\delta^{2}}\left(13T^{-2\delta}+T^{-1-\delta}/(1+\delta)\right).
$$
For $T=10^{5}$ this is less than $2\cdot 10^{-8}$ .
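The totient bound used above, with $c_{1}=92160$ , $c_{2}=510510$ , $\delta=46/47$ , can also be spot-checked by brute force (our own sketch); it is tight at the primorial $510510=2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 17$ , where $\phi(510510)=92160$ .

```python
# Spot-check phi(n) >= c1 * (n / c2)**delta for all n up to 10**5.
c1, c2, delta = 92160, 510510, 46 / 47

N = 10 ** 5
phi = list(range(N + 1))  # totient sieve: multiply in (1 - 1/p) per prime p
for p in range(2, N + 1):
    if phi[p] == p:  # p is prime (untouched by smaller primes)
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p

worst = min(phi[n] / (c1 * (n / c2) ** delta) for n in range(2, N + 1))
assert worst >= 1.0
```

Within this range the ratio is closest to $1$ at the primorial $30030$ , so the bound holds with little slack there.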
Step 4
We want to bound the difference that arises from replacing a particular value of $x$ in $d(a,b,x)$ with an arbitrarily large value $x_{\infty}$ . Recall the function $g_{q}(v_{q}(a),v_{q}(b))$ from Part 1. We have $g_{q}(0,0)=1-\frac{2}{q(q-1)}+\frac{1}{q^{2}(q-1)}$ . In particular,
$$
\prod_{q>x\text{ is prime}}g_{q}(0,0)\geq\prod_{q>x\text{ is prime}}\left(1-\frac{2}{(q-1)^{2}}\right).
$$
Now, when $0\leq t\leq 1/100$ , we have $1-t\cdot 0.99\geq e^{-t}$ , so the above is bounded from below by
$$
\prod_{q>x\text{ is prime}}e^{-1.98/(q-1)^{2}}\geq\exp\left(\sum_{q>x\text{ is prime}}-1.98/(q-1)^{2}\right).
$$
For $y\geq 10^{8}$ , there are at most $1.1y/\log(y)$ primes in the interval $[y,2y)$ , and thus by dyadic decomposition
$$
\sum_{q>x\text{ is prime}}\frac{1.98}{(q-1)^{2}}\leq\sum_{y=2^{k}x,k\geq 0}\frac{1.98}{y^{2}}\frac{1.1y}{\log(y)}=\frac{1.98\cdot 1.1}{x}\sum_{k\geq 0}\frac{1}{2^{k}(k\log(2)+\log(x))}, \tag{2}
$$
which for $x\geq 10^{8}$ is numerically bounded by $1/9x$ . Thus, we arrive at the lower bound
$$
\exp(-1/9x)\geq 1-\frac{1}{9x}.
$$
Thus, for any $x_{\infty}>x>\max(a,b)$ (the last condition needed so that $v_{q}(a)=v_{q}(b)=0$ for $q\geq x$ ) with $x\geq 10^{8}$ , we have
$$
0\geq d(a,b,x_{\infty})-d(a,b,x)\geq d(a,b,x)\left(1-\frac{1}{9x}\right)-d(a,b,x)=-\frac{1}{9x}d(a,b,x).
$$
And, since $\sum_{a<b,a\leq T}d(a,b,x)\leq 1$ , we have
$$
\sum_{a<b,a\leq T}|d(a,b,x_{\infty})-d(a,b,x)|\leq\frac{1}{9x}.
$$
The code
Finally, we have posted to GitHub a C++ script for efficiently computing the answer. You can find it at https://github.com/epoch-research/artin_code/. The first 120 lines are prime sieves, gcd, fast exponentiation and other standard boilerplate. The next 80 lines are for computing various expressions arising in the explicit formulas for the density of (2), after which is a function for actually computing that density. This is done once more but one-dimensionally â see Step 2. The main function computes the answer to the problem.
### A.2 Sample problem 2 â high-medium difficulty
Problem
Construct a degree 19 polynomial $p(x)\in\mathbb{C}[x]$ such that $X:=\{p(x)=p(y)\}\subset\mathbb{P}^{1}\times\mathbb{P}^{1}$ has at least 3 (but not all linear) irreducible components over $\mathbb{C}$ . Choose $p(x)$ to be odd, monic, have real coefficients and linear coefficient -19 and calculate $p(19)$ .
Answer
1876572071974094803391179
Solution
Note that $p(x)$ defines a covering $p:\mathbb{P}_{x}^{1}\rightarrow\mathbb{P}^{1}_{z}$ of degree 19 which at $z=\infty$ has monodromy $\sigma_{\infty}$ , a 19-cycle, which without loss of generality is $(1,2,3,\ldots,19)$ .
Note that $X$ is just the fibre product of $p$ with $p$ (which in general is a singular curve) and therefore $p(x)$ also defines a finite map $\hat{p}:X\rightarrow\mathbb{P}^{1}_{z}$ of degree $19^{2}$ (if we prefer to stick with smooth schemes, we can instead use the normalisation $\hat{X}$ of $X$ , and we will freely move between the irreducible components of $X$ and the corresponding connected components of the smooth $\hat{X}$ with its covering structure).
The idea then is that any such covering $\hat{p}$ can be studied group-theoretically via its monodromy. Specifically, suppose $q:X\rightarrow\mathbb{P}_{z}^{1}$ is of degree $n$ and has branch locus $B\subset\mathbb{P}^{1}_{z}$ . Then, following the fibres around each branch value and seeing how this permutes the points of the fibres, we get a homomorphism $\bar{q}:\pi_{1}(\mathbb{P}^{1}_{z}-B,x)\rightarrow G_{q}\subset S_{n}$ (defined up to conjugation), and $X$ is irreducible exactly when $G_{q}$ is transitive. Moreover, if $X$ is reducible, then the irreducible components correspond to the $G_{q}$ -orbits on $\{1,\ldots,n\}$ .
Suppose $p$ with branch locus $B$ (including $\infty$ ) is encoded as $\bar{p}:G_{p}\subset S_{19}$ . Then we can check that $\hat{p}$ has branch locus $B$ and is encoded diagonally as $\bar{p}^{2}:G_{p}\subset S_{19}^{2}$ . So we are looking for some $G_{p}\subset S_{19}$ that has between 3 and 18 orbits on $\{1,\ldots,19\}^{2}$ . In particular $G_{p}$ is not doubly transitive on $\{1,\ldots,19\}$ .
Recall a theorem of Burnside [burnside]: for $q$ prime and a permutation group $G\subset S_{q},$ either $G$ is doubly transitive or $|G|$ divides $q(q-1).$
Applying this and Sylow's theorem to our $G_{p}$ , we see that the order 19 subgroup $\langle\sigma_{\infty}\rangle\subset G_{p}$ is normal. Now take any $\sigma_{i}\in G_{p}$ and suppose $\sigma_{i}$ fixes at least 2 points (wlog 1 and $n$ ). Then by normality, $\sigma_{i}\sigma_{\infty}\sigma_{i}^{-1}=\sigma_{\infty}^{s}$ for some $s\in\mathbb{Z}$ . Writing this out explicitly, we must have $n\equiv 1+(n-1)s\pmod{19}$ and hence $s=1$ . Therefore plugging in 1 to $\sigma_{i}\sigma_{\infty}\sigma_{i}^{-1}=\sigma_{\infty}$ gives $\sigma_{i}(2)=2$ . Continuing in this way, we conclude all non-trivial elements $\sigma_{i}$ must have at most 1 fixed point.
Applying Riemann-Hurwitz to $p$ , we have $19-1=\sum_{r}\text{Index}(\sigma_{r})$ , where $r$ enumerates the finite branch points in $B$ , $\sigma_{r}$ is the corresponding permutation over $r$ , and Index is geometrically the ramification index minus $1$ , summed over the fibre. Since each non-trivial $\sigma_{i}$ fixes at most 1 point, we have $\text{Index}(\sigma_{i})\geq(19-1)/2=9$ , with the lower bound attained precisely by $9$ disjoint transpositions. Therefore Riemann-Hurwitz implies there are at most two finite branch points. With exactly one, we would have exactly one 19-cycle, which would give rise to 19 irreducible components. So the only possibility is two finite branch points with $\text{Index}(\sigma_{i})=9$ , i.e. $\sigma_{1},\sigma_{2}$ each consisting of 9 disjoint transpositions. Explicitly, if we assume wlog that $\sigma_{1}$ contains $(1,2)$ , then the relation $\sigma_{1}\sigma_{2}=\sigma_{\infty}$ (coming from the topology of $\mathbb{P}^{1}-B$ ) implies that $\sigma_{2}$ contains $(2,19)$ , and continuing we find $\sigma_{1}=(1,2)(3,19)(4,18)\ldots$ and $\sigma_{2}=(2,19)(3,18)\ldots$ .
By the Riemann existence theorem, a polynomial with specified monodromy is essentially unique (up to affine scaling of the target and source) and is given by the degree 19 Chebyshev polynomial $T_{19}(x)$ (which can be checked to have this cycle structure). One checks that $G_{p}$ has exactly the diagonal orbit on $\{1,\ldots,19\}^{2}$ together with 9 other orbits, giving 10 irreducible components.
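The orbit count can be verified by direct computation (our own sketch). Identifying the 19 sheets with $\mathbb{Z}/19$ , the generators found above act as $\sigma_{\infty}:x\mapsto x+1$ and $\sigma_{1}:x\mapsto 3-x$ (which is $(1,2)(3,19)(4,18)\ldots$ ); they generate a dihedral group of order 38, whose diagonal action on pairs has exactly 10 orbits.

```python
from itertools import product

# Monodromy generators of T_19 acting on Z/19 (residues 0..18):
# sigma_inf: x -> x + 1 and sigma_1: x -> 3 - x (nine disjoint transpositions).
n = 19
sigma_inf = tuple((x + 1) % n for x in range(n))
sigma_1 = tuple((3 - x) % n for x in range(n))

def compose(f, g):
    return tuple(f[g[x]] for x in range(n))

# Close the generators under composition; in a finite group this recovers
# the generated subgroup.
group = {tuple(range(n)), sigma_inf, sigma_1}
while True:
    new = {compose(f, g) for f in group for g in group} - group
    if not new:
        break
    group |= new

# Count orbits of the diagonal action on pairs by flood-fill.
seen, orbits = set(), 0
for pair in product(range(n), repeat=2):
    if pair in seen:
        continue
    orbits += 1
    stack = [pair]
    while stack:
        x, y = stack.pop()
        if (x, y) in seen:
            continue
        seen.add((x, y))
        stack.extend((g[x], g[y]) for g in group)

assert len(group) == 38 and orbits == 10
```

The 10 orbits are the diagonal (difference $0$ ) plus one orbit for each class $\{\pm d\}$ , $d=1,\ldots,9$ .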
This proof therefore shows that the only possibility is a polynomial affine equivalent to $T_{19}(x)$ i.e. of the form $AT_{19}(ax+b)+B$ . Moreover, one can check that, except for the diagonal, all the remaining components are quadratic in both variables, hence of bi-degree (2,2).
Finally we impose the desired properties of $p(x)$ . Since $T_{19}(x)$ is odd, $p(x)$ being odd implies $b=0$ and $B=0$ . $p(x)$ being monic implies $A\cdot a^{19}\cdot 2^{18}=1$ , and the linear coefficient of $p(x)$ is $-19\cdot A\cdot a=-19$ . Solving gives $A=1/a$ and $a^{18}=1/2^{18}$ . Real coefficients force $a=\pm 1/2$ , and hence $p(x)=2T_{19}(x/2)$ .
So $p(x)=2T_{19}(x/2)$ , a rescaled degree 19 Chebyshev polynomial, for which $X$ has 10 irreducible components (including the diagonal). Therefore $p(19)=1876572071974094803391179$ .
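The final evaluation can be done exactly over the integers: $p_{n}(x)=2T_{n}(x/2)$ satisfies the Chebyshev-style recurrence $p_{n}=xp_{n-1}-p_{n-2}$ with $p_{0}=2$ , $p_{1}=x$ . A minimal sketch of our own:

```python
# p(x) = 2*T_19(x/2) satisfies p_n(x) = x * p_{n-1}(x) - p_{n-2}(x)
# with p_0 = 2 and p_1 = x, so p(19) is an exact integer computation.
def p19_at(x: int) -> int:
    prev, cur = 2, x  # p_0(x), p_1(x)
    for _ in range(18):
        prev, cur = cur, x * cur - prev
    return cur

value = p19_at(19)
assert value == 1876572071974094803391179
```

As a sanity check, $T_{n}(1)=1$ for all $n$ , so `p19_at(2)` returns $2$ .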
### A.3 Sample problem 3 â medium difficulty
Problem
Let $a_{n}$ for $n\in\mathbb{Z}$ be the sequence of integers satisfying the recurrence formula
$$
a_{n}=198130309625a_{n-1}+354973292077a_{n-2}-427761277677a_{n-3}+370639957a_{n-4}
$$
with initial conditions $a_{i}=i$ for $0\leq i\leq 3$ . Find the smallest prime $p\equiv 4\bmod 7$ for which the function $\mathbb{Z}\rightarrow\mathbb{Z}$ given by $n\mapsto a_{n}$ can be extended to a continuous function on $\mathbb{Z}_{p}$ .
Answer
9811
Solution
The idea of the solution comes from the Skolem-Mahler-Lech theorem. Let $\alpha_{1},\dots,\alpha_{4}$ be the roots of the characteristic polynomial of the recurrence, that is,
$$
x^{4}-198130309625x^{3}-354973292077x^{2}+427761277677x-370639957.
$$
Then we have the explicit formula
$$
a_{n}=c_{1}\alpha_{1}^{n}+c_{2}\alpha_{2}^{n}+c_{3}\alpha_{3}^{n}+c_{4}\alpha_{4}^{n}
$$
for all $n\in\mathbb{Z}$ , where the coefficients $c_{i}$ are determined by the initial conditions. We can think of the roots $\alpha_{i}$ as elements of some extension $L$ of $\mathbb{Q}_{p}$ of degree $\leq 4!$ . Let $\pi$ be a uniformiser of $L$ , and denote by $v_{L}$ the $\pi$ -adic valuation normalised so that $v_{L}(p)=1$ (in particular, $v_{L}(\pi)=1/e$ , where $e$ is the ramification index of $L/\mathbb{Q}_{p}$ ).
Note that the constant term $370639957$ is prime, so all the roots $\alpha_{i}$ have $v_{L}(\alpha_{i})=0$ for any prime $p<370639957$ . As we shall see, the prime we are looking for is indeed $<370639957$ .
Let us prove that whenever all the roots satisfy $v_{L}(\alpha_{i}-1)>0$ , then there exists the required extension of $n\mapsto a(n)$ to $\mathbb{Z}_{p}$ . If this is the case, then $\alpha_{i}\equiv 1\bmod\pi$ , so in particular
$$
-370639957=\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}\equiv 1\bmod p.
$$
Hence, $p$ must divide $370639957+1=370639958$ . The prime factors of this number are $2$ , $13$ , $1453$ and $9811$ , of which only the last two are $\equiv 4\bmod 7$ . Similarly, we have
$$
198130309625\equiv\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}\equiv 4\bmod p
$$
so $198130309625-4=198130309621$ must be a multiple of $p$ . Its prime factors are $37$ , $673$ , $811$ and $9811$ , so the only prime in both lists that is $\equiv 4\bmod 7$ is $p=9811$ .
For this prime the characteristic polynomial reduces mod $p$ to
$$
x^{4}-4x^{3}+6x^{2}-4x+1=(x-1)^{4}
$$
so indeed the four roots are $\equiv 1\bmod p$ .
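The factorizations and the mod-$p$ reduction used above are easy to check numerically. The sketch below uses `prime_factors`, a naive trial-division helper we introduce for illustration:

```python
def prime_factors(n):
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(370639957 + 1))     # [2, 13, 1453, 9811]
print(prime_factors(198130309625 - 4))  # [37, 673, 811, 9811]

# Reduce the characteristic polynomial mod p = 9811:
# x^4 - 198130309625*x^3 - 354973292077*x^2 + 427761277677*x - 370639957
p = 9811
coeffs = [1, -198130309625, -354973292077, 427761277677, -370639957]
print([c % p for c in coeffs])          # [1, 9807, 6, 9807, 1] = (x-1)^4 mod p
```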
Let us now show how to construct the extension of $n\mapsto a_{n}$ . By the formula above, we need to extend $\alpha^{n}$ to a function on $\mathbb{Z}_{p}$ , and as in real analysis this is accomplished via the exponential and the logarithm. The main problem is that $\log x$ converges only for $v_{p}(x-1)>0$ in $\mathbb{Z}_{p}$ . In our case this holds, so we can simply write
$$
a_{n}=\sum_{i}c_{i}\alpha_{i}^{n}=\sum_{i}c_{i}\exp(n\log\alpha_{i})
$$
and this clearly extends to the continuous function
$$
f(x)=\sum_{i}c_{i}\exp(x\log\alpha_{i}).
$$
(Note that $f$ is unique since $\mathbb{Z}$ is dense in $\mathbb{Z}_{p}$ ).
For the other primes $p$ , for which the roots are not all $\equiv 1\bmod\pi$ , the extension to $\mathbb{Z}_{p}$ is not possible, for the following reason. For simplicity, let us assume that the roots $\alpha_{i}$ all lie in $\mathbb{Z}_{p}$ (this happens exactly when the characteristic polynomial splits completely modulo $p$ ); the general case is handled similarly, with some care.
Since the projection map $\mathbb{Z}_{p}^{\times}\to\mathbb{F}_{p}^{\times}$ induces an isomorphism between the group of $(p-1)$ th roots of unity and $\mathbb{F}_{p}^{\times}$ , for every $\alpha\in\mathbb{Z}_{p}^{\times}$ we can find a $(p-1)$ th root of unity $\zeta_{\alpha}$ such that $\alpha\equiv\zeta_{\alpha}\bmod p$ . Then $v_{p}(\alpha/\zeta_{\alpha}-1)>0$ , so $\log(\alpha/\zeta_{\alpha})$ exists. Hence,
$$
a_{n}=\sum_{i}c_{i}\alpha_{i}^{n}=\sum_{i}c_{i}\zeta_{\alpha_{i}}^{n}(\alpha_{i}/\zeta_{\alpha_{i}})^{n}=\sum_{i}c_{i}\zeta_{\alpha_{i}}^{n}\exp(n\log(\alpha_{i}/\zeta_{\alpha_{i}})),
$$
and since $\zeta_{\alpha}^{p-1}=1$ , we have
$$
a_{n}=\sum_{i}c_{i}\zeta_{\alpha_{i}}^{n\bmod{p-1}}\exp(n\log(\alpha_{i}/\zeta_{\alpha_{i}})).
$$
Thus, for each $0\leq k\leq p-2$ , if we define the function
$$
a^{(k)}(x)=\sum_{i}c_{i}\zeta_{\alpha_{i}}^{k}\exp(x\log(\alpha_{i}/\zeta_{\alpha_{i}}))
$$
then we have
$$
a^{(k)}(n)=a_{n}\qquad\text{if $n\equiv k\bmod{p-1}$}.
$$
Thus the restriction of $n\mapsto a_{n}$ to the set $S_{k}=\{n\equiv k\bmod{p-1}\}$ extends to its closure, which is all of $\mathbb{Z}_{p}$ . But the functions $a^{(k)}(x)$ are pairwise different, so these extensions are incompatible, and no single continuous extension of $n\mapsto a_{n}$ exists.
### A.4 Sample problem 4 – medium-low difficulty
Problem
Let $M_{1000}^{4}$ be the set of $4$ -tuples of invertible $1000\times 1000$ matrices with coefficients in $\mathbb{C}$ . Let $S\subset M_{1000}^{4}$ be the subset of all tuples $(A_{1},A_{2},A_{3},A_{4})$ satisfying the conditions:
$$
\begin{aligned}
A_{i}^{2}&=I&&\text{for all }1\leq i\leq 4,\\
(A_{i}A_{j})^{3}&=I&&\text{if }3j-i\in 5\mathbb{Z}_{>0},\\
(A_{i}A_{j})^{2}&=I&&\text{for all other pairs }i\neq j,
\end{aligned}
$$
where $5\mathbb{Z}_{>0}$ refers to the set of positive multiples of $5$ , i.e., $5\mathbb{Z}_{>0}=\{5,10,15,\dots\}$ .
The group $G=GL(1000)$ of invertible complex $1000\times 1000$ matrices acts on $S$ by the formula:
$$
B\cdot(A_{1},A_{2},A_{3},A_{4})=(BA_{1}B^{-1},BA_{2}B^{-1},BA_{3}B^{-1},BA_{4}B^{-1}).
$$
Find the number of orbits of this action, i.e., find $|S/G|$ .
Answer
625243878951
Solution
Let $V$ be a $1000$ dimensional complex vector space, with some chosen basis. Then using the fact that the $A_{i}$ satisfy precisely the Coxeter relations for the symmetric group $S_{5}$ , we deduce that the data of $A_{1},\dots,A_{4}$ is just a representation of $S_{5}$ in $V$ . More precisely, the set of $(i,j)\in\{1,2,3,4\}^{2}$ such that $3j-i\in 5\mathbb{Z}_{>0}$ is precisely $\{(1,2),(2,4),(4,3)\}$ . This shows that the "braid relation" only occurs for these three pairs, and all other pairs just give commutation relations.
Declaring that the transpositions $(1\ 2),(2\ 3),(3\ 4),(4\ 5)$ act on $V$ as multiplication respectively by $A_{1},A_{2},A_{4}$ and $A_{3}$ gives a representation of $S_{5}$ on $V$ . The action of $GL(1000)$ just gives the matrices of the same representation but with respect to different bases. Therefore, the number of orbits of this action is exactly the number of $1000$ dimensional representations of $S_{5}$ up to isomorphism.
From the character table of $S_{5},$ we see that the irreducible representations of $S_{5}$ have dimensions $1,1,4,4,5,5,6$ . Thus the number we are looking for is the number of ways to write $1000$ as a sum of $1$ s, $4$ s, $5$ s, and $6$ s, where the parts $1$ , $4$ , and $5$ each come in two distinguishable kinds. This is precisely the coefficient of $t^{1000}$ in the power series
$$
(1+t+t^{2}+\cdots)^{2}(1+t^{4}+t^{8}+\cdots)^{2}(1+t^{5}+t^{10}+\cdots)^{2}(1+t^{6}+t^{12}+\cdots),
$$
which is equal to the power series
$$
\frac{1}{(1-t)^{2}(1-t^{4})^{2}(1-t^{5})^{2}(1-t^{6})}.
$$
The above coefficient can be computed to be 625243878951.
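The coefficient can be extracted with a short dynamic program: multiplying a truncated power series by $1/(1-t^{d})$ is the same prefix-sum update used in unbounded-knapsack counting, applied once per factor. A sketch:

```python
# Coefficient of t^1000 in 1/((1-t)^2 (1-t^4)^2 (1-t^5)^2 (1-t^6)):
# each factor 1/(1-t^d) is applied as an in-place prefix-sum update.
N = 1000
ways = [0] * (N + 1)
ways[0] = 1
for d in [1, 1, 4, 4, 5, 5, 6]:
    for n in range(d, N + 1):
        ways[n] += ways[n - d]
print(ways[N])  # 625243878951
```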
### A.5 Sample problem 5 – low difficulty
Problem
How many nonzero points are there on
$$
x^{3}y+y^{3}z+z^{3}x=0
$$
over $\mathbb{F}_{5^{18}}$ up to scaling?
Answer
3814708984376
Solution
Using the code presented in Figure 8 to count the solutions for $\mathbb{F}_{5},\mathbb{F}_{25},\mathbb{F}_{125}$ gives us $6,26,126$ solutions respectively. You can download the package finite_field_implementation at https://github.com/epoch-research/finite_field_implementation/
```python
import finite_field_implementation as ffi
import numpy as np

def countsolutions(p, k):  # counts projective solutions to x^3*y + y^3*z + z^3*x = 0
    poly = ffi.find_irreducible_poly(p, k)
    zero = ffi.FiniteFieldElement([0, 0, 0], p, k, poly)
    ans = 2  # (1:0:0), (0:1:0) are the only solutions at infinity (z = 0)
    # Affine chart z = 1: count solutions of x^3*y + y^3 + x = 0.
    for coeffs in np.ndindex((p,) * k):
        x = ffi.FiniteFieldElement(coeffs, p, k, poly)
        for coeffs2 in np.ndindex((p,) * k):
            y = ffi.FiniteFieldElement(coeffs2, p, k, poly)
            if x * x * x * y + y * y * y + x == zero:
                ans += 1
    return ans

print(countsolutions(5, 1))
print(countsolutions(5, 2))
print(countsolutions(5, 3))
```
Figure 8: Python implementation of a projective solution counter, used to identify the pattern in solution counts over finite fields $\mathbb{F}_{5}$ , $\mathbb{F}_{25}$ , and $\mathbb{F}_{125}.$
We can also verify this curve is smooth at $p=5$ and has genus $3$ .
The Weil conjectures [maxhasseweil] give us
$$
|C[\mathbb{F}_{p^{n}}]|=p^{n}-\sum_{i=1}^{2g}\alpha_{i}^{n}+1
$$
for some complex numbers $\alpha_{i}$ of absolute value $\sqrt{p}$ , which come in conjugate pairs [calebji].
We notice
$$
\alpha_{j}=e^{(2j+1)i\pi/6}\sqrt{5}
$$
satisfies this equation.
To show that this is unique, let $\alpha_{i}$ be roots of $z^{2}-\sqrt{5}\beta_{i}z+5$ , $i=1,2,3$ . Then $\alpha_{i}^{2}$ are roots of $z^{2}-5T_{2}(\beta_{i})z+25$ and $\alpha_{i}^{3}$ are roots of $z^{2}-\sqrt{125}T_{3}(\beta_{i})z+125$ , where $T_{2}$ and $T_{3}$ are the 2nd and 3rd Chebyshev polynomials. So we know
$$
\beta_{1}+\beta_{2}+\beta_{3}=T_{2}(\beta_{1})+T_{2}(\beta_{2})+T_{2}(\beta_{3})=T_{3}(\beta_{1})+T_{3}(\beta_{2})+T_{3}(\beta_{3})=0.
$$
So, knowing the values of $\beta_{1}+\beta_{2}+\beta_{3}$ , $\beta_{1}^{2}+\beta_{2}^{2}+\beta_{3}^{2}$ and $\beta_{1}^{3}+\beta_{2}^{3}+\beta_{3}^{3}$ determines the $\beta_{i}$ (and hence the $\alpha_{i}$ ) uniquely.
Setting $n=18$ gives us the result:
$$
|C[\mathbb{F}_{5^{18}}]|=5^{18}+6\cdot 5^{9}+1.
$$
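As a numerical check (assuming the indexing $j=0,\dots,5$ for the six eigenvalues, which only permutes them), the eigenvalue formula reproduces the brute-force counts $6,26,126$ for $n=1,2,3$ and yields the final answer for $n=18$ :

```python
import cmath

p = 5
# Frobenius eigenvalues alpha_j = sqrt(5) * exp((2j+1)*i*pi/6), j = 0..5.
alphas = [cmath.sqrt(p) * cmath.exp(1j * cmath.pi * (2 * j + 1) / 6) for j in range(6)]

def point_count(n):
    # |C(F_{p^n})| = p^n - sum_j alpha_j^n + 1; the sum is real, so round
    # away the tiny floating-point imaginary error.
    return round(p ** n - sum(a ** n for a in alphas).real + 1)

for n in (1, 2, 3, 18):
    print(n, point_count(n))  # 6, 26, 126, 3814708984376
```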
## Appendix B Evaluation
### B.1 Prompts
To guide the models effectively in solving the complex mathematical problems, we wrote prompts that encourage mathematical reasoning, writing code for experiments, and adherence to formatting requirements.
Initial Prompt
The initial prompt introduces the problem and provides detailed instructions. It explains how the model can submit Python code enclosed in triple backticks to run experiments, and how to submit a final answer: the submitted code must contain the exact comment # This is the final answer and dump the answer with the pickle module to a file named final_answer.p.
Continuation Prompts
After each interaction, if the model has not yet arrived at a correct solution, we provide feedback based on the code execution results. The continuation prompt first includes any error messages, standard output, or indications of timeouts from the executed code. It then urges the model to keep working and try new approaches if stuck, and reminds it of the guidelines for submitting the final answer.
Final Prompt
If the model has not submitted a correct answer within the token limit, we provide a final prompt that informs the model that the answer is needed now and instructs it to submit its best attempt. This final prompt reiterates the requirements for the final answer code block.
Initial prompt
I am presenting you with a mathematical problem that you need to solve. This problem does not require high performance computing or parallel processing to solve. Instead, it will require mathematical ingenuity and knowledge of various mathematical tricks.

Writing Python code

You have the ability to write Python code that we will execute through a command line to perform experiments. This is an ability you should take advantage of to solve this problem. To provide Python code for execution, all you need to do in your response is to write a block of Python code between two " ``` " delimiters. We will execute all such blocks of code and provide you with the results. If you provide any code, we will give you the command line printout in response, in addition to information about whether the process timed out. Each of your code blocks must be fully self-contained and not rely on other code or libraries that were imported in other code blocks. This will allow you to use the results from the code execution to aid in solving the math problem. Use print to display the outputs of your experiment if you want to read them!

Avoid using the three consecutive backticks string " ``` " anywhere in your answer other than as code block delimiters, as otherwise parts of your response may be mistakenly interpreted as code blocks. Remember that you need to wait for a response from us about what the output of your code blocks were: you do not execute the code yourself. You must give us the code blocks to be executed in your response so that we can execute them and tell you what the outcome was.

Please provide code for execution, even if you think it might require more computing resources than available. Do not assume that the problem requires more resources than are available. It does not. Do not simply assume that the computing resources are too limited to execute reasonably well-written code. They are not.
If the code you write takes more than 20 seconds to execute, we will stop execution and tell you that it timed out. (For problems whose authors expect a solution to take more than 20 seconds to run, we removed this limit.) The problem is a very difficult mathematical problem, and you must approach it accordingly. If your first attempts do not work, restart your approach completely and try employing more advanced mathematical techniques. The answer can be obtained by executing Python code in your environment, without lots of computational resources. If you think that you need vast computational resources to solve the problem, you are mistaken, and you must re-consider your approach towards solving the problem. Continue working on this problem until you are confident that you have found the correct answer. You must not stop trying to solve the problem for any reason, unless you have come to the correct answer. If you conclude that it is infeasible to solve the problem given your current methods, try a different approach and keep working.

Submitting your final answer

Once you're confident you have the correct answer, present it as a Python script, enclosed between " ``` " delimiters at the start and end as before, that satisfies the following conditions:

- The script must contain the comment " # This is the final answer " exactly as written somewhere in it. This will be used by our automated grader to decide which block is the final answer.
- The script must dump your final answer using the " pickle " module to a file named " final_answer.p " in the same directory as the script.
- This script must also be fully self-contained, and must execute correctly in less than 20 seconds without relying on other functions or imports that you have previously written.

This formatting is essential because your answer will be extracted from our conversation transcript by a script that recognizes only the code block that has the string " # This is the final answer " in it. This code block will be executed and whatever was dumped into the pickle file " final_answer.p " will be read by our grader and be taken as your final answer. You don't have to submit a final answer in your first response! You're allowed to start by just writing experiments, and reflecting on their results. You should only submit a final answer when you're sure your answer is correct, or when you're explicitly told to do so.

Make sure that your final answer is of the right type signature. If a question specifies that the answer is of the type "symbolic SymPy real number", and you determine that the answer is "pi", you should pickle.dump the object " sympy.pi ", not some other object such as " numpy.pi " or " 3.141592 ". Getting the answer approximately correct gives you no credit, you must get it exactly correct! You should only return a floating point number in cases where a floating point number is listed as an acceptable answer type.

Performing mathematical operations

It's crucial to remember that as a language model, you are not inherently equipped to perform mathematical operations reliably, including arithmetic and algebra. Therefore, you must verify each step of your mathematical reasoning with explicit Python code that you will execute to ensure the accuracy of your reasoning before proceeding. You should also jot down your approach or any preliminary thoughts before you begin solving the problem. Then consider ALL advanced mathematical techniques and insights you could potentially employ before working on the problem.
Here is the mathematical problem you need to solve:
Continuation prompt
Keep working. If you're stuck, make sure to try new approaches. Don't be afraid to execute your code. Tell me if you feel confident about your answer and approach, and give me the solution if so. Ensure that any code block containing the comment " # This is the final answer " is indeed the final, complete solution that correctly computes the required sum and executes without modifications. Do not include the comment " # This is the final answer " in any code block unless it meets these criteria and can be run as-is to produce the accurate result and pickle dump it to a file named " final_answer.p ". Also, remember the following:

- Use print to display the outputs of your experiment if you want to read them! Don't forget to do this, since for each code block we'll only show you the stdout and stderr.
- Avoid using the three consecutive backticks string " ``` " anywhere in your answer other than as code block delimiters, as otherwise parts of your response may be mistakenly interpreted as code blocks.
- If the final answer is specified as a SymPy real or rational number, be careful not to mix different libraries when expressing it: try to use pure SymPy expressions. For example, if the answer to a question is 1/sqrt(2), your final answer should be a pickle.dump of 1/sympy.sqrt(2), not of 1/math.sqrt(2). Only return a floating point number if the specified answer type includes floating point numbers as a possibility.
Final prompt
I need the answer now. Please submit your best guess at the solution, even if you think you aren't finished yet. Ensure that the solution is in the form of a code block that has the comment # This is the final answer somewhere in it, correctly pickle dumps the answer to a file named final_answer.p, and executes without modifications.
### B.2 Analysis of problems solved at least once
We selected four problems that could be solved by at least one model, and conducted repeated trials with five runs per model per problem. We report the success rate over the five trials of each model on each problem in Table 2 (naming problems from 1 to 4).
| Problem | Grok 2 Beta | o1-preview | o1-mini | GPT-4 | Gemini 1.5 Pro | Claude 3.5 Sonnet | MSC2020 Classification |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 60% | 40% | 20% | 0% | 0% | 20% | 60 Probability theory; 41 Approximations and expansions |
| 2 | 0% | 100% | 20% | 20% | 0% | 80% | 55 Algebraic topology; 57 Manifolds and cell complexes |
| 3 | 0% | 80% | 0% | 0% | 60% | 0% | 20 Group theory; 12 Field theory |
| 4 | 0% | 0% | 20% | 0% | 0% | 0% | 14 Algebraic geometry; 05 Combinatorics |
Table 2: Model success rates out of five runs on the four test problems.
### B.3 Percentage of unsolved problems across benchmarks
Figure 2 shows the fraction of problems not solved by the top-performing model on prominent mathematics benchmarks: the College Mathematics subset of MMLU ([hendrycks2021measuringmassivemultitasklanguage]), GSM8K ([cobbe2021training]), MATH ([hendrycks2021measuring]), AIME ([o1-learningtoreason]), Omni-MATH ([gao2024omni]), MathVista ([lu2024mathvistaevaluatingmathematicalreasoning]), and FrontierMath.
To make this figure, we took one minus the top reported accuracy for each benchmark under the pass@1 setting. For MMLU, we used the o1 accuracy of 98.1% ([o1-learningtoreason]). For GSM8K, we used the Claude 3.5 Sonnet pass@1 accuracy of 96.4% ([sonnet-3-5-addendum]). For MATH, we used o1's reported pass@1 accuracy of 94.8% ([o1-learningtoreason]). For AIME, we used o1's reported pass@1 accuracy of 74% ([o1-learningtoreason]). For Omni-MATH, we took o1-mini's accuracy of 60.54% as reported on the benchmark leaderboard [gao2024omni]. For MathVista, we used o1's pass@1 accuracy of 73.9% ([o1-learningtoreason]). Finally, for FrontierMath, we relied on our own evaluation, which shows that more than 98% of the problems remain unsolved by the best-performing models.
### B.4 Pass@8 accuracy
Figure 6 illustrates the mean accuracy across 8 different evaluations, where no model achieves higher than 2%. In Figure 9 we show the pass@8 accuracy of different models on FrontierMath.
![Pass@8 accuracy of different models on FrontierMath](visuals/model_comparison_pass_at_8.png)
Figure 9: Pass@8 accuracy of different models on the FrontierMath dataset. The best model (o1-preview) achieves around 6% accuracy, compared to around 2% for Grok 2 Beta.
## Appendix C Difficulty ratings
Our approach to difficulty rating involves a combination of expert self-assessment and rigorous peer review to ensure the reliability and consistency of the ratings. Each problem author provided an initial difficulty assessment along three key dimensions:
1. Background: This rating ranges from 1 to 5 and indicates the level of mathematical background required to approach the problem. A rating of 1 corresponds to high school level, 2 to early undergraduate level, 3 to late undergraduate level, 4 to graduate level, and 5 to research level.
2. Creativity: Estimated as the number of hours an expert would need to identify the key ideas for solving the problem. This measure has no upper limit.
3. Execution: Similarly estimated as the number of hours required to compute the final answer once the key ideas are found, also with no upper limit.
To verify and refine the initial difficulty assessments, each problem underwent a peer-review process. Reviewers, who are also mathematicians, examined the problems along with the provided solutions and difficulty ratings. They assessed the correctness of the solutions, the clarity of the problem statements, and the accuracy of the initial difficulty ratings. Any discrepancies between the authorsâ ratings and the reviewersâ assessments were discussed, and adjustments were made as needed to ensure that the difficulty ratings accurately reflected the problemâs demands in terms of background knowledge, creativity, and execution.
This peer-review process not only validated the difficulty ratings but also enhanced the overall quality of the problem set by ensuring correctness and clarity.
Our initial analyses, conducted on substantially easier problems that are not in this benchmark, have shown that problems with higher difficulty ratings are significantly less likely to be solved by advanced AI models like GPT-4o, supporting the validity of our difficulty assessment system.
## Appendix D Metadata collected
This is the metadata we collect on each problem. Writers put in their own suggestions for this metadata when submitting a problem, but this gets edited by the reviewers to ensure consistency of difficulty ratings and of the relative granularity of subject vs. technique classification.
| Metadata tag | Description |
| --- | --- |
| Background rating | A difficulty rating ranging from 1 to 5 quantifying how much background knowledge is required to solve the problem. |
| 1: High school level 2: Early undergraduate level 3: Late undergraduate level 4: Graduate level 5: Research level | |
| Creativity rating | This rating is based on how long we think the average human expert with the requisite background knowledge would take to find the key ideas going into the solution of a problem. |
| Execution rating | Measures the amount of attention to detail and precise reasoning required to solve the problem. |
| Subjects | A list of broad subjects that the question fits into: "analytic number theory", "representation theory", "differential geometry", etc. |
| Note that a question can fit into more than one subject. | |
| Techniques | A list of techniques, theorems, results, etc. that can be used to solve the problem. The list doesn't need to be exhaustive and only needs to contain the "most prominent" items that occur to the question author. |
| Example: "generating functions", "double counting", "Vieta's theorem", etc. | |
| Is programming required? | This is either "yes" or "no", depending on whether a human expert would require access to a programming environment in order to find the answer to the question. |
Table 3: Table of metadata tags and their descriptions