
Jared Duker Lichtman
Number theorist, Szegő Assistant Professor of Mathematics at @Stanford
I can't wait to see what autoformalization brings in 2026.

Math, Inc. · Jan. 17, 2026
🚨 Math Inc’s agent, Gauss, just autoformalized the proof of the Riemann Hypothesis for curves

Very excited to work with Terry on formalizing number theory.

Math, Inc. · Jan. 8, 2026
🎀 Terence Tao is partnering with Math, Inc. 🎀
as the inaugural Veritas Fellow — to formalize estimates in number theory.
In analytic number theory, the literature contains a large web of explicit estimates. But that web is not immediately interoperable. In practice, results come in three layers:
Primary estimates: These are foundational inputs such as zero-free regions for the Riemann zeta function. They often depend on substantial computation and careful numerical optimization.
Secondary estimates: Many papers take a primary input (e.g., a zero-free region) and convert it into reusable consequences, such as counting primes in short intervals. These become core building blocks used throughout the subject.
Tertiary estimates: Further work then applies those secondary building blocks to frontier number-theoretic problems, e.g. representing integers as sums of three primes.
The difficulty is that these layers do not update cleanly over time. A tertiary paper may rely on the best primary estimate available at the time of writing. Years later, improved computations refine that primary input, but the improvement is rarely propagated systematically through the secondary and tertiary chain. As a result, the "same theorem with updated constants" is often unknown.
The goal is to formalize key papers across these layers and then abstract them so their dependencies become explicit, composable, and machine-checkable. The long-term vision is to create a living network of implications: when a primary estimate improves, every downstream implication is automatically upgraded. This will transform the mathematical literature into modular software.
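The propagation idea can be illustrated with a toy sketch. The class name, estimate names, and formulas below are all invented for illustration; real formalized estimates would live in a proof assistant, not a Python script. The point is only the structure: each derived constant is recomputed from its inputs, so improving a primary constant automatically upgrades everything downstream.

```python
# Toy sketch of a "living network of implications" (all names and
# formulas are hypothetical, chosen only to show the dependency flow).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Estimate:
    name: str
    compute: Callable[..., float]            # derives this constant from its inputs
    inputs: list["Estimate"] = field(default_factory=list)

    def value(self) -> float:
        # Recompute from the current values of all upstream estimates,
        # so improvements propagate without manual re-derivation.
        return self.compute(*(e.value() for e in self.inputs))

# Primary layer: a made-up zero-free-region constant.
primary = Estimate("zero_free_region_constant", lambda: 5.57)

# Secondary layer: a prime-counting constant derived from the primary.
secondary = Estimate("short_interval_primes",
                     lambda c: 2.0 / c, [primary])

# Tertiary layer: a frontier bound derived from the secondary.
tertiary = Estimate("ternary_goldbach_bound",
                    lambda s: 100.0 * s, [secondary])

before = tertiary.value()

# An improved computation sharpens the primary constant; the tertiary
# bound upgrades automatically on the next query.
primary.compute = lambda: 6.0
after = tertiary.value()

print(before, "->", after)
```

In a real implication network the `compute` step would be a machine-checked proof term rather than a lambda, but the dependency bookkeeping is the same: explicit inputs, explicit derivations, automatic downstream updates.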
Number theory is a strong test case because its estimates have a relatively clear structure and a shared set of standard inputs and outputs. In many other areas, such as PDEs, researchers constantly spend effort on modification: adapting lemmas and hypotheses, translating between incompatible frameworks, "fitting square pegs into round holes." A composable, machine-verified implication network directly targets this friction.
The same infrastructure is poised to scale to other fields and enable crowdsourced, large-scale projects that are currently hard to coordinate. A classic example is the classification of finite simple groups: a decades-long effort distributed across many contributors, with inevitable complexity around bookkeeping, integration, and confidence in completeness.
With modern tooling, we envision tackling moonshots of comparable scope: many contributors handling diverse cases, and automated systems gluing the pieces together. The field becomes a live progress dashboard that records what is proved, what remains, and exactly which dependencies each component requires.
This opens up the possibility of a much faster-paced and more engaging way to do mathematics.
Watch Tao's outline on YouTube:
