Lean-auto is an interface between Lean and automated theorem provers. It has so far been developed and maintained primarily by Yicheng Qian (GitHub: PratherConid). It is currently in active development, and pull requests/issues are welcome. For more information, feel free to reach out to Yicheng Qian on Lean Zulip.
Lean-auto is based on a monomorphization procedure from dependent type theory to higher-order logic and a deep embedding of higher-order logic into dependent type theory. It is capable of handling dependently-typed and/or universe-polymorphic input terms. Currently, proof reconstruction can be handled by Duper, a higher-order superposition prover written in Lean.
Although Lean-auto is still under development, it is already able to solve nontrivial problems. For example, the first part of the "snake lemma" in category theory can be solved by a direct invocation of `auto`, and the second part can also be partly automated.
Type "auto 👍" to see whether auto is set up.
Usage
`auto [<term>,*] u[<ident>,*] d[<ident>,*]`
- `u[<ident>,*]`: Unfold identifiers
- `d[<ident>,*]`: Add definitional equalities related to identifiers
- Currently, `auto` supports:
  - SMT solver invocation: `set_option auto.smt true`, but without proof reconstruction. Make sure that SMT solvers are installed and that `auto.smt.solver.name` is correctly set.
  - TPTP solver invocation: `set_option auto.tptp true`, but without proof reconstruction. Make sure that TPTP solvers (currently only zipperposition is supported) are installed and that `auto.tptp.solver.name` and `auto.tptp.zeport.path` are correctly set.
  - Proof search by a native prover. To enable it, use `set_option auto.native true`, and use `attribute [rebind Auto.Native.solverFunc] <solve_interface>` to bind lean-auto to the interface of the solver, which should be a Lean constant of type `Array Lemma → MetaM Expr` (see the sketches after this list).
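Below is a minimal sketch of how these pieces fit together. The goal, the definition `double`, and the choice of the SMT backend are illustrative assumptions, not part of lean-auto:

```lean
import Auto

-- Some backend must be enabled for `auto` to close goals; SMT is used here as an example
-- (this also requires an SMT solver on your PATH and `auto.smt.solver.name` set appropriately).
set_option auto.smt true

def double (n : Nat) : Nat := n + n

-- `[h]` passes a hypothesis as a lemma; `u[double]` asks auto to unfold `double`.
example (n m : Nat) (h : n = m) : double n = double m := by
  auto [h] u[double]
```

For the native-prover route, the constant bound via `rebind` must have the type stated above. A skeletal, hypothetical binding could look like this (a real implementation would call a Lean-native prover such as Duper):

```lean
import Auto
open Lean Meta

/-- Hypothetical placeholder: a real implementation would derive the goal from
    the given lemmas and return the resulting proof term. -/
def mySolverInterface (_lemmas : Array Auto.Lemma) : MetaM Expr :=
  throwError "native solver not implemented"

attribute [rebind Auto.Native.solverFunc] mySolverInterface
set_option auto.native true
```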
Installing Lean-auto
- `z3`: version >= 4.12.2. Lower versions may not be able to deal with smt-lib 2.6 string escape sequences.
- `cvc5`
- `zipperposition`: portfolio mode
Utilities
- Command `#getExprAndApply [ <term> | <ident> ]`: Defined in `ExprExtra.lean`. This command first elaborates the `<term>` into a Lean `Expr`, then applies the function `<ident>` to that `Expr`. The constant `<ident>` must already be declared and be of type `Expr → TermElabM Unit` (see the sketch after this list).
- Command `#genMonadState <term>, #genMonadContext <term>`: Defined in `MonadUtils.lean`. Refer to the comment at the beginning of `MonadUtils.lean`.
- Command `#fromMetaTactic [<ident>]`: Calls `Tactic.liftMetaTactic` on `<ident>`. The constant `<ident>` must already be declared and be of type `MVarId → MetaM (List MVarId)`.
- Lexical analyzer generator: `Parser/LeanLex.lean`. The frontend is not yet implemented. The backend can be found in `NDFA.lean`.
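As a small, hypothetical illustration of the `#getExprAndApply` contract (the name `printExpr` is ours), any declared constant of type `Expr → TermElabM Unit` can serve as the consumer:

```lean
import Auto
open Lean Elab Term

/-- A toy consumer for `#getExprAndApply`: just print the elaborated expression. -/
def printExpr (e : Expr) : TermElabM Unit :=
  logInfo m!"elaborated: {e}"

#getExprAndApply [ (1 + 1 : Nat) | printExpr ]
```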
Monomorphization Strategy
- Let `H : α` be an assumption. We require that:
  - If the type `β` of any subterm of `α` depends on a bound variable `x` inside `α`, and `β` is not of type `Prop`, then `x` must be instantiated. Examples: `Monomorphization`, section `InstExamples` (see also the sketch below this list).
  - If any binder `x` of `α` has binder info `instImplicit`, then the binder `x` must be instantiated via typeclass inference.
- TODO
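As an illustration of the first requirement (a hypothetical example that assumes one of the prover backends described above is configured), a lemma that quantifies over a type, such as `List.length_append`, only becomes usable by the backend after its type argument has been instantiated, here at `Nat`:

```lean
import Auto

set_option auto.smt true  -- or any other configured backend

-- `List.length_append : (as ++ bs).length = as.length + bs.length` quantifies over the
-- element type; monomorphization instantiates that type argument with `Nat` for this goal.
example (xs ys : List Nat) : (xs ++ ys).length = xs.length + ys.length := by
  auto [List.length_append]
```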
Translation Workflow (Tentative)
- CIC → COC: Collecting assumptions from the local context and user-provided facts
  - We reduce `let` binders and unfold projections when we collect assumptions. So, in the following discussion, we'll assume that the expression contains no `let` binders and no `proj`s.
  - We also β-reduce user-provided facts, so that there are no β-redexes like `(fun x => t₁) t₂`.
- COC → COC: Collecting constructors and recursors for inductive types (effectively, not directly)
  - e.g. collecting equational theorems for `match` constructs
  - e.g. collecting constructors for inductively defined propositions
- COC → COC(λ^c.u.): Monomorphization
  - Monomorphize all (dependently typed / universe-polymorphic) facts to higher-order, universe-monomorphic facts.
  - "c.u." stands for "constant universe level".
  - Note that at this stage, all the facts we've obtained are still valid COC expressions and have convenient CIC proofs from the assumptions.
- COC(λ^c.u.) → COC(λ): Universe lifting
  - We want all types `α` occurring in the signature of constants and variables to be of sort `Type (u + 1)`, i.e., `α : Type (u + 1)`. This is necessary because we want to write a checker (instead of directly reconstructing proofs in DTT), and the valuation function from the less expressive logic to dependent type theory requires [the elements in the range of the valuation function] to be [of the same sort].
  - To do this, we use `GLift`. For example, `Nat.add` is transformed into `Nat.addLift`:
    ```lean
    structure GLift.{u, v} (α : Sort u) : Sort (max u (v + 1)) where
      /-- Lift a value into `GLift α` -/
      up ::
      /-- Extract a value from `GLift α` -/
      down : α

    def Nat.addLift.{u} (x y : GLift.{1, u} Nat) :=
      GLift.up (Nat.add (GLift.down x) (GLift.down y))
    ```
  - We only transfer these "lifted" terms to the less expressive logic, which is unaware of the universe levels wrapped inside `GLift.up`.
  - Lifted constants should be introduced into the local context. Theorems corresponding to the original ones, but using only lifted constants and with uniform universe levels, should also be introduced into the local context (see the sketch after this list). Later translations should only use theorems and constants with uniform universe levels.
- COC(λ) → λ: Instantiating function arguments
  - λ is the reified COC(λ).
- There should also be a process similar to ULifting that "lifts" `Bool` into `Prop` and `Nat` into `Int`.
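To make the universe-lifting bullets concrete, here is a self-contained, hypothetical sketch. It repeats the `GLift` and `Nat.addLift` definitions from the block above (with an explicit result type added) and shows the kind of lifted, uniform-universe theorem that would be introduced into the local context alongside the lifted constants:

```lean
-- Hypothetical illustration; `GLift` and `Nat.addLift` are as in the block above.
structure GLift.{u, v} (α : Sort u) : Sort (max u (v + 1)) where
  up ::
  down : α

def Nat.addLift.{u} (x y : GLift.{1, u} Nat) : GLift.{1, u} Nat :=
  GLift.up (Nat.add (GLift.down x) (GLift.down y))

-- A lifted counterpart of `Nat.add_comm`, stated purely in terms of lifted constants
-- and a single universe level `u`, with a proof derived from the original theorem.
theorem Nat.addLift_comm.{u} (x y : GLift.{1, u} Nat) :
    Nat.addLift x y = Nat.addLift y x :=
  congrArg GLift.up (Nat.add_comm (GLift.down x) (GLift.down y))
```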
Reification
- `Auto/Translation/LamReif.lean`
- `Auto/Translation/LamFOL2SMT.lean`
Checker
- The checker is based on a deep embedding of simply-typed lambda calculus into dependent type theory.
- The checker is slow on large inputs. For example, it takes 6s to typecheck the final example in `BinderComplexity.lean`. However, this is probably acceptable for Mathlib usage, because, e.g., `Mathlib/Analysis/BoxIntegral/DivergenceTheorem.lean` has two theorems that take 4s to compile (and a large portion of those 4s is spent on typeclass inference).
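To make the "deep embedding" idea in the first bullet concrete, here is an illustrative sketch (these are not lean-auto's actual datatypes): an object-level simply-typed λ-calculus defined as ordinary inductive data inside Lean, together with a type-inference function of the kind a checker can simply evaluate:

```lean
-- Illustrative sketch only; not lean-auto's actual datatypes.
inductive STy where
  | base : Nat → STy          -- uninterpreted base types
  | func : STy → STy → STy    -- function types
deriving BEq

inductive STerm where
  | bvar : Nat → STerm              -- de Bruijn variable
  | lam  : STy → STerm → STerm      -- λ-abstraction annotated with its argument type
  | app  : STerm → STerm → STerm    -- application

/-- Type inference for the embedded calculus under a de Bruijn context.
    A checker in this style recomputes types instead of trusting the prover. -/
def STerm.typeOf (ctx : List STy) : STerm → Option STy
  | .bvar n      => ctx[n]?
  | .lam ty body => (STerm.typeOf (ty :: ctx) body).map (STy.func ty)
  | .app f a     =>
    match STerm.typeOf ctx f, STerm.typeOf ctx a with
    | some (.func dom cod), some ta => if ta == dom then some cod else none
    | _, _ => none
```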
Rules in Proof Tree
- `defeq <num> <name>`: The `<num>`-th definitional equality associated with definition `<name>`
- `hw <name>`: Lemmas hard-wired into Lean-auto
- `lctxInh`: Inhabitation fact from local context
- `lctxLem`: Lemma from local context
- `queryNative::<func_name>`: Proved by native prover
- `rec <indName>.<ctorName>`
- `rw [0, 1]`: Rewrite `0` using `1` (`1` must be an equality)
- `tyCanInh`: Inhabitation instance synthesized for canonicalized type
- `❰<term>❱`: User-provided lemma `<term>`