Created by Martin Dvořák
In situations where p x : Prop we cannot write Σ x : α, p x, but we can write ∃ x : α, p x, which is both correct and idiomatic.
Tuples (prod) aren't enough to express existential quantification.
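As a small illustration (a Lean 3 sketch; the proposition is made up for this note), an existential statement is proved by supplying a witness together with a proof, via the anonymous constructor:
example : ∃ n : ℕ, n + 1 = 3 := ⟨2, rfl⟩
-- `Σ n : ℕ, n + 1 = 3` would not even typecheck here: the body must be a Type, not a Prop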
Both Π and ∀ are notations for the same kernel expressions. They are not inductive types. They are more "primitive" than other types. They use a built-in term constructor; it is not the same kind of constructor as structures have.
However, Σ is an inductive type.
core.html#sigma
In addition to that, ∃ is also an inductive type.
logic.html#exists
As a consequence, Σ and ∃ have different meanings.
In contrast, Π and ∀ mean exactly the same thing.
universes u v
def Pi' {α : Sort u} (x : α → Sort v) := Π i, x i
#check @Pi'.{u v}     -- Pi' x lives in Sort (imax u v)
#check @psigma.{u v}  -- psigma x lives in Sort (max 1 u v)
#check @sigma.{u v}   -- sigma x lives in Type (max u v)
#check @Exists.{u}    -- Exists p lives in Prop
In situations where p x : Prop we write ∀ x : α, p x as a syntactic sugar for Π x : α, p x, which would be less idiomatic to write.
Functions (→) aren't enough to express universal quantification.
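A minimal sanity check (Lean 3 sketch): since the two notations parse to the very same term, reflexivity proves the two statements equal.
example (p : ℕ → Prop) : (∀ n : ℕ, p n) = (Π n : ℕ, p n) := rfl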
τ : Type
bar : τ
δ : Type
baz : δ
Consider the following declaration.
mtt = τ × δ
The symbol × refers to prod and not to Σ. However, conceptually the same thing could be expressed by a Σ-type.
mst = Σ x : τ, δ
Here δ doesn't depend on x.
We obtained a cartesian product of our types τ and δ. Don't get confused by the fact that a cartesian product is expressed by a dependent sum type (and not by a dependent product type).
The type prod further defines a notation with parentheses. Hence a term (bar, baz) stands for prod.mk bar baz of the type mtt and nothing else.
In comparison, ⟨bar, baz⟩ can stand for either prod.mk bar baz or sigma.mk bar baz, and the exact type is inferred from the context.
If we write it explicitly, we can obtain only (prod.mk bar baz) : mtt or (sigma.mk bar baz) : mst.
The parentheses associate to the right (as chevrons do), hence (a, b, c) means the same as (a, (b, c)), and they both denote a term of the type T×U×V = T×(U×V) = prod T (prod U V), assuming that a : T, b : U, c : V.
In general, prod is preferred over sigma wherever possible. It has a better type inference and the code is more readable.
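The following self-contained sketch (Lean 3; the binder names mirror the text) shows both notations elaborating to the expected constructors:
example (τ δ : Type) (bar : τ) (baz : δ) : τ × δ := (bar, baz)        -- prod.mk bar baz
example (τ δ : Type) (bar : τ) (baz : δ) : Σ x : τ, δ := ⟨bar, baz⟩   -- sigma.mk bar baz
example (τ δ : Type) (bar : τ) (baz : δ) : τ × δ := ⟨bar, baz⟩        -- chevrons also work for prod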
The dependent sum type (a.k.a. "dependent pair type"), or Σ-type for short ("sigma type"), is a generalization of tuple types.
α : Type
β : α → Type
mdst = Σ x : α, β x
Any z : mdst is of the form z = ⟨a,b⟩ where a : α and b : (β a).
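In code (a minimal sketch with the same α and β), the two projections recover the components, and the type of the second projection depends on the first:
example (α : Type) (β : α → Type) (z : Σ x : α, β x) : β z.fst := z.snd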
def f : ℕ → Type
| 0 := ℕ
| 1 := ℤ
| _ := ℕ
def g : ℕ → Type
| _ := ℤ
def foo : sigma f := ⟨1, (2 : ℤ)⟩   -- fine: f 1 is ℤ
def bar : sigma g := ⟨1, (2 : ℤ)⟩   -- fine: g 1 is ℤ
-- the statement below is ill-typed: foo : sigma f and bar : sigma g have different types,
-- so the homogeneous equality foo = bar cannot even be stated
lemma also_does_not_typecheck : foo = bar := sorry
The dependent product type (a.k.a. "dependent function type"), or Π-type for short ("pie type"), is a generalization of function types.
α : Type
β : α → Type
mdpt = Π x : α, β x
Any f : mdpt is a function with the property that, for each a : α, we have (f a) : (β a).
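A minimal sketch (Lean 3; B and h are made-up names): a dependent function whose result type varies with the argument.
def B : bool → Type
| tt := ℕ
| ff := ℤ
def h : Π b : bool, B b   -- (h tt : ℕ) while (h ff : ℤ)
| tt := (0 : ℕ)
| ff := (-1 : ℤ)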
τ : Type
δ : Type
Consider the following declaration.
mft = τ → δ
It is just a convenient notation for a Π-type.
mft = Π x : τ , δ
Here δ doesn't depend on x.
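Again a one-line sanity check (sketch): the arrow and the constant-family Π parse to the same term.
example (τ δ : Type) : (τ → δ) = (Π x : τ, δ) := rfl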
Every type other than the universes and every type constructor other than Π belongs to a general family of type constructions known as inductive types.
If the type foo has only one constructor, it is automatically named foo.mk and it can automatically be invoked by writing ⟨bar, baz⟩ as a syntactic sugar for foo.mk bar baz in case of a constructor with two arguments (in this example). The chevrons associate to the right, hence ⟨a, b, c⟩ means the same as ⟨a, ⟨b, c⟩⟩ and so on.
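For example (a sketch), since both × and the chevrons associate to the right, a nested pair can be written flat:
example (a b c : ℕ) : ℕ × ℕ × ℕ := ⟨a, b, c⟩   -- the same as ⟨a, ⟨b, c⟩⟩, i.e. (a, (b, c))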
inductive foo : Sort u
| constructor1 : ... → foo
| constructor2 : ... → foo
...
| constructorN : ... → foo
essentially single-constructor inductive types with some syntactic sugar
That's why it is called "dependent type theory".
https://en.wikipedia.org/wiki/Lambda_cube
TODO
TODO
any level
Type* or Prop
https://en.wikipedia.org/wiki/Lambda_calculus
Agda also has non-cumulative universes. Coq has cumulative universes, i.e., whenever x : Type u we also have x : Type v for every v > u.
7 : int : Type : Type 1 : Type 2 : Type 3 : (...)
Type = Type 0 = Sort 1
Prop = Sort 0
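These identifications are definitional, so they can be checked directly (Lean 3 sketch):
example : Prop = Sort 0 := rfl
example : Type = Sort 1 := rfl
example : Type = Type 0 := rfl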
begins the namespace
namespace.lemma
namespace.definition
namespace.namespace.tactic
...
proves equality from definition
rfl is short for eq.refl _
#check -3 gives int
#check int gives Type
#check Type gives Type 1
closes the namespace
A proof in Lean is a term of the given type, that is, a construction of an inhabitant of the type representing the theorem.
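For instance (a sketch), a proof of an implication is literally a function taking a proof of the premise to a proof of the conclusion:
example (p q : Prop) : (p → q) → p → q := λ hpq hp, hpq hp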
unpacks and renames a term from "before :" and then another := must follow
formally the same as theorem; better for auxiliary theorems (lemmata)
forward proofs
https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus
beta-equivalent expression ("reduct")
"imports" the namespace (rather allows to omit the prefix <name>.)
best to use for important theorems
no name; cannot be referred to
syntactic sugar for λ which is recommended for preconditions (better readability)
https://agentultra.github.io/lean-4-hackers/
https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
#print axioms mytheorem prints all axioms that were used in the proof of the theorem mytheorem
Lean is based on this type theory.
less trustworthy than #reduce but far more efficient
Every function is a unary function.
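A sketch of what this means in practice (my_add is a made-up name): a "two-argument" function is a function that returns a function, so it can be partially applied.
def my_add : ℕ → ℕ → ℕ := λ a b, a + b
#check my_add 2     -- ℕ → ℕ  (a unary function)
#check my_add 2 3   -- ℕ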
Two "values" (actually types):
false — empty type
true — singleton type — the instance is called trivial
inside a begin ... end block, a nested end can be abbreviated to }
defines new terms and names them
TODO
The project began in 2013.
inside a begin ... end block, a nested begin can be abbreviated to {
closes the interactive mode
in position of a term; the next token is a single tactic that provides a witness for that term
outside of the interactive mode, writing by exact foo is equivalent to just writing foo
intros together with rcases applied to each part
In comparison to the Calculus of Inductive Constructions, the following two features are added.
axiom propext {a b : Prop} :
(a ↔ b) → (a = b)
Head:tail in Haskell
Type in Lean
Type in Haskell
Head::tail in Lean
opens the interactive mode
axiom dnc :
∀ (a : Prop), (¬(¬a)) → a
answers current goal by a proof term
intro together with rcases
applies intro repeatedly; can be used with argument names or without (then it eats as much as it can and assigns default names to the eaten terms)
The axiom classical.choice is a type-theoretic statement that implies the set-theoretic axiom of choice.
like have together with rcases
Declaration in Lean
??
often used to avoid setoids
By default, two terms are equal if they beta-reduce to the same term, that is, they are "somewhere deep inside" both made by the same tree of constructors.
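For example (a sketch), after beta-reducing the application the two sides below are definitionally equal, so rfl closes the goal:
example : (λ n : ℕ, n + 1) 2 = 3 := rfl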
interface between proof terms and tactics
https://leanprover-community.github.io/contribute/naming.html
searches the library for a single exact ... that closes the goal; otherwise (if no theorem for the current goal is found), it fails
introduces an auxiliary proposition (inside a proof)
like library_search but shows more stuff, including lemmata for which preconditions haven't been provided
not proved yet; generates a warning during compilation and is listed by #print axioms
axiom lem :
∀ (a : Prop), a ∨ ¬a
Declaration in Haskell
Relation operator (id) in Lean
Relation operator (id) in Haskell
https://github.com/flypitch/flypitch
TODO
eats the first part of the current goal and saves it into a variable
fills in an existentially-quantified variable in the current goal (which must begin with ∃)
https://www.cmu.edu/dietrich/philosophy/people/phd/mario-carneiro.html
https://www.imperial.ac.uk/people/k.buzzard
https://github.com/leanprover-community/lean-perfectoid-spaces
recursive cases but requires separating parts by commas
lists possible tactics which will make progress (that is, not fail) towards the current goal
what you can write when you don't know how to (or don't want to) solve a goal
splits the goal into two goals by a condition (proposition); one where it holds, one where it doesn't
decomposes a term according to its constructor(s); similar to destruct in Coq
assigns names to the outputs of the command on the left (e.g. cases)
axiom choice {α : Sort*} :
(nonempty α) → α
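A small sketch of what this buys us (pick is a made-up name): classical.some, which is built on classical.choice, extracts a witness from a bare existence proof and must be marked noncomputable.
noncomputable def pick (h : ∃ n : ℕ, n > 5) : ℕ := classical.some h
example (h : ∃ n : ℕ, n > 5) : classical.some h > 5 := classical.some_spec h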
proof by contradiction (needs classical logic)
quantifiers, logical connectives, ...
theorem funext : ∀ {α : Type u₁} {β : α → Type u₂}
{f₁ f₂ : Π (x : α), β x},
(∀ (x : α), f₁ x = f₂ x) →
(f₁ = f₂)
This should follow from quotients but I don't understand what it means.
theorem setext {a b : set α}
(h : ∀ x, x ∈ a ↔ x ∈ b) :
(a = b) :=
funext (λ x, propext (h x))
tries to simplify both the goal and all hypotheses, using all hypotheses and all simp lemmas (defaults)
uses only intuitionistic logic
contrapose followed by push_neg
stuff around negations
backward proofs (unless something modifies their structure)
TODO
writing @[simp] theorem mytheorem declares a new theorem and automatically invokes attribute [simp] mytheorem
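A minimal sketch (my_add_zero is a made-up lemma name): tag a lemma with [simp] at declaration time and simp will use it.
@[simp] theorem my_add_zero (n : ℕ) : n + 0 = n := rfl
example (n : ℕ) : (n + 0) + 0 = n := by simp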
TODO
breaks the goal identity into several identities to be proved (use e.g. for tuples)
works on all hypotheses and the goal
replaces the current goal (no matter what it is) by false
simp [h1] at h2 ⊢ tries to simplify both h2 and the goal using h1 and all simp lemmas (defaults)
applies a transformation to the current goal (proving stuff "backwards"), then it attempts refl
uses all hypotheses present in the local context (and all defaults) when simplifying the goal
sound and fast but incomplete SAT+ solver
tries to finish the proof automatically (slow and incomplete, of course)
expands a term using its definition
well-ordered types modulo order-isomorphism
tries to definitionally reduce expressions to true or false
TODO
mathematical induction; transforms the current goal into a base step and an induction step
types modulo equivalence
local attribute [simp] myrule adds myrule into the list of simplification rules for the current file or section (local scope)
attribute [simp] myrule adds myrule into the (global) list of simplification rules
works on a given hypothesis instead
tactics
stuff around binary relations
replaces a term by its definition (without any simplification)
short for apply or.inl
short for apply or.inr
"faktorgrupa"
attribute [mysimps] myrule adds myrule into the (custom) list of simplification rules called mysimps
attempts to simplify the current goal; probably the most powerful tactic out there
attempts to reach a contradiction using elementary properties of inequalities (transitivity, sums, ...)
simp with mysimps adds all the rules that have been marked with the attribute [mysimps] to the default set of rules marked with the attribute [simp] before applying simp
attempts to simplify and solve the current goal; if it is unable to close the goal, it fails
attempts to solve the current goal (if it is an algebraic identity) using properties of commutative (semi)rings (commutativity, associativity, distributivity (left and right), zero and one)
rewrites all occurrences in the goal and clears
solves the current goal from an explicitly given series of identities and/or inequalities; applies transitivity to them (many possible forms)
closes a goal of type x = x or some xRx for any reflexive relation R
is more general than rfl
works on any relation that has been tagged by the attribute refl
reduces several goals to a single goal using permutations of variables
breaks conjunction (or equivalence) into two goals
specifies which goal will be worked on now
TODO
https://github.com/leanprover-community/mathlib/tree/master/src/linear_algebra
before using the (custom) list of simplification rules called mysimps we must create it using run_cmd mk_simp_attr `mysimps
attempts to simplify the right side and close the current goal with that result; if it is unable to close the goal, it fails
???
probably removes a whole collection of rules
any_goals { bar, baz } applies { bar, baz } to all open goals; only if all branches fail, any_goals fails
substitutes a variable by a given term
uses the defaults (simp lemmas) excluding the given rule(s)
uses the given rules(s) for simplification and also defaults (so-called simp lemmas)
uses only the given rule(s) for simplification; excludes defaults
if the current goal is xRy for a transitive relation R, it will get replaced by two goals xRt and tRy for a new variable t (or by putting a term from an argument in place of t)
swaps the left side and the right side of the current goal (which must be a symmetric relation)
short for reflexivity
the backtick indicates a new name
???
probably something about normal form
performs a substitution (using a given identity or equivalence — from left to right) to the current goal
all_goals { bar, baz } applies { bar, baz } to all open goals; if any branch fails, all_goals fails
substitutes a term by a new variable
like rw, it applies identities/equivalences in the given order, but it also rewrites terms under binders (∀ or ∃ or λ)
short for rewrite
to a given variable instead of to the current goal
given identity or equivalence is used from right to left instead
working with "structure" of the proof script
reverts the effect of intro or generalize
repeat { foo } applies foo repeatedly until it fails;
it also goes to subgoals (and the recursion stops in a subgoal when the tactic has failed to make progress)
rw followed by assumption
useful to write rwa equ at foo instead of rw equ at foo, exact foo
iterate 3 { foo } applies foo three times
tries to satisfy the current goal by something from current context (fails otherwise)
try { foo } invokes foo but doesn't fail if foo fails; instead, the goal simply stays unchanged
selects which open goal the script will solve now