Quantifier Composition via Predicate Abstraction #
Demonstrates that interp composes quantificational sentences
end-to-end: lexicon → syntax tree (with QR traces and binders) →
truth conditions.
H&K Pipeline (@cite{heim-kratzer-1998} Ch. 5) #
After Quantifier Raising moves a DP to a higher position, it leaves a
trace tₙ and creates a binder node n. Predicate Abstraction (PA)
converts the binder + body into λx. ⟦body⟧^{g[n↦x]}, producing a
predicate that the raised quantifier takes as its scope argument.
"Every student sleeps" after QR:
[S [DP [D every] [N student]] [1 [S [t₁] [VP sleeps]]]]
Evaluated as:
⟦t₁⟧^g = g(1)                                  (Traces rule)
⟦sleeps⟧ = sleeps'                             (TN)
⟦[t₁ sleeps]⟧^g = sleeps'(g(1))                (FA)
⟦[1 [t₁ sleeps]]⟧^g = λx. sleeps'(x)           (PA)
⟦every student⟧ = every'(student')             (FA)
⟦S⟧ = every'(student')(λx. sleeps'(x))         (FA)
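The final step can be checked definitionally in Lean; `every'` here is the textbook generalized-quantifier denotation, written as an illustrative standalone definition:

```lean
-- every'(P)(Q) holds iff every P is a Q.
def every' {E : Type} (P Q : E → Prop) : Prop := ∀ x, P x → Q x

-- ⟦S⟧ = every'(student')(λx. sleeps'(x)) unfolds to the expected truth
-- conditions by definition alone.
example {E : Type} (student' sleeps' : E → Prop) :
    every' student' (fun x => sleeps' x) = (∀ x, student' x → sleeps' x) :=
  rfl
```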
Scope Ambiguity #
"Everybody loves somebody" yields two readings from two QR structures:
- Surface scope (∀>∃): every person scopes above some person
- Inverse scope (∃>∀): some person scopes above every person
The two trees differ only in which quantifier is raised higher.
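The two readings can be sketched with textbook GQ denotations (the primed names and the two `…Scope` definitions are illustrative, not this module's API):

```lean
def every' {E : Type} (P Q : E → Prop) : Prop := ∀ x, P x → Q x
def some'  {E : Type} (P Q : E → Prop) : Prop := ∃ x, P x ∧ Q x

-- Surface scope (∀>∃): the higher quantifier is "every person".
def surfaceScope {E : Type} (person' : E → Prop) (loves' : E → E → Prop) : Prop :=
  every' person' (fun x => some' person' (fun y => loves' x y))

-- Inverse scope (∃>∀): the higher quantifier is "some person".
def inverseScope {E : Type} (person' : E → Prop) (loves' : E → E → Prop) : Prop :=
  some' person' (fun y => every' person' (fun x => loves' x y))
```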
Equations
- Semantics.Composition.QuantifierComposition.quantLex "sleeps" = some { ty := Semantics.Montague.Ty.e ⇒ Semantics.Montague.Ty.t, denot := Semantics.Montague.ToyLexicon.sleeps_sem }
- Semantics.Composition.QuantifierComposition.quantLex "laughs" = some { ty := Semantics.Montague.Ty.e ⇒ Semantics.Montague.Ty.t, denot := Semantics.Montague.ToyLexicon.laughs_sem }
- Semantics.Composition.QuantifierComposition.quantLex word = none
Instances For
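The lookup pattern behind these equations can be sketched as a partial function into `Option`; `quantLexSketch` and its arguments are illustrative, not the module's actual definitions:

```lean
-- Known words map to denotations; anything else maps to none.
def quantLexSketch {E : Type} (sleeps' laughs' : E → Prop) :
    String → Option (E → Prop)
  | "sleeps" => some sleeps'
  | "laughs" => some laughs'
  | _        => none
```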
QR tree: [S [DP every student] [1 [S t₁ sleeps]]]
Every student sleeps = false (Mary is a student but doesn't sleep).
Grounding: interpreting the tree yields the same truth value as directly applying the generalized quantifier (GQ) to its scope predicate.
QR tree: [S [DP some student] [1 [S t₁ sleeps]]]
Some student sleeps = true (John is a student and sleeps).
Grounding: tree interpretation = direct application of the generalized quantifier.
Two QR structures yield two scope readings. The trees differ only in which quantifier occupies the higher position.
Surface scope (∀>∃):
[S [DP every person] [1 [S [DP some person] [2 [S t₁ [VP sees t₂]]]]]]
∀x[person(x) → ∃y[person(y) ∧ sees(x,y)]]
Inverse scope (∃>∀):
[S [DP some person] [2 [S [DP every person] [1 [S t₁ [VP sees t₂]]]]]]
∃y[person(y) ∧ ∀x[person(x) → sees(x,y)]]
Surface scope is true in the toy model. (John sees Mary and Mary sees John — each person sees some person.)
Inverse scope is false in the toy model: no single person is seen by everyone. For y = John, John doesn't see John; for y = Mary, Mary doesn't see Mary. So no y satisfies ∀x[person(x) → sees(x,y)].
The two scope readings differ: proof of genuine ambiguity.
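The difference can be confirmed by brute force over a two-entity model in which each person sees the other; the model below is an illustrative reconstruction of the toy model described above:

```lean
inductive P where
  | john | mary

-- John sees Mary, Mary sees John, and nobody sees themselves.
def sees : P → P → Bool
  | .john, .mary => true
  | .mary, .john => true
  | _,     _     => false

def persons : List P := [.john, .mary]

-- Surface scope ∀x∃y sees(x,y): each person sees someone.
#eval persons.all fun x => persons.any fun y => sees x y   -- true
-- Inverse scope ∃y∀x sees(x,y): no single person is seen by everyone.
#eval persons.any fun y => persons.all fun x => sees x y   -- false
```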
Surface scope tree = manual ∀>∃ computation.
Inverse scope tree = manual ∃>∀ computation.
A different initial assignment yields the same result: closed sentences don't depend on the assignment.
The same QR tree built as Tree Cat String — carrying real UD-grounded
categories on every node. interp ignores the categories and produces
identical truth conditions to the category-free Tree Unit String version.
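Why the categories cannot matter: any interpretation defined by structural recursion that never inspects the category field gives the same result under any relabeling. A minimal self-contained sketch of this argument (`Tree`, `yield`, and `forget` here are illustrative names, not this module's definitions):

```lean
inductive Tree (c s : Type) where
  | leaf : c → s → Tree c s
  | node : c → Tree c s → Tree c s → Tree c s

-- An interpretation-like function that only looks at structure and words.
def yield {c s : Type} : Tree c s → List s
  | .leaf _ w   => [w]
  | .node _ l r => yield l ++ yield r

-- Erase all category labels.
def forget {c s : Type} : Tree c s → Tree Unit s
  | .leaf _ w   => .leaf () w
  | .node _ l r => .node () (forget l) (forget r)

-- Relabeling is invisible to any category-blind interpretation.
theorem yield_forget {c s : Type} (t : Tree c s) :
    yield (forget t) = yield t := by
  induction t with
  | leaf cat w => rfl
  | node cat l r ihl ihr => simp [yield, forget, ihl, ihr]
```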
QR tree with UD categories:
[S [DP [Det every] [N student]] [1 [S [t₁:NP] [VP sleeps]]]]
Category-bearing tree yields the same result as category-free tree.
Tree Cat String agrees with Tree Unit String.