Basic Quantifier Examples #
End-to-end tests verifying that the English quantifier Fragment, composed with a toy scenario, produces correct truth-value judgments, acceptability predictions, and entailment patterns.
The scenario (entities, predicates, truth assignments) is defined here in Phenomena — it is empirical data. The compositional machinery (Model, FiniteModel, GQ denotations) comes from Semantics.Montague. The lexical entries (strength, monotonicity) come from the Fragment.
Test architecture #
- Acceptability (Tier 1): there-insertion from Fragment
- Truth values (Tier 2): sentence denotations evaluated in a scenario
- Entailment (Tier 3): monotonicity-driven inferences
- Scalar distinctness (Tier 4): quantifiers differ on at least one input
Scenario #
Four entities: Alice, Bob, Carol, Dave.
- Students: Alice, Bob (2 of 4)
- Passed: Alice (1 of 2 students)
- Laughed: Alice, Bob (all students)
- Cried: nobody
Equations
- `Phenomena.Quantification.Examples.scenario = { Entity := Phenomena.Quantification.Examples.Person, decEq := inferInstance }`
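The scenario might be encoded roughly as follows. This is a hedged sketch: only `Person` is attested in the equation above; the constructor and predicate names are illustrative assumptions.

```lean
-- Hypothetical re-encoding of the scenario; predicate names are assumptions.
inductive Person
  | alice | bob | carol | dave
  deriving DecidableEq, Repr

open Person

-- Students: Alice, Bob (2 of 4)
def student : Person → Bool
  | alice | bob => true
  | _ => false

-- Passed: Alice (1 of 2 students)
def passed : Person → Bool
  | alice => true
  | _ => false

-- Laughed: Alice, Bob (all students)
def laughed : Person → Bool
  | alice | bob => true
  | _ => false

-- Cried: nobody
def cried : Person → Bool := fun _ => false
```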
Tier 1: Acceptability #
Barwise & Cooper's (B&C) Table II: weak determiners allow there-insertion; strong ones don't.
These judgments are derived purely from the Fragment's Strength field.
- "There are some students in the room." (✓ weak)
- "There are no students in the room." (✓ weak)
- "There are few students in the room." (✓ weak)
- *"There is every student in the room." (✗ strong)
- *"There are most students in the room." (✗ strong)
- *"There are all students in the room." (✗ strong)
Weak quantifiers: acceptable in there-sentences.
Strong quantifiers: unacceptable in there-sentences.
The 6-word scale partitions into 3 weak + 3 strong.
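These judgments can be derived mechanically from a two-valued strength attribute, along the following lines (a sketch: the names `Strength` and `thereOk` are assumptions, not the Fragment's actual identifiers):

```lean
-- Sketch: there-insertion acceptability read off a strength attribute.
inductive Strength
  | weak | strong
  deriving DecidableEq

-- Weak determiners are acceptable in there-sentences; strong ones are not.
def thereOk : Strength → Bool
  | .weak   => true
  | .strong => false

-- "There are some students in the room."  ✓
example : thereOk .weak = true := rfl
-- *"There is every student in the room."  ✗
example : thereOk .strong = false := rfl
```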
Tier 2: Truth Values #
Compose Fragment denotations with scenario predicates and verify.
| Sentence | Expected | Why |
|---|---|---|
| Every student laughed | true | Alice ✓, Bob ✓ |
| Every student passed | false | Bob ✗ |
| Some student passed | true | Alice ✓ |
| Some student cried | false | nobody cried |
| No student cried | true | nobody cried |
| No student passed | false | Alice passed |
| Most students passed | false | 1 of 2 ≤ half |
| Most students laughed | true | 2 of 2 > half (= 1) |
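Every row of this table can be checked by computation. A self-contained sketch (the scenario is re-encoded inline and all names are assumptions; `mostQ` reads "most" as "more than half"):

```lean
inductive P
  | alice | bob | carol | dave
  deriving DecidableEq
open P

def dom : List P := [alice, bob, carol, dave]

def student : P → Bool
  | alice | bob => true
  | _ => false

def passed : P → Bool
  | alice => true
  | _ => false

def laughed : P → Bool
  | alice | bob => true
  | _ => false

def cried : P → Bool := fun _ => false

def count  (A : P → Bool) : Nat := (dom.filter A).length
def everyQ (A B : P → Bool) : Bool := dom.all fun x => !A x || B x
def someQ  (A B : P → Bool) : Bool := dom.any fun x => A x && B x
def noQ    (A B : P → Bool) : Bool := !someQ A B
def mostQ  (A B : P → Bool) : Bool :=
  decide (count A < 2 * count (fun x => A x && B x))

-- The Tier 2 table, verified by decision procedure:
example : everyQ student laughed = true  := by decide
example : everyQ student passed  = false := by decide  -- Bob ✗
example : someQ  student passed  = true  := by decide  -- Alice ✓
example : someQ  student cried   = false := by decide
example : noQ    student cried   = true  := by decide
example : noQ    student passed  = false := by decide  -- Alice passed
example : mostQ  student passed  = false := by decide  -- 1 of 2 is not > half
example : mostQ  student laughed = true  := by decide  -- 2 of 2 > half
```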
Tier 3: Entailment #
Fragment monotonicity metadata predicts entailment directions.
We verify these by composing semantic proofs with our scenario.
- Scope-↑ (every, some): if P ⊆ Q then Det(A)(P) → Det(A)(Q)
- Scope-↓ (no): if P ⊆ Q then Det(A)(Q) → Det(A)(P)
Fragment says "every"/"all" is scope-↑ monotone.
"Some student passed" entails "some student laughed" by scope-↑ mono.
"No student laughed" would entail "no student passed" by scope-↓ mono. (In our scenario, neither premise holds, but the implication is valid.)
Fragment monotonicity metadata matches semantic behavior.
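Both entailment directions can be checked on the scenario by decision procedure. A self-contained sketch (scenario re-encoded inline; all names are assumptions):

```lean
inductive E
  | alice | bob | carol | dave
  deriving DecidableEq
open E

def dom : List E := [alice, bob, carol, dave]

def student : E → Bool
  | alice | bob => true
  | _ => false

def passed : E → Bool
  | alice => true
  | _ => false

def laughed : E → Bool
  | alice | bob => true
  | _ => false

def someQ (A B : E → Bool) : Bool := dom.any fun x => A x && B x
def noQ   (A B : E → Bool) : Bool := !someQ A B

-- Scope-↑: "some student passed" → "some student laughed"
-- (holds because passed ⊆ laughed here).
example : someQ student passed = true → someQ student laughed = true := by
  decide

-- Scope-↓: "no student laughed" → "no student passed"
-- (the premise is false in this scenario, so it holds vacuously).
example : noQ student laughed = true → noQ student passed = true := by
  decide
```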
Tier 4: Scalar Distinctness #
For scalar implicature to arise, adjacent scale-mates must differ on some input. We verify this by finding witnesses — inputs where they diverge.
"Some" ≠ "every": some(student)(passed)=T but every(student)(passed)=F.
"Some" ≠ "no": some(student)(passed)=T but no(student)(passed)=F.
"Every" vs "most": they agree on students who laughed (both T) and students who passed (both F). The key difference is logical strength: every entails most but not vice versa.