@cite{scontras-pearl-2021}: Quantifier Scope Ambiguity #
"When pragmatics matters more for truth-value judgments: An investigation of quantifier scope ambiguity." Glossa 6(1): 110.
S&P is a modeling paper — it explains endorsement patterns from @cite{musolino-lidz-2003} and others via RSA, rather than reporting new experiments.
Part I: Truth Conditions & Shared Types #
- §1. Every-not (n=2): JumpOutcome, ScopeReading, scopeTruth
- §2. Two-not (n=4): JumpOutcome4, NumeralReading, twoNotTruth
- Scope entailment asymmetry, @cite{musolino-lidz-2003} data, and numeral semantics grounding via maxMeaning from Numeral.Semantics
Part II: Every-Not RSA Model (§3, EveryNot namespace) #
Domain: "Every horse didn't jump" with n=2 horses. 3 world states (0, 1, 2 jumped). 2 utterances (null, everyNot). 6 latent states (2 scopes × 3 QUDs).
- L0: L0(w|u,i) ∝ δ_{⟦u⟧ⁱ(w)} (literal semantics, no world prior; footnote 6)
- S1: S1(u|w,i,q) ∝ exp(α · log L0(⟦q⟧(w)|u,i)) (QUD-projected; ⟦q⟧(w) is the q-cell containing w)
- L1: L1(w,i,q|u) ∝ P(w) · P(i) · P(q) · S1(u|w,i,q)
- S2: S2(u|w) ∝ Σ_{i,q} L1(w,i,q|u) = L1(w|u)
- Endorsement: P(endorse u | w_obs) = S2(u|w_obs)
Parameters: α = 1 (§3.2). P(w) = Binomial(n, b_suc).
QUDs (paper (3)) #
Three QUD partitions over worlds:
- how-many?: identity — partitions {w0}, {w1}, {w2}
- all?: w = n? — partitions {w0,w1} vs {w2}
- none?: w = 0? — partitions {w0} vs {w1,w2}
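The model above is small enough to run end-to-end. Below is an illustrative Python re-implementation of the L0 → S1 → L1 → S2 chain (a sketch, not the Lean formalization; the function name `endorse` is this sketch's choice, and the 0.9/0.05 biased-QUD weights follow the QUD-manipulation configs described later in this file):

```python
from itertools import product

WORLDS = [0, 1, 2]                       # how many of the 2 horses jumped
UTTS = ["null", "everyNot"]
SCOPES = ["surface", "inverse"]
# QUD answer functions: worlds with equal answers share a cell (paper (3)).
QUDS = {"howMany": lambda w: w, "all": lambda w: w == 2, "none": lambda w: w == 0}

def meaning(u, scope, w):
    if u == "null":
        return True                      # null utterance is always true
    return w == 0 if scope == "surface" else w != 2   # ∀>¬ vs ¬>∀

def L0(u, scope):
    """Literal listener: uniform over true worlds (no world prior; footnote 6)."""
    true = [w for w in WORLDS if meaning(u, scope, w)]
    return {w: (1 / len(true) if w in true else 0.0) for w in WORLDS}

def S1(u, w, scope, qname):
    """QUD-projected speaker with alpha = 1: score is L0 mass on w's QUD cell."""
    q = QUDS[qname]
    scores = {utt: sum(p for w2, p in L0(utt, scope).items() if q(w2) == q(w))
              for utt in UTTS}
    return scores[u] / sum(scores.values())

def endorse(b_suc, qud_prior=None):
    """S2 endorsement of 'every horse didn't jump' at the partial world w=1."""
    w_prior = {w: (1 - b_suc) ** (2 - w) * b_suc ** w * (2 if w == 1 else 1)
               for w in WORLDS}          # Binomial(2, b_suc), unnormalized
    qud_prior = qud_prior or {q: 1 / 3 for q in QUDS}
    def L1(u):                           # marginalize over scope and QUD
        joint = {w: sum(w_prior[w] * 0.5 * qud_prior[q] * S1(u, w, s, q)
                        for s, q in product(SCOPES, QUDS)) for w in WORLDS}
        z = sum(joint.values())
        return {w: v / z for w, v in joint.items()}
    s2 = {u: L1(u)[1] for u in UTTS}
    return s2["everyNot"] / (s2["everyNot"] + s2["null"])
```

With uniform QUDs this reproduces the Figure 2 left-panel predictions (≈ 0.288, 0.506, 0.796 for b_suc = 0.1, 0.5, 0.9), and putting weight 0.9 on one QUD reproduces the none? < how-many? < all? ordering (≈ 0.38, 0.48, 0.63).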
Compositional Grounding #
Truth conditions grounded in every_sem (Quantifier.lean),
ScopeConfig/ScopeDerivation (Scope.lean).
Key Findings (Figure 2) #
S2 endorsement for "every horse didn't jump" in the partial world (w=1). The "Paper value" column gives S&P's computed model predictions (not experimental data):
| Config | S2(everyNot \| w=1) | Paper value |
|---|---|---|
| b_suc=0.1 (baseline) | 0.288 | ~0.29 |
| b_suc=0.5 (default) | 0.506 | ~0.48 (read from Figure 2) |
| b_suc=0.9 (high base rate) | 0.796 | ~0.80 |
QUD manipulation (Figure 2, center panel): favoring none? < how-many? < all? yields monotonically increasing endorsement (paper: 0.38, 0.48, 0.63).
Developmental Continuity (§3.3) #
Same model architecture explains child and adult behavior. Children's
isomorphic (surface-scope) preference follows from low b_suc priors.
Part III: Two-Not RSA Model (§4, TwoNot namespace) #
Domain: "Two horses didn't jump" with n=4 horses. 5 world states (0–4 jumped). 2 utterances (null, twoNot). 10 latent states (2 scopes × 5 QUDs).
Central Claim (§4.2) #
Under exact semantics, surface scope pinpoints w=2 as the unique true world → high S2 endorsement (> 1/2). Under at-least semantics, surface scope is true at {w0,w1,w2} → low S2 endorsement (< 1/2). This predicts that adults endorse "two horses didn't jump" more readily in 2-of-4 contexts under exact numeral semantics.
Truth conditions for "Every horse didn't jump" under each scope reading.
Equations
- One or more equations did not get rendered due to their size.
Instances For
How many horses jumped (out of 4).
- w0 : JumpOutcome4
- w1 : JumpOutcome4
- w2 : JumpOutcome4
- w3 : JumpOutcome4
- w4 : JumpOutcome4
Instances For
Convert JumpOutcome4 to natural number.
Equations
- JumpOutcome4.w0.toNat = 0
- JumpOutcome4.w1.toNat = 1
- JumpOutcome4.w2.toNat = 2
- JumpOutcome4.w3.toNat = 3
- JumpOutcome4.w4.toNat = 4
Instances For
Numeral reading: does "two" mean exactly 2 or at least 2?
- exact : NumeralReading
- atLeast : NumeralReading
Instances For
Truth conditions for "two horses didn't jump" with n=4 horses (paper (6)).
Parameterized by numeral reading and scope configuration.
Surface scope (two > not): "There are two horses that didn't jump"
- Exact: exactly 2 didn't jump → exactly 2 jumped → w=2
- At-least: at least 2 didn't jump → at most 2 jumped → w ∈ {0,1,2}
Inverse scope (not > two): "It's not the case that two horses jumped"
- Exact: ¬(exactly 2 jumped) → w ≠ 2 → w ∈ {0,1,3,4}
- At-least: ¬(at least 2 jumped) → fewer than 2 jumped → w ∈ {0,1}
Equations
- One or more equations did not get rendered due to their size.
Instances For
The two numeral theories diverge in the 2-of-4 context (n=4):
- Surface scope: exact → {w2}; at-least → {w0,w1,w2}
- Inverse scope: exact → {w0,w1,w3,w4}; at-least → {w0,w1}
For universals, surface scope (∀>¬: none jumped) ENTAILS inverse scope (¬>∀: not all jumped). If no horse jumped, then trivially not every horse jumped. This means no truth-value judgment context can diagnose the isomorphism effect for universals: whenever surface is true, inverse is automatically true too.
The entailment is strict: inverse does NOT entail surface. At w=1 (one horse jumped), inverse scope is true (not all jumped) but surface scope is false (not none jumped).
For exact numerals, surface scope does NOT entail inverse scope. At w=2 (exactly 2 jumped out of 4), surface is true (exactly 2 didn't) but inverse is false (it IS the case that exactly 2 jumped). This independence is what makes numerals diagnostic for the isomorphism effect.
Inverse scope also does not entail surface for exact numerals. At w=0 (none jumped), inverse is true (¬(exactly 2 jumped)) but surface is false (not exactly 2 didn't jump, since all 4 didn't).
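The entailment claims above range over finite domains, so they can be machine-checked by brute force. A minimal Python sketch (illustrative only; the Lean file proves these as theorems):

```python
# Every-not, n = 2: surface = "no horse jumped", inverse = "not all jumped".
every_surface = lambda w: w == 0
every_inverse = lambda w: w != 2
W2, W4 = range(3), range(5)

# Universal: surface strictly entails inverse, so no TVJ context separates them.
assert all(every_inverse(w) for w in W2 if every_surface(w))
assert every_inverse(1) and not every_surface(1)        # strictness witness at w=1

# Two-not, n = 4 (paper (6)): w = number of horses that jumped.
exact_surface   = lambda w: 4 - w == 2                  # exactly 2 didn't jump
exact_inverse   = lambda w: not (w == 2)                # ¬(exactly 2 jumped)
atleast_surface = lambda w: 4 - w >= 2                  # at least 2 didn't jump
atleast_inverse = lambda w: not (w >= 2)                # ¬(at least 2 jumped)

assert [w for w in W4 if exact_surface(w)]   == [2]
assert [w for w in W4 if exact_inverse(w)]   == [0, 1, 3, 4]
assert [w for w in W4 if atleast_surface(w)] == [0, 1, 2]
assert [w for w in W4 if atleast_inverse(w)] == [0, 1]

# Exact readings are independent: neither entails the other.
assert exact_surface(2) and not exact_inverse(2)        # w=2 witness
assert exact_inverse(0) and not exact_surface(0)        # w=0 witness
# At-least readings are nested again: inverse entails surface.
assert all(atleast_surface(w) for w in W4 if atleast_inverse(w))
```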
Connects S&P's twoNotTruth truth conditions to linglib's numeral
semantics infrastructure (maxMeaning in Numeral.Semantics).
The truth conditions in the data file are grounded in maxMeaning:
- Exact surface: "exactly 2 didn't jump" = maxMeaning.eq 2 (4 - w)
- Exact inverse: "¬(exactly 2 jumped)" = !(maxMeaning.eq 2 w)
- At-least surface: "≥2 didn't jump" = maxMeaning.ge 2 (4 - w)
- At-least inverse: "¬(≥2 jumped)" = !(maxMeaning.ge 2 w)
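The correspondence can be spelled out with count-based stand-ins for the Lean definitions. Here `eq` and `ge` are hypothetical Python analogues of `maxMeaning.eq` / `maxMeaning.ge` (a sketch of the grounding, not the library code):

```python
eq = lambda n, k: k == n          # analogue of maxMeaning.eq: "exactly n"
ge = lambda n, k: k >= n          # analogue of maxMeaning.ge: "at least n"

# Applying each form to the jump count w (or the complement count 4 - w)
# recovers exactly the world sets from paper (6):
assert [w for w in range(5) if eq(2, 4 - w)]     == [2]           # exact surface
assert [w for w in range(5) if not eq(2, w)]     == [0, 1, 3, 4]  # exact inverse
assert [w for w in range(5) if ge(2, 4 - w)]     == [0, 1, 2]     # at-least surface
assert [w for w in range(5) if not ge(2, w)]     == [0, 1]        # at-least inverse
```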
Convergent evidence for exact semantics from @cite{kennedy-2015} (de-Fregean semantics where bare numerals mean =n) and @cite{musolino-2004} (acquisition data — children reject "two" at w=3).
Exact surface: "exactly two didn't jump" (out of 4) ↔ exactly two jumped.
Matches maxMeaning.eq 2 applied to the complement count (4 - w).
Exact inverse: "¬(exactly two jumped)" ↔ !(maxMeaning.eq 2 w).
At-least surface: "at least two didn't jump" ↔ at most two jumped.
Matches maxMeaning.ge 2 applied to the complement count.
At-least inverse: "¬(at least two jumped)" ↔ !(maxMeaning.ge 2 w).
The negation-scope asymmetry collapses under exact semantics: internal and external negation of "three" give the same result.
Lower-bound semantics preserves the negation-scope distinction.
@cite{kennedy-2015}'s resolution: exact meaning is basic, lower-bound is derived via type-shift. Both meanings are grammatically available.
Utterances: null (silence) or "Every horse didn't jump".
Instances For
Extract scope reading from latent variable.
Equations
- EveryNot.Latent.surfHowMany.scope = ScopeReading.surface
- EveryNot.Latent.surfAll.scope = ScopeReading.surface
- EveryNot.Latent.surfNone.scope = ScopeReading.surface
- EveryNot.Latent.invHowMany.scope = ScopeReading.inverse
- EveryNot.Latent.invAll.scope = ScopeReading.inverse
- EveryNot.Latent.invNone.scope = ScopeReading.inverse
Instances For
Extract QUD from latent variable.
Equations
- EveryNot.Latent.surfHowMany.qud = EveryNot.QUD.howMany
- EveryNot.Latent.invHowMany.qud = EveryNot.QUD.howMany
- EveryNot.Latent.surfAll.qud = EveryNot.QUD.all_
- EveryNot.Latent.invAll.qud = EveryNot.QUD.all_
- EveryNot.Latent.surfNone.qud = EveryNot.QUD.none_
- EveryNot.Latent.invNone.qud = EveryNot.QUD.none_
Instances For
RSA meaning derived from scopeTruth.
Null utterance is always true (uninformative baseline).
Equations
- One or more equations did not get rendered due to their size.
- Phenomena.Quantification.Studies.ScontrasPearl2021.EveryNot.uttMeaning x✝¹ Phenomena.Quantification.Studies.ScontrasPearl2021.EveryNot.Utt.null x✝ = true
Instances For
Truth table verification against the paper's utterance semantics (2).
2-horse domain for grounding truth conditions in quantifier semantics.
Instances For
Jump predicate for each world state. In the 1-horse world, exactly h1 jumped (the choice is arbitrary; only cardinality matters for the universally quantified sentence).
Equations
- One or more equations did not get rendered due to their size.
- Phenomena.Quantification.Studies.ScontrasPearl2021.EveryNot.jumpIn Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome.zero x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021.EveryNot.jumpIn Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome.two x✝ = true
Instances For
Horse model as a Montague Model.
Equations
Instances For
Equations
- One or more equations did not get rendered due to their size.
Restrictor: all entities are horses (trivial for this model).
Instances For
Scope predicate: did entity h jump in world w?
Equations
Instances For
Surface scope: ⟦every⟧(horse)(λx.¬jump(x))(w).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Inverse scope: ¬⟦every⟧(horse)(jump)(w).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Surface scope grounding: scopeTruth.surface derives from
compositional ⟦every⟧(horse)(λx.¬jump(x)), not stipulation.
Inverse scope grounding: scopeTruth.inverse derives from
negating the compositional ⟦every⟧(horse)(jump).
Map Montague ScopeConfig to data file's ScopeReading.
Equations
- EveryNot.scopeConfigToReading Semantics.Scope.ScopeConfig.surface = ScopeReading.surface
- EveryNot.scopeConfigToReading Semantics.Scope.ScopeConfig.inverse = ScopeReading.inverse
Instances For
Map data file's ScopeReading to Montague ScopeConfig.
Equations
- EveryNot.readingToScopeConfig ScopeReading.surface = Semantics.Scope.ScopeConfig.surface
- EveryNot.readingToScopeConfig ScopeReading.inverse = Semantics.Scope.ScopeConfig.inverse
Instances For
"Every horse didn't jump" as a ScopeDerivation: a single syntactic form
with multiple semantic values indexed by scope configuration.
Equations
- One or more equations did not get rendered due to their size.
Instances For
The ScopeDerivation's meaningAt matches scopeTruth for both readings.
RSA meaning is grounded in ScopeDerivation: the meaning function used
by the RSA config matches the compositional scope derivation.
The every-not scope pair has surface-entails-inverse structure: surface scope (none jumped) is a strict subset of inverse scope (not all jumped). This makes universals non-diagnostic for scope preferences — no TVJ context can distinguish isomorphic from non-isomorphic behavior.
QUD answer function: q(w) → equivalence class identifier (paper (3)). For howMany, each world is its own class (identity partition).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Inline QUD projection: explicit case analysis, kernel-reducible. For howMany, each world is its own equivalence class (identity partition). For all?/none?, worlds sharing an answer are aggregated.
Equations
- One or more equations did not get rendered due to their size.
Instances For
@cite{scontras-pearl-2021} RSA model, parametric in three priors. S1 uses QUD-projected rpow with α = 1 (§3.2). L0 does not incorporate the world prior (footnote 6).
Equations
- One or more equations did not get rendered due to their size.
Instances For
World priors follow Binomial(2, b_suc), unnormalized:
- b_suc = 0.1: P(w) ∝ (81, 18, 1) — horses unlikely to jump
- b_suc = 0.5: P(w) ∝ (1, 2, 1) — symmetric
- b_suc = 0.9: P(w) ∝ (1, 18, 81) — horses likely to jump
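The unnormalized weights quoted in this file follow directly from the binomial formula. A quick check (plain Python; `binom_weights` is this sketch's name, not a library function):

```python
from math import comb

def binom_weights(n, b):
    """Unnormalized Binomial(n, b) world prior over w = 0..n successes,
    rescaled so the smallest weight is 1."""
    raw = [comb(n, k) * b**k * (1 - b)**(n - k) for k in range(n + 1)]
    scale = 1 / min(x for x in raw if x > 0)
    return [round(x * scale) for x in raw]

# n = 2 horses (every-not model):
assert binom_weights(2, 0.1) == [81, 18, 1]
assert binom_weights(2, 0.5) == [1, 2, 1]
assert binom_weights(2, 0.9) == [1, 18, 81]
# n = 4 horses (two-not model, b_suc = 0.1):
assert binom_weights(4, 0.1) == [6561, 2916, 486, 36, 1]
```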
Baseline: low base rate (b_suc = 0.1), uniform scope, uniform QUD. Best fit to adult Experiment 1 data (§3.2, Figure 2 left).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Default: symmetric prior (b_suc = 0.5), uniform scope, uniform QUD. Binomial(2, 0.5) ∝ (1, 2, 1). Paper's default parameter setting.
Equations
- One or more equations did not get rendered due to their size.
Instances For
High base rate: b_suc = 0.9, uniform scope, uniform QUD. Tests robustness of S2 ordering to prior manipulation (Figure 2 left).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Supportive context: b_suc = 0.9 + all?-biased QUD (1:18:1 ≈ 0.05:0.9:0.05). Models S&P's supportive-context prediction (§3.3, Figure 3), motivated by @cite{gualmini-etal-2008}'s finding that QUD manipulation increases endorsement. When both pragmatic factors are supportive, scope access has negligible impact on endorsement (paper: 0.91 at P(inv)=0.1 vs 0.91 at P(inv)=0.9).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Surface-only: P(inverse) = 0. Tests whether scope ambiguity is needed to produce intermediate endorsement.
Equations
- One or more equations did not get rendered due to their size.
Instances For
QUD manipulation: favored QUD gets P = 0.9, others get P = 0.05. Default world prior (b_suc = 0.5) and default scope prior (uniform). Paper: "we see an increase in utterance endorsement from the none? (0.38) to how-many? (0.48) to all? (0.63) QUD."
None?-biased QUD: P(none?) ≈ 0.9, P(howMany?) = P(all?) ≈ 0.05. Figure 2, center panel, leftmost bar (S2 ≈ 0.38).
Equations
- One or more equations did not get rendered due to their size.
Instances For
How-many?-biased QUD: P(howMany?) ≈ 0.9, P(all?) = P(none?) ≈ 0.05. Figure 2, center panel, middle bar (S2 ≈ 0.48).
Equations
- One or more equations did not get rendered due to their size.
Instances For
All?-biased QUD: P(all?) ≈ 0.9, P(howMany?) = P(none?) ≈ 0.05. Figure 2, center panel, rightmost bar (S2 ≈ 0.63).
Equations
- One or more equations did not get rendered due to their size.
Instances For
S2 endorsement uses the generic RSAConfig.S2 from
Theories/Pragmatics/RSA/Core/Config.lean:
S2(u|w) = S2agent.policy(w, u) where S2agent.score(w, u) = cfg.L1(u, w)
(the normalized L1 posterior).
The `rsa_predict` tactic handles S2 cross-world goals via `policy_gt_cross`,
building compositional QInterval proofs for the cross-product comparison.
Baseline L1: 0-jumped > 1-jumped. Both scopes agree w=0 is true; high prior weight (81 vs 18).
Baseline L1: 1-jumped > 2-jumped. Inverse scope makes w=1 true; moderate prior advantage (18 vs 1).
Scope ambiguity boosts partial-world endorsement. With both scopes active, L1(w=1) is higher than surface-only, because inverse scope directly makes w=1 true.
Baseline S2: w0 > w1. The model predicts higher endorsement of "every horse didn't jump" when no horses jumped (none-scenario) than when one horse jumped (not-all scenario).
Baseline S2: w1 > w2. Endorsement in the not-all scenario exceeds the all scenario.
S2 ordering robust to high base rate (b_suc = 0.9). Even when L1 reverses (w1 > w2 > w0 at L1), S2 still orders w0 > w1.
S2 ordering robust to high base rate: w1 > w2.
S2 ordering robust to symmetric prior (b_suc = 0.5).
S2 ordering robust under supportive context (b_suc = 0.9, all?-biased QUD).
QUD manipulation: how-many?-biased > none?-biased (Figure 2, center panel). Favoring the identity QUD yields higher endorsement than favoring the "did none succeed?" QUD, because how-many? makes the ambiguous utterance more informative at w=1 (partial success).
QUD manipulation: all?-biased > how-many?-biased (Figure 2, center panel). Favoring the "did all succeed?" QUD yields the highest endorsement because both scope interpretations fully resolve all? at w=1 (the answer is "no" under either reading).
Utterances: null (silence) or "two horses didn't jump".
Instances For
QUDs for the two-not model (paper (7)). Five partitions over the 5-world domain. The two numeral-specific QUDs (two=?, two≥?) are added because explicitly mentioning a numeral makes that cardinality potentially relevant to the topic of conversation.
Instances For
Flattened latent variable: scope reading × QUD. 2 scopes × 5 QUDs = 10 constructors.
- surfHowMany : Latent10
- surfAll : Latent10
- surfNone : Latent10
- surfTwoExact : Latent10
- surfTwoAtLeast : Latent10
- invHowMany : Latent10
- invAll : Latent10
- invNone : Latent10
- invTwoExact : Latent10
- invTwoAtLeast : Latent10
Instances For
Extract scope reading from latent variable.
Equations
- TwoNot.Latent10.surfHowMany.scope = ScopeReading.surface
- TwoNot.Latent10.surfAll.scope = ScopeReading.surface
- TwoNot.Latent10.surfNone.scope = ScopeReading.surface
- TwoNot.Latent10.surfTwoExact.scope = ScopeReading.surface
- TwoNot.Latent10.surfTwoAtLeast.scope = ScopeReading.surface
- TwoNot.Latent10.invHowMany.scope = ScopeReading.inverse
- TwoNot.Latent10.invAll.scope = ScopeReading.inverse
- TwoNot.Latent10.invNone.scope = ScopeReading.inverse
- TwoNot.Latent10.invTwoExact.scope = ScopeReading.inverse
- TwoNot.Latent10.invTwoAtLeast.scope = ScopeReading.inverse
Instances For
Extract QUD from latent variable.
Equations
- TwoNot.Latent10.surfHowMany.qud = TwoNot.QUD5.howMany
- TwoNot.Latent10.invHowMany.qud = TwoNot.QUD5.howMany
- TwoNot.Latent10.surfAll.qud = TwoNot.QUD5.all_
- TwoNot.Latent10.invAll.qud = TwoNot.QUD5.all_
- TwoNot.Latent10.surfNone.qud = TwoNot.QUD5.none_
- TwoNot.Latent10.invNone.qud = TwoNot.QUD5.none_
- TwoNot.Latent10.surfTwoExact.qud = TwoNot.QUD5.twoExact
- TwoNot.Latent10.invTwoExact.qud = TwoNot.QUD5.twoExact
- TwoNot.Latent10.surfTwoAtLeast.qud = TwoNot.QUD5.twoAtLeast
- TwoNot.Latent10.invTwoAtLeast.qud = TwoNot.QUD5.twoAtLeast
Instances For
RSA meaning for the two-not model, parameterized by numeral reading. Null utterance is always true (uninformative baseline).
Equations
- One or more equations did not get rendered due to their size.
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.uttMeaning nr x✝¹ Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.Utt.null x✝ = true
Instances For
Exact semantics truth table (paper (6), exact reading).
At-least semantics truth table (paper (6), at-least reading).
Equations
- One or more equations did not get rendered due to their size.
Jump predicate for each world state (out of 4 horses). In partial worlds, the first k horses jumped.
Equations
- One or more equations did not get rendered due to their size.
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.jumpIn4 Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome4.w0 x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.jumpIn4 Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome4.w1 x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.jumpIn4 Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome4.w2 x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.jumpIn4 Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome4.w3 x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021.TwoNot.jumpIn4 Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome4.w4 x✝ = true
Instances For
Horse4 model as a Montague Model.
Equations
Instances For
Equations
- One or more equations did not get rendered due to their size.
Restrictor: all entities are horses (trivial for this model).
Instances For
Jump predicate as Montague semantic value.
Equations
Instances For
Exact surface scope: ⟦exactly 2⟧(horse)(λx.¬jump(x))(w). "There are exactly two horses that didn't jump."
Equations
- One or more equations did not get rendered due to their size.
Instances For
Exact inverse scope: ¬⟦exactly 2⟧(horse)(jump)(w). "It's not the case that exactly two horses jumped."
Equations
- One or more equations did not get rendered due to their size.
Instances For
Exact surface grounding: twoNotTruth .exact .surface derives from
compositional ⟦exactly 2⟧(horse)(λx.¬jump(x)), not stipulation.
Exact inverse grounding: twoNotTruth .exact .inverse derives from
negating the compositional ⟦exactly 2⟧(horse)(jump).
At-least surface scope: ⟦at least 2⟧(horse)(λx.¬jump(x))(w). "There are at least two horses that didn't jump."
Equations
- One or more equations did not get rendered due to their size.
Instances For
At-least inverse scope: ¬⟦at least 2⟧(horse)(jump)(w). "It's not the case that at least two horses jumped."
Equations
- One or more equations did not get rendered due to their size.
Instances For
At-least surface grounding: twoNotTruth .atLeast .surface derives from
compositional ⟦at least 2⟧(horse)(λx.¬jump(x)).
At-least inverse grounding: twoNotTruth .atLeast .inverse derives from
negating the compositional ⟦at least 2⟧(horse)(jump).
RSA meaning is grounded in compositional semantics: the meaning function used by the two-not RSA config matches the GQT numeral quantifiers.
The two grounding layers agree: maxMeaning .eq (count-based) and
exactly_n_sem (GQT compositional) produce the same truth values.
Chains twoNotExact_surface_matches_maxMeaning with
exact_surface_from_exactly_n_sem by transitivity.
Map data file's ScopeReading to Montague ScopeConfig.
Equations
- TwoNot.readingToScopeConfig ScopeReading.surface = Semantics.Scope.ScopeConfig.surface
- TwoNot.readingToScopeConfig ScopeReading.inverse = Semantics.Scope.ScopeConfig.inverse
Instances For
"Two horses didn't jump" as a ScopeDerivation under exact semantics:
a single syntactic form with multiple semantic values indexed by scope.
Equations
- One or more equations did not get rendered due to their size.
Instances For
"Two horses didn't jump" as a ScopeDerivation under at-least semantics.
Equations
- One or more equations did not get rendered due to their size.
Instances For
The exact two-not scope pair has INDEPENDENT readings: neither entails the other. This independence makes exact numerals diagnostic for the isomorphism effect — unlike universals, which have nested readings.
At-least two-not has NESTED readings: inverse (true at {w0,w1}) entails surface (true at {w0,w1,w2}). Like universals, at-least numerals are non-diagnostic for the isomorphism effect.
QUD projection for the 5-world domain (extends every-not QUDs; paper (7)). Explicit case analysis, kernel-reducible.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Two-not RSA model, parameterized by numeral reading and priors. Same architecture as the every-not model: S1 uses QUD-projected rpow with α = 1, L0 does not incorporate the world prior.
Equations
- One or more equations did not get rendered due to their size.
Instances For
World priors follow Binomial(4, b_suc), unnormalized.
The paper's central 2-of-4 predictions (Figure 7) use b_suc = 0.1
with low P(inverse) = 0.1 (surface scope bias), matching the baseline
parameters from the every-not model that produce low 1-of-2 endorsement.
Binomial(4, 0.1) ∝ C(4,k) · 1^k · 9^(4-k) = (6561, 2916, 486, 36, 1).
Baseline exact config: b_suc = 0.1, P(inv) = 0.1 (surface scope bias). Matches Figure 7 right panel, red bar (S2 ≈ 0.8).
Equations
- One or more equations did not get rendered due to their size.
Instances For
Baseline at-least config: same parameters, at-least numeral semantics. Matches Figure 7 left panel, red bar (S2 ≈ 0.1).
Equations
- One or more equations did not get rendered due to their size.
Instances For
The paper's central claims for the 2-of-4 context (Figure 7).
Under exact semantics with low base rate (b_suc = 0.1) and surface scope
bias (P(inv) = 0.1), surface scope pinpoints w=2 as the unique true world,
giving maximum informativity → high S2 endorsement at w=2.
Under at-least semantics with the same parameters, surface scope is true
at {w0,w1,w2}, diluting informativity → low S2 endorsement at w=2.
The 1-of-2 vs 2-of-4 asymmetry: these SAME "baseline" parameters produce
low endorsement (27.5%) in the 1-of-2 context but high endorsement in the
2-of-4 context, but ONLY under exact numeral semantics. This is the
paper's key argument for exact semantics as the basic numeral meaning.
The `rsa_predict` tactic handles the S2 computation via reflection,
building L0→S1→L1→S2 layers and comparing exact rational bounds.
Under exact semantics with baseline parameters (b_suc=0.1, P(inv)=0.1), S2 endorsement of "two horses didn't jump" at w=2 exceeds 1/2. Surface scope pinpoints w=2 as the unique true world, giving maximum informativity (Figure 7 right, red bar ≈ 0.8).
Under at-least semantics with baseline parameters, S2 endorsement at w=2 is below 1/2 (Figure 7 left, red bar ≈ 0.1). Surface scope is true at {w0,w1,w2}, diluting informativity.
Under at-least semantics with baseline parameters, S2 endorsement at w=2 is lower than under exact semantics. Exact surface has 1 true world; at-least surface has 3. (Figure 7: right panel > left panel at matching P(inv).)
The key informativity contrast: under exact semantics, surface scope has exactly 1 true world (w2), while under at-least it has 3 (w0–w2). This drives the endorsement difference via S1 informativity.
Exact inverse has 4 true worlds (w0,w1,w3,w4) — very uninformative. Since w2 is the only world where surface scope is true, inverse scope contributes nothing at w2 (it's false there), explaining why surface scope dominates the S2 prediction under exact semantics.
At-least inverse has 2 true worlds (w0,w1) — more informative than exact inverse's 4, but still less informative than exact surface's 1.
The paper's key argument: the SAME "baseline" parameters that produce low 1-of-2 endorsement also produce high 2-of-4 endorsement — but only under exact numeral semantics.
The models have different world types (JumpOutcome vs JumpOutcome4),
so we state this as two separate bounds that together establish the
1-of-2 vs 2-of-4 asymmetry:
- Every-not baseline: S2(everyNot|w=1) < 1/2 (low)
- Two-not exact baseline: S2(twoNot|w=2) > 1/2 (high)
- Two-not at-least baseline: S2(twoNot|w=2) < 1/2 (low)
The first two use "baseline" parameters (b_suc=0.1, P(inv)=0.1).
The asymmetry between the second and third is the argument for exact
semantics: changing only the numeral reading flips the prediction.
Every-not baseline endorsement at w=1 is below 1/2. This is the low-endorsement end of the 1-of-2 vs 2-of-4 asymmetry. Uses the same b_suc=0.1 parameter that the TwoNot baseline uses.
Cross-model summary (proved above):
- everyNot_baseline_endorsement_low: S2(everyNot|w=1) < 1/2
- TwoNot.exact_baseline_endorsement_high: S2(twoNot|w=2) > 1/2
- TwoNot.atleast_baseline_endorsement_low: S2(twoNot|w=2) < 1/2
Same parameters, different domain size, same model architecture.
The exact/at-least split is the only difference between high and low
endorsement in the 2-of-4 context.