@cite{scontras-pearl-2021} — Scope Ambiguity RSA Model #
@cite{scontras-pearl-2021} "When pragmatics matters more for truth-value judgments: An investigation of quantifier scope ambiguity" Glossa 6(1): 110.
The Model (§3.1) #
Domain: "Every horse didn't jump" with n=2 horses. 3 world states (0, 1, 2 jumped). 2 utterances (null, everyNot). 6 latent states (2 scopes × 3 QUDs).
- L0: L0(w|u,i) ∝ δ_{⟦u⟧ⁱ(w)} (literal semantics, no world prior; footnote 6)
- S1: S1(u|w,i,q) ∝ exp(α · log L0(⟦q⟧(w)|u,i)) (QUD-projected; L0 is conditioned on u and i only)
- L1: L1(w,i,q|u) ∝ P(w) · P(i) · P(q) · S1(u|w,i,q)
- S2: S2(u|w) ∝ Σ_{i,q} L1(w,i,q|u) = L1(w|u)
- Endorsement: P(endorse u | w_obs) = S2(u|w_obs)
Parameters: α = 1 (§3.2, p.15). P(w) = Binomial(n, b_suc).
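The literal layer can be sketched numerically. Below is a minimal Python toy, independent of the Lean formalization (function names are illustrative, not linglib's API), showing the two scope meanings and the no-world-prior L0 of footnote 6:

```python
# Two scope readings of "every horse didn't jump" over worlds counted by
# how many of the n = 2 horses jumped.
WORLDS = [0, 1, 2]

def true_in(utt, scope, w):
    """Literal truth: null is vacuously true; surface (forall > not) means
    nobody jumped; inverse (not > forall) means not everybody jumped."""
    if utt == "null":
        return True
    return w == 0 if scope == "surface" else w < 2

def L0(utt, scope):
    """L0(w|u,i): uniform over the worlds where u is true under scope i
    (no world prior enters here; footnote 6)."""
    trues = [w for w in WORLDS if true_in(utt, scope, w)]
    return {w: (1 / len(trues) if w in trues else 0.0) for w in WORLDS}

assert L0("everyNot", "surface") == {0: 1.0, 1: 0.0, 2: 0.0}
assert L0("everyNot", "inverse") == {0: 0.5, 1: 0.5, 2: 0.0}
```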
QUDs (eqs 3–4) #
Three QUD partitions over worlds:
- how-many?: identity — partitions {w0}, {w1}, {w2}
- all?: w = n? — partitions {w0,w1} vs {w2}
- none?: w = 0? — partitions {w0} vs {w1,w2}
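The three partitions can be rendered as answer functions; a Python sketch (illustrative names, not the Lean definitions):

```python
# The three QUD partitions (eqs 3-4) as answer functions: a world maps to
# its equivalence-class identifier, and S1 projects L0 through these cells.
WORLDS = [0, 1, 2]  # how many of the n = 2 horses jumped
N = 2

QUDS = {
    "howMany": lambda w: w,       # identity partition: {w0}, {w1}, {w2}
    "all?":    lambda w: w == N,  # {w0, w1} vs {w2}
    "none?":   lambda w: w == 0,  # {w0} vs {w1, w2}
}

def cell(qud, w):
    """Worlds sharing w's answer under the QUD (the cell S1 projects onto)."""
    return [v for v in WORLDS if QUDS[qud](v) == QUDS[qud](w)]

assert cell("howMany", 1) == [1]
assert cell("all?", 0) == [0, 1]
assert cell("none?", 2) == [1, 2]
```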
S2 vs L1 (eq 8) #
The paper models endorsement as S2, not L1. S2(u|w) ∝ P_{L1}(w|u), using the normalized L1 posterior. This matters because:
- L1 conditions on the heard utterance (same normalizer for all worlds)
- S2 conditions on the observed world (different normalizer per world)
- worldPrior enters S2 through L1's normalization denominator, so different worldPriors produce different S2 values
The S2 ordering S2(everyNot|w0) > S2(everyNot|w1) > S2(everyNot|w2) is robust across all prior configurations, even when L1 orderings vary (e.g., highBaseCfg reverses L1 ordering but preserves S2 ordering).
Compositional Grounding #
The truth conditions scopeTruth are grounded in linglib's formal semantics
infrastructure via every_sem (Quantifier.lean), ScopeConfig/ScopeDerivation
(Montague/Scope.lean), and FiniteModel/Model (Montague/Basic.lean):
- Surface (∀>¬): every_sem horseModel horse_sem (λh.¬jump(h))(w)
- Inverse (¬>∀): ¬(every_sem horseModel horse_sem (jump)(w))
See surface_from_every_sem, inverse_from_every_sem, and
scopeDerivation_matches_scopeTruth.
Key Findings (Figure 2) #
S2 endorsement for "every horse didn't jump" in the partial world (w=1). The "Paper value" column is S&P's computed model predictions (not experimental data — S&P is a modeling paper explaining @cite{musolino-lidz-2003} findings):
| Config | S2(everyNot \| w=1) | Paper value |
|---|---|---|
| b_suc=0.1 (baseline) | 0.288 | ~0.29 |
| b_suc=0.5 (default) | 0.506 | ~0.48 |
| b_suc=0.9 (high base rate) | 0.796 | ~0.80 |
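These values can be reproduced end to end. Below is a numeric sketch of the whole pipeline in Python, independent of the Lean code (function and variable names are illustrative), parametric in the world and QUD priors:

```python
from itertools import product

WORLDS, UTTS, SCOPES = [0, 1, 2], ["null", "everyNot"], ["surface", "inverse"]
QUDS = {"howMany": lambda w: w, "all?": lambda w: w == 2, "none?": lambda w: w == 0}

def L0(u, i):
    # Literal listener: uniform over worlds where u is true under scope i.
    trues = [w for w in WORLDS if u == "null" or (w == 0 if i == "surface" else w < 2)]
    return {w: (1 / len(trues) if w in trues else 0.0) for w in WORLDS}

def S1(u, w, i, q):
    # S1(u|w,i,q): proportional to L0 mass on w's QUD cell under u (alpha = 1).
    def proj(utt):
        l0 = L0(utt, i)
        return sum(l0[v] for v in WORLDS if QUDS[q](v) == QUDS[q](w))
    return proj(u) / sum(proj(utt) for utt in UTTS)

def L1(u, world_prior, qud_prior):
    # Joint posterior over (w, i, q), marginalized to w; uniform scope prior.
    joint = {(w, i, q): world_prior[w] * 0.5 * qud_prior[q] * S1(u, w, i, q)
             for w, i, q in product(WORLDS, SCOPES, QUDS)}
    z = sum(joint.values())
    return {w: sum(p for (w2, _, _), p in joint.items() if w2 == w) / z
            for w in WORLDS}

def S2(u, w, world_prior, qud_prior):
    # Endorsement (eq 8): normalize the L1 posterior at the observed world.
    scores = {utt: L1(utt, world_prior, qud_prior)[w] for utt in UTTS}
    return scores[u] / sum(scores.values())

uniform_qud = {q: 1 / 3 for q in QUDS}
for b, expected in [(0.1, 0.288), (0.5, 0.506), (0.9, 0.796)]:
    wp = {0: (1 - b) ** 2, 1: 2 * b * (1 - b), 2: b ** 2}  # Binomial(2, b_suc)
    vals = [S2("everyNot", w, wp, uniform_qud) for w in WORLDS]
    assert vals[0] > vals[1] > vals[2]       # robust ordering w0 > w1 > w2
    assert abs(vals[1] - expected) < 1e-3    # matches the table row

# Supportive context (qualitative check only): an all?-biased QUD prior at
# b_suc = 0.9 raises partial-world endorsement above the uniform-QUD value.
biased_qud = {"howMany": 0.05, "all?": 0.9, "none?": 0.05}
wp9 = {0: 0.01, 1: 0.18, 2: 0.81}
assert S2("everyNot", 1, wp9, biased_qud) > S2("everyNot", 1, wp9, uniform_qud)
```

The three asserted values are S&P's computed predictions from the table; the final inequality mirrors the §3.3 supportive-context claim (Figure 3) qualitatively, not a published number.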
Developmental Continuity (§3.3) #
The same model architecture explains both child and adult behavior.
Children's isomorphic (surface-scope) preference follows from low b_suc
priors. Adult-like inverse scope access emerges from supportive contexts
(high b_suc, all?-biased QUD) — the same model, different priors.
Utterances: null (silence) or "Every horse didn't jump".
Extract scope reading from latent variable.
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfHowMany.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.surface
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfAll.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.surface
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfNone.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.surface
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invHowMany.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.inverse
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invAll.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.inverse
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invNone.scope = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.inverse
Extract QUD from latent variable.
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfHowMany.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.howMany
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invHowMany.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.howMany
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfAll.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.all_
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invAll.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.all_
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.surfNone.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.none_
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Latent.invNone.qud = Phenomena.Quantification.Studies.ScontrasPearl2021RSA.QUD.none_
RSA meaning derived from the data file's scopeTruth.
Null utterance is always true (uninformative baseline).
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.uttMeaning x✝¹ Phenomena.Quantification.Studies.ScontrasPearl2021RSA.Utt.null x✝ = true
Truth table verification against the paper's equations (3a-b).
2-horse domain for grounding truth conditions in quantifier semantics.
Jump predicate for each world state. In the 1-horse world, exactly h1 jumped (the choice is arbitrary; only cardinality matters for the universally quantified sentence).
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.jumpIn Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome.zero x✝ = false
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.jumpIn Phenomena.Quantification.Studies.ScontrasPearl2021.JumpOutcome.two x✝ = true
Horse model as a Montague Model.
Restrictor: all entities are horses (trivial for this model).
Scope predicate: did entity h jump in world w?
Surface scope: ⟦every⟧(horse)(λx.¬jump(x))(w).
Inverse scope: ¬⟦every⟧(horse)(jump)(w).
Surface scope grounding: scopeTruth.surface derives from
compositional ⟦every⟧(horse)(λx.¬jump(x)), not stipulation.
Inverse scope grounding: scopeTruth.inverse derives from
negating the compositional ⟦every⟧(horse)(jump).
Map Montague ScopeConfig to data file's ScopeReading.
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.scopeConfigToReading Semantics.Scope.ScopeConfig.surface = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.surface
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.scopeConfigToReading Semantics.Scope.ScopeConfig.inverse = Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.inverse
Map data file's ScopeReading to Montague ScopeConfig.
Equations
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.readingToScopeConfig Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.surface = Semantics.Scope.ScopeConfig.surface
- Phenomena.Quantification.Studies.ScontrasPearl2021RSA.readingToScopeConfig Phenomena.Quantification.Studies.ScontrasPearl2021.ScopeReading.inverse = Semantics.Scope.ScopeConfig.inverse
"Every horse didn't jump" as a ScopeDerivation: a single syntactic form
with multiple semantic values indexed by scope configuration.
The ScopeDerivation's meaningAt matches scopeTruth for both readings.
RSA meaning is grounded in ScopeDerivation: the meaning function used
by the RSA config matches the compositional scope derivation.
The every-not scope pair has surface-entails-inverse structure: surface scope (none jumped) is a strict subset of inverse scope (not all jumped). This makes universals non-diagnostic for scope preferences — no TVJ context can distinguish isomorphic from non-isomorphic behavior.
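The entailment pattern is easy to check concretely; a small Python sketch (independent of the Lean proofs) over the 2-horse domain:

```python
# Truth sets of the two readings of "every horse didn't jump" over worlds
# counted by how many of the 2 horses jumped.
WORLDS = {0, 1, 2}
surface = {w for w in WORLDS if w == 0}  # forall > not: nobody jumped
inverse = {w for w in WORLDS if w < 2}   # not > forall: not everybody jumped

# Strict subset: the surface reading entails the inverse reading.
assert surface < inverse
# So no world makes the surface reading true and the inverse reading false:
# a "true" judgment can never isolate surface-only (isomorphic) behavior.
assert not any(w in surface and w not in inverse for w in WORLDS)
```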
QUD answer function: q(w) → equivalence class identifier (eq 4). For howMany, each world is its own class (identity partition).
Inline QUD projection: explicit case analysis, kernel-reducible. For howMany, each world is its own equivalence class (identity partition). For all?/none?, worlds sharing an answer are aggregated.
@cite{scontras-pearl-2021} RSA model, parametric in three priors. S1 uses QUD-projected rpow with α = 1 (§3.2). L0 does not incorporate the world prior (footnote 6).
World priors follow Binomial(2, b_suc), unnormalized:
- b_suc = 0.1: P(w) ∝ (81, 18, 1) — horses unlikely to jump
- b_suc = 0.5: P(w) ∝ (1, 2, 1) — symmetric
- b_suc = 0.9: P(w) ∝ (1, 18, 81) — horses likely to jump
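These weight vectors follow directly from the binomial pmf; a quick Python check (names illustrative):

```python
from math import comb

# Unnormalized Binomial(2, b_suc) weights: P(w = k) = C(2, k) b^k (1-b)^(2-k)
# for k of the 2 horses jumping.
def world_prior(b):
    return [comb(2, k) * b**k * (1 - b) ** (2 - k) for k in range(3)]

low = world_prior(0.1)
assert [round(x / low[2]) for x in low] == [81, 18, 1]
mid = world_prior(0.5)
assert [round(x / mid[0]) for x in mid] == [1, 2, 1]
high = world_prior(0.9)
assert [round(x / high[0]) for x in high] == [1, 18, 81]
```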
Baseline: low base rate (b_suc = 0.1), uniform scope, uniform QUD. Best fit to adult Experiment 1 data (§3.2, Figure 2 left).
Default: symmetric prior (b_suc = 0.5), uniform scope, uniform QUD. Binomial(2, 0.5) ∝ (1, 2, 1). Paper's default parameter setting.
High base rate: b_suc = 0.9, uniform scope, uniform QUD. Tests robustness of S2 ordering to prior manipulation (Figure 2 left).
Supportive context: b_suc = 0.9 + all?-biased QUD (1:18:1 ≈ 0.05:0.9:0.05). Models the @cite{gualmini-etal-2008} early-success manipulation, where context pragmatically supports inverse scope (§3.3, Figure 3).
Surface-only: P(inverse) = 0. Tests whether scope ambiguity is needed to produce intermediate endorsement.
S2 endorsement (eq 8) uses the generic RSAConfig.S2 from
Theories/Pragmatics/RSA/Core/Config.lean:
S2(u|w) = S2agent.policy(w, u) where S2agent.score(w, u) = cfg.L1(u, w)
(the normalized L1 posterior).
The `rsa_predict` tactic handles S2 cross-world goals via `policy_gt_cross`,
building compositional QInterval proofs for the cross-product comparison.
Baseline L1: 0-jumped > 1-jumped. Both scopes agree w=0 is true; high prior weight (81 vs 18).
Baseline L1: 1-jumped > 2-jumped. Inverse scope makes w=1 true; moderate prior advantage (18 vs 1).
Scope ambiguity boosts partial-world endorsement. With both scopes active, L1(w=1) is higher than surface-only, because inverse scope directly makes w=1 true.
Baseline S2: w0 > w1. The model predicts higher endorsement of "every horse didn't jump" when no horses jumped (none-scenario) than when one horse jumped (not-all scenario).
Baseline S2: w1 > w2. Endorsement in the not-all scenario exceeds the all scenario.
S2 ordering robust to high base rate (b_suc = 0.9). Even when L1 reverses (w1 > w2 > w0 at L1), S2 still orders w0 > w1.
S2 ordering robust to high base rate: w1 > w2.
S2 ordering robust to symmetric prior (b_suc = 0.5).
S2 ordering robust under supportive context (b_suc = 0.9, all?-biased QUD).