
Linglib.Phenomena.Quantification.Studies.ScontrasPearl2021RSA

@cite{scontras-pearl-2021} — Scope Ambiguity RSA Model #

@cite{scontras-pearl-2021} "When pragmatics matters more for truth-value judgments: An investigation of quantifier scope ambiguity" Glossa 6(1): 110.

The Model (§3.1) #

Domain: "Every horse didn't jump" with n=2 horses. 3 world states (0, 1, 2 jumped). 2 utterances (null, everyNot). 6 latent states (2 scopes × 3 QUDs).

Parameters: α = 1 (§3.2, p.15). P(w) = Binomial(n, b_suc).
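The Binomial world prior can be checked numerically. This is a hedged Python sketch, not linglib's Lean code; `binom_pmf` is an illustrative name. The rescaled weights match the unnormalized priors used by the configurations below:

```python
from fractions import Fraction
from math import comb

def binom_pmf(b, n=2):
    # Binomial(n, b): P(k of n horses jumped), for k = 0..n
    return [comb(n, k) * b**k * (1 - b)**(n - k) for k in range(n + 1)]

for b in [Fraction(1, 10), Fraction(1, 2), Fraction(9, 10)]:
    weights = binom_pmf(b)
    scale = 1 / min(weights)  # rescale so the smallest weight is 1
    print(f"b_suc={float(b)}: P(w) proportional to", [int(w * scale) for w in weights])
```

With exact rational arithmetic this prints the (81, 18, 1), (1, 2, 1), and (1, 18, 81) weight triples documented later in this file.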

QUDs (eqs 3–4) #

Three QUD partitions over worlds: howMany (identity partition: which world obtains?), all? (did all of the horses jump?), and none? (did none of them jump?).
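As a minimal sketch (illustrative Python mirroring the answer-function formulation of eq. 4; names are mine, not linglib identifiers), each QUD maps a world to an answer, and worlds with the same answer fall into the same cell:

```python
# Worlds: 0, 1, or 2 of the n = 2 horses jumped.
worlds = [0, 1, 2]

quds = {
    "howMany": lambda w: w,       # identity partition: {0}, {1}, {2}
    "all?":    lambda w: w == 2,  # did all jump?  partition: {2}, {0, 1}
    "none?":   lambda w: w == 0,  # did none jump? partition: {0}, {1, 2}
}

for name, q in quds.items():
    cells = {}
    for w in worlds:
        cells.setdefault(q(w), []).append(w)
    print(name, "partitions worlds into", sorted(map(tuple, cells.values())))
```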

S2 vs L1 (eq 8) #

The paper models endorsement as S2, not L1. S2(u|w) ∝ P_{L1}(w|u), using the normalized L1 posterior. This matters because:

The S2 ordering S2(everyNot|w0) > S2(everyNot|w1) > S2(everyNot|w2) is robust across all prior configurations, even when L1 orderings vary (e.g., highBaseCfg reverses L1 ordering but preserves S2 ordering).
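A toy illustration of why, with fabricated L1 posteriors (not the model's actual values): S2 renormalizes within each world across utterances, so a reversed L1 ordering over worlds need not reverse the S2 ordering.

```python
# Illustrative (made-up) L1 posteriors P_L1(w | u) for the two utterances:
L1 = {
    "everyNot": {0: 0.3, 1: 0.5, 2: 0.2},  # L1 ordering here is w1 > w0 > w2
    "null":     {0: 0.1, 1: 0.6, 2: 0.3},
}

def S2(u, w):
    # eq 8: S2(u | w) proportional to P_L1(w | u), normalized across utterances
    return L1[u][w] / sum(L1[u2][w] for u2 in L1)

vals = [S2("everyNot", w) for w in (0, 1, 2)]
print([round(v, 3) for v in vals])  # prints [0.75, 0.455, 0.4]
assert vals[0] > vals[1] > vals[2]  # S2 orders w0 > w1 > w2 despite the L1 reversal
```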

Compositional Grounding #

The truth conditions scopeTruth are grounded in linglib's formal semantics infrastructure via every_sem (Quantifier.lean), ScopeConfig/ScopeDerivation (Montague/Scope.lean), and FiniteModel/Model (Montague/Basic.lean):

See surface_from_every_sem, inverse_from_every_sem, and scopeDerivation_matches_scopeTruth.

Key Findings (Figure 2) #

S2 endorsement for "every horse didn't jump" in the partial world (w=1). The "Paper value" column gives S&P's computed model predictions (not experimental data; S&P is a modeling paper explaining the @cite{musolino-lidz-2003} findings):

| Config | S2(everyNot \| w=1) | Paper value |
|---|---|---|
| b_suc = 0.1 (baseline) | 0.288 | ~0.29 |
| b_suc = 0.5 (default) | 0.506 | ~0.48 |
| b_suc = 0.9 (high base rate) | 0.796 | ~0.80 |
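The table can be reproduced by a from-scratch reimplementation of the pipeline described in this file: L0 without the world prior (footnote 6), QUD-projected S1 with α = 1 (§3.2), L1 over world × scope × QUD, and S2 per eq 8. This is a hedged Python sketch with illustrative names, not linglib's Lean code; it uses exact rational arithmetic via Fraction:

```python
from fractions import Fraction
from itertools import product

WORLDS = [0, 1, 2]                      # number of horses (of 2) that jumped
UTTS = ["null", "everyNot"]
SCOPES = ["surface", "inverse"]
QUDS = ["howMany", "all?", "none?"]

def true_in(u, scope, w):
    if u == "null":
        return True                      # null utterance is always true
    return w == 0 if scope == "surface" else w < 2  # everyNot per scope

def qud_answer(q, w):
    if q == "howMany":
        return w                         # identity partition
    return w == 2 if q == "all?" else w == 0

def L0(u, scope):
    # uniform over true worlds; no world prior (footnote 6)
    true_ws = [w for w in WORLDS if true_in(u, scope, w)]
    return {w: Fraction(1, len(true_ws)) if w in true_ws else Fraction(0)
            for w in WORLDS}

def S1(w, scope, q):
    # alpha = 1: utility is the QUD-projected L0 probability of q(w)
    scores = {u: sum(L0(u, scope)[w2] for w2 in WORLDS
                     if qud_answer(q, w2) == qud_answer(q, w)) for u in UTTS}
    z = sum(scores.values())
    return {u: scores[u] / z for u in UTTS}

def L1_posterior(world_pr, scope_pr, qud_pr):
    post = {}                            # post[u][w] = P_L1(w | u), scope & QUD marginalized
    for u in UTTS:
        joint = {(w, s, q): world_pr[w] * scope_pr[s] * qud_pr[q] * S1(w, s, q)[u]
                 for w, s, q in product(WORLDS, SCOPES, QUDS)}
        z = sum(joint.values())
        post[u] = {w: sum(v for (w2, _, _), v in joint.items() if w2 == w) / z
                   for w in WORLDS}
    return post

def S2(post, w):
    # eq 8: normalize the L1 posterior across utterances, per world
    return post["everyNot"][w] / (post["null"][w] + post["everyNot"][w])

def binom_prior(b):                      # Binomial(2, b) over 0, 1, 2 jumpers
    return {0: (1 - b) ** 2, 1: 2 * b * (1 - b), 2: b ** 2}

def uniform(xs):
    return {x: Fraction(1, len(xs)) for x in xs}

for b, table in [(Fraction(1, 10), 0.288), (Fraction(1, 2), 0.506), (Fraction(9, 10), 0.796)]:
    post = L1_posterior(binom_prior(b), uniform(SCOPES), uniform(QUDS))
    vals = [float(S2(post, w)) for w in WORLDS]
    print(f"b_suc={float(b)}: S2(everyNot|w) =", [round(v, 3) for v in vals])
    assert abs(vals[1] - table) < 0.0005   # matches the table row
    assert vals[0] > vals[1] > vals[2]     # robust S2 ordering

# At b_suc = 0.9 the L1 ordering reverses (w1 > w2 > w0) but S2 does not.
high = L1_posterior(binom_prior(Fraction(9, 10)), uniform(SCOPES), uniform(QUDS))
assert high["everyNot"][1] > high["everyNot"][2] > high["everyNot"][0]
```

Under these assumptions, S2(everyNot|w=1) rounds to 0.288, 0.506, and 0.796 for the three priors, and the w0 > w1 > w2 ordering holds in every configuration.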


Developmental Continuity (§3.3) #

The same model architecture explains both child and adult behavior. Children's isomorphic (surface-scope) preference follows from low b_suc priors. Adult-like inverse scope access emerges from supportive contexts (high b_suc, all?-biased QUD) — the same model, different priors.


Utterances: null (silence) or "Every horse didn't jump".


QUDs partition worlds by the question under discussion (eqs 3–4). Three QUD partitions for n=2 worlds.


Flattened latent variable: scope reading × QUD. Flat inductive avoids Prod, keeping proof terms tractable for the kernel checker.


RSA meaning derived from the data file's scopeTruth. Null utterance is always true (uninformative baseline).
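As a sketch (illustrative Python names, not the Lean signatures), the meaning function amounts to:

```python
def scope_truth(scope, w):
    # "every horse didn't jump" over worlds w = number of jumpers (of 2):
    # surface: no horse jumped; inverse: not all horses jumped
    return w == 0 if scope == "surface" else w < 2

def meaning(u, scope, w):
    # the null utterance is always true (uninformative baseline)
    return True if u == "null" else scope_truth(scope, w)

assert all(meaning("null", s, w) for s in ("surface", "inverse") for w in (0, 1, 2))
assert [meaning("everyNot", "surface", w) for w in (0, 1, 2)] == [True, False, False]
assert [meaning("everyNot", "inverse", w) for w in (0, 1, 2)] == [True, True, False]
```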


2-horse domain for grounding truth conditions in quantifier semantics.


Jump predicate for each world state. In the 1-horse world, exactly h1 jumped (the choice is arbitrary; only cardinality matters for the universally quantified sentence).


Surface scope: ⟦every⟧(horse)(λx.¬jump(x))(w).


Inverse scope: ¬⟦every⟧(horse)(jump)(w).


                        "Every horse didn't jump" as a ScopeDerivation: a single syntactic form with multiple semantic values indexed by scope configuration.


RSA meaning is grounded in ScopeDerivation: the meaning function used by the RSA config matches the compositional scope derivation.

The every-not scope pair has surface-entails-inverse structure: surface scope (none jumped) is a strict subset of inverse scope (not all jumped). This makes universals non-diagnostic for scope preferences — no TVJ context can distinguish isomorphic from non-isomorphic behavior.
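The strict entailment can be checked by brute force over the three worlds. This is an illustrative Python sketch of the two truth conditions (using the convention from the jump-predicate docstring that exactly h1 jumped in the 1-horse world), not linglib's compositional code:

```python
HORSES = ["h1", "h2"]

def jumped(w, h):
    # World w = number of horses that jumped; in w = 1, exactly h1 jumped.
    return h in {0: [], 1: ["h1"], 2: ["h1", "h2"]}[w]

def surface(w):   # every > not: no horse jumped
    return all(not jumped(w, h) for h in HORSES)

def inverse(w):   # not > every: not every horse jumped
    return not all(jumped(w, h) for h in HORSES)

for w in (0, 1, 2):
    print(f"w={w}: surface={surface(w)}, inverse={inverse(w)}")

# Strict subset: surface true implies inverse true, but not conversely (w = 1).
assert all(inverse(w) for w in (0, 1, 2) if surface(w))
assert inverse(1) and not surface(1)
```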

QUD answer function: q(w) → equivalence class identifier (eq 4). For howMany, each world is its own class (identity partition).


Inline QUD projection: explicit case analysis, kernel-reducible. For howMany, each world is its own equivalence class (identity partition). For all?/none?, worlds sharing an answer are aggregated.

noncomputable def Phenomena.Quantification.Studies.ScontrasPearl2021RSA.cfg (worldPr : ScontrasPearl2021.JumpOutcome) (hwp : ∀ (w : ScontrasPearl2021.JumpOutcome), 0 < worldPr w) (scopePr : ScontrasPearl2021.ScopeReading) (hsp : ∀ (s : ScontrasPearl2021.ScopeReading), 0 < scopePr s) (qudPr : QUD) (hqp : ∀ (q : QUD), 0 < qudPr q) :

@cite{scontras-pearl-2021} RSA model, parametric in three priors. S1 uses QUD-projected rpow with α = 1 (§3.2). L0 does not incorporate the world prior (footnote 6).


World priors follow Binomial(2, b_suc), unnormalized:
- b_suc = 0.1: P(w) ∝ (81, 18, 1) — horses unlikely to jump
- b_suc = 0.5: P(w) ∝ (1, 2, 1) — symmetric
- b_suc = 0.9: P(w) ∝ (1, 18, 81) — horses likely to jump


Baseline: low base rate (b_suc = 0.1), uniform scope, uniform QUD. Best fit to adult Experiment 1 data (§3.2, Figure 2 left).


Default: symmetric prior (b_suc = 0.5), uniform scope, uniform QUD. Binomial(2, 0.5) ∝ (1, 2, 1). Paper's default parameter setting.


High base rate: b_suc = 0.9, uniform scope, uniform QUD. Tests robustness of the S2 ordering to prior manipulation (Figure 2 left).


Supportive context: b_suc = 0.9 + all?-biased QUD (1:18:1 ≈ 0.05:0.9:0.05). Models the @cite{gualmini-etal-2008} early-success manipulation, where context pragmatically supports inverse scope (§3.3, Figure 3).


Surface-only: P(inverse) = 0. Tests whether scope ambiguity is needed to produce intermediate endorsement.


S2 endorsement (eq 8) uses the generic RSAConfig.S2 from Theories/Pragmatics/RSA/Core/Config.lean: S2(u|w) = S2agent.policy(w, u), where S2agent.score(w, u) = cfg.L1(u, w) (the normalized L1 posterior).

The `rsa_predict` tactic handles S2 cross-world goals via `policy_gt_cross`, building compositional QInterval proofs for the cross-product comparison.

Scope ambiguity boosts partial-world endorsement: with both scopes active, L1(w=1) is higher than under surface-only, because inverse scope directly makes w=1 true.

Baseline S2: w0 > w1. The model predicts higher endorsement of "every horse didn't jump" when no horses jumped (the none scenario) than when one horse jumped (the not-all scenario).

The S2 ordering is robust to a high base rate (b_suc = 0.9): even when L1 reverses (w1 > w2 > w0 at L1), S2 still orders w0 > w1.