Linglib.Phenomena.Quantification.Studies.ScontrasPearl2021TwoNot

@cite{scontras-pearl-2021} §4 — Two-Not RSA Model

The two-not model extends the every-not model (§3) to "two horses didn't jump" with n=4 horses. The key innovation: when n exceeds the numeral's value, exact vs at-least numeral semantics produce different truth conditions and thus different RSA predictions.

Model Structure (eqs 5–11)

Domain: "Two horses didn't jump" with n=4 horses. 5 world states (0–4 jumped). 2 utterances (null, twoNot). 10 latent states (2 scopes × 5 QUDs).

Parameters: α = 1. P(w) = Binomial(4, b_suc).
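
Written out, the binomial world prior with per-horse success probability b_suc is

$$P(w = k) \;=\; \binom{4}{k}\, b_{\text{suc}}^{\,k}\, (1 - b_{\text{suc}})^{4-k}, \qquad k \in \{0, \dots, 4\}.$$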

QUDs (eq 7)

Five QUD partitions over the 5-world domain: the QUDs carried over from the every-not model plus two numeral-specific QUDs, two=? and two≥? (described with the QUD definition below).

Central Claim (§4.2)

Under exact semantics, surface scope pinpoints w=2 as the unique true world, giving maximum informativity → high S2 endorsement at w=2. Under at-least semantics, surface scope is true at {w0,w1,w2}, diluting informativity → lower S2 endorsement at w=2.

This predicts that adults endorse "two horses didn't jump" more readily in 2-of-4 contexts under exact numeral semantics — converging with @cite{kennedy-2015} and acquisition data from @cite{musolino-2004}.
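
To see the informativity gap concretely: L0 in this model does not incorporate the world prior (see the model architecture below), so it is uniform over an utterance's true worlds. At w=2 under surface scope, and under a QUD whose cells are singletons (so projection is the identity),

$$L_0^{\text{exact}}(w{=}2 \mid \text{twoNot}, \text{surface}) = 1, \qquad L_0^{\text{at-least}}(w{=}2 \mid \text{twoNot}, \text{surface}) = \tfrac{1}{3},$$

since exact surface scope has {w2} as its sole true world while at-least surface scope spreads the same mass over {w0, w1, w2}. With α = 1, S1 inherits this gap at w=2, which is what surfaces as the endorsement contrast in Figure 7.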


Utterances: null (silence) or "two horses didn't jump".
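
As a minimal Lean sketch, with illustrative constructor names (not necessarily the module's):

```lean
/-- Sketch of the two-utterance alphabet. -/
inductive Utterance where
  | null    -- silence: true in every world (uninformative baseline)
  | twoNot  -- "two horses didn't jump"
  deriving DecidableEq, Repr
```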


QUDs for the two-not model (eq 7). Five partitions over the 5-world domain. The two numeral-specific QUDs (two=?, two≥?) are added because explicitly mentioning a numeral makes that cardinality potentially relevant to the topic of conversation.
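
A sketch of the five-constructor QUD type this paragraph describes. The two numeral-specific constructors are named in the docstring; the other three are assumed here to be carried over from the every-not model (§3), so treat those names as hypothetical:

```lean
/-- Sketch of the five QUDs (eq 7). -/
inductive QUD where
  | howMany     -- "how many jumped?" (assumed, from the every-not model)
  | allJumped   -- "did all jump?" (assumed, from the every-not model)
  | noneJumped  -- "did none jump?" (assumed, from the every-not model)
  | twoEq       -- two=?: "did exactly two jump?"
  | twoGeq      -- two≥?: "did at least two jump?"
  deriving DecidableEq, Repr
```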


Flattened latent variable: scope reading × QUD. 2 scopes × 5 QUDs = 10 constructors.
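
For illustration, the same 10-element space can be presented as a scope × QUD pair, in bijection with a flattened 10-constructor inductive (a sketch, reusing the QUD sketch above):

```lean
/-- Sketch of the two scope readings of "two horses didn't jump". -/
inductive Scope where
  | surface  -- two > not: two horses are such that they didn't jump
  | inverse  -- not > two: it's not the case that two horses jumped
  deriving DecidableEq, Repr

/-- Sketch: the latent variable as a pair (2 scopes × 5 QUDs = 10). -/
structure Latent where
  scope : Scope
  qud   : QUD
  deriving DecidableEq, Repr
```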


Extract scope reading from latent variable.


Extract QUD from latent variable.
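
Under the pair encoding sketched above, these two extractors are plain field projections; against the module's flattened 10-constructor type they would be 10-way case analyses instead:

```lean
-- With the pair encoding, extraction is immediate:
example (l : Latent) : Scope := l.scope
example (l : Latent) : QUD   := l.qud
```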


RSA meaning for the two-not model, parameterized by numeral reading. Null utterance is always true (uninformative baseline).
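
A sketch of this meaning function, reusing the Utterance/Scope sketches above, with a hypothetical Reading type as the numeral-semantics parameter and w : Fin 5 counting how many of the 4 horses jumped (so 4 - w.val didn't). Three cells match the truth tables below; the at-least/inverse cell is derived from "not (at least two jumped)" and is an assumption:

```lean
inductive Reading where
  | exact    -- "two" = exactly two
  | atLeast  -- "two" = at least two

/-- Sketch of the literal meaning (eq 6); `null` is true everywhere. -/
def meaning (r : Reading) (s : Scope) (u : Utterance) (w : Fin 5) : Bool :=
  match u with
  | .null   => true
  | .twoNot =>
    match r, s with
    | .exact,   .surface => decide (4 - w.val = 2)  -- exactly two didn't jump: w = 2
    | .exact,   .inverse => decide (w.val ≠ 2)      -- not (exactly two jumped)
    | .atLeast, .surface => decide (4 - w.val ≥ 2)  -- at least two didn't jump: w ≤ 2
    | .atLeast, .inverse => decide (w.val < 2)      -- not (at least two jumped)
```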

theorem Phenomena.Quantification.Studies.ScontrasPearl2021TwoNot.exact_truth_table

Exact semantics truth table (eq 6a): surface scope is true only at w=2; inverse scope is true at {w0, w1, w3, w4}.

theorem Phenomena.Quantification.Studies.ScontrasPearl2021TwoNot.atLeast_truth_table

At-least semantics truth table (eq 6b): surface scope is true at {w0, w1, w2}.
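
Side by side, the true-world sets for "two horses didn't jump" under each reading × scope combination. Three cells restate eq 6 as above; the at-least/inverse cell is my derivation from "not (at least two jumped)" and should be read as an assumption:

| scope   | exact (6a)       | at-least (6b) |
|---------|------------------|---------------|
| surface | {w2}             | {w0, w1, w2}  |
| inverse | {w0, w1, w3, w4} | {w0, w1}      |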

QUD projection for the 5-world domain (eq 4, extended). Explicit case analysis, kernel-reducible.
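
A sketch of a kernel-reducible projection, encoding each QUD's cells as Nat labels so that two worlds agree iff they share a cell (constructor names follow the QUD sketch above, so three of them are assumptions):

```lean
/-- Sketch of QUD projection (eq 4, extended to 5 worlds). -/
def qudProject : QUD → Fin 5 → Nat
  | .howMany,    w => w.val                       -- singleton cells: full resolution
  | .allJumped,  w => if w.val = 4 then 1 else 0  -- {w4} vs. the rest
  | .noneJumped, w => if w.val = 0 then 1 else 0  -- {w0} vs. the rest
  | .twoEq,      w => if w.val = 2 then 1 else 0  -- two=?: {w2} vs. the rest
  | .twoGeq,     w => if w.val ≥ 2 then 1 else 0  -- two≥?: {w2, w3, w4} vs. the rest
```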


Two-not RSA model, parameterized by numeral reading and priors. Same architecture as the every-not model: S1 uses QUD-projected rpow with α = 1, L0 does not incorporate the world prior.
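
In standard RSA notation, the architecture described here comes out as follows (with $[\![u]\!]^s$ the scope-indexed literal meaning and $q(\cdot)$ the QUD projection; S2's marginalization over scope and QUD follows the every-not module and is not restated):

$$L_0(w \mid u, s) \;\propto\; [\![u]\!]^s(w)$$

$$S_1(u \mid w, s, q) \;\propto\; \Big(\sum_{w'\,:\,q(w')=q(w)} L_0(w' \mid u, s)\Big)^{\alpha}, \qquad \alpha = 1$$

The rpow mentioned above is the real-valued power implementing the exponent α.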


World priors follow Binomial(4, b_suc), unnormalized.

The paper's central 2-of-4 predictions (Figure 7) use b_suc = 0.1 with low P(inverse) = 0.1 (surface scope bias), matching the baseline parameters from the every-not model that produce low 1-of-2 endorsement.

Binomial(4, 0.1) ∝ C(4,k) · 1^k · 9^(4-k) = (6561, 2916, 486, 36, 1).
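
These integer weights are the binomial pmf scaled by 10^4. A quick sanity check, assuming Nat.choose is in scope (e.g. via Mathlib):

```lean
-- C(4,k) · 1^k · 9^(4-k) for k = 0..4
#eval (List.range 5).map (fun k => Nat.choose 4 k * 9 ^ (4 - k))
-- [6561, 2916, 486, 36, 1]
```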
                        

Baseline exact config: b_suc = 0.1, P(inv) = 0.1 (surface scope bias). Matches Figure 7 right panel, red bar (S2 ≈ 0.8).


Baseline at-least config: same parameters, at-least numeral semantics. Matches Figure 7 left panel, red bar (S2 ≈ 0.1).


Symmetric exact config: b_suc = 0.5, uniform scope/QUD. Binomial(4, 0.5) ∝ (1, 4, 6, 4, 1).
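
The symmetric weights are just the binomial coefficients, checkable the same way:

```lean
-- C(4,k) for k = 0..4
#eval (List.range 5).map (Nat.choose 4)
-- [1, 4, 6, 4, 1]
```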


Symmetric at-least config: b_suc = 0.5, uniform scope/QUD.


The paper's central claims for the 2-of-4 context (Figure 7).

Under exact semantics with low base rate (b_suc = 0.1) and surface scope bias (P(inv) = 0.1), surface scope pinpoints w=2 as the unique true world, giving maximum informativity → high S2 endorsement at w=2.

Under at-least semantics with the same parameters, surface scope is true at {w0, w1, w2}, diluting informativity → low S2 endorsement at w=2.

The 1-of-2 vs 2-of-4 asymmetry: these same "baseline" parameters produce low endorsement (27.5%) in the 1-of-2 context but high endorsement in the 2-of-4 context, and the asymmetry arises only under exact numeral semantics. This is the paper's key argument for exact semantics as the basic numeral meaning.

S2 predictions may require sorry because the state space (5 worlds × 10 latents × 2 utterances = 100 S1 scores) exceeds the heartbeat budget for kernel-level proof checking.
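
Where a full proof is wanted instead of sorry, one generic Lean idiom (not necessarily what this module does) is to raise the budget for a single declaration:

```lean
-- Raise the heartbeat budget locally; the example below is a stand-in
-- for an S2 prediction theorem discharged by `decide`.
set_option maxHeartbeats 1600000 in
example : (List.range 100).foldl (· + ·) 0 = 4950 := by decide
```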
                                

Under exact semantics with baseline parameters (b_suc = 0.1, P(inv) = 0.1), S2 endorsement of "two horses didn't jump" at w=2 exceeds 1/2. Surface scope pinpoints w=2 as the unique true world, giving maximum informativity (Figure 7 right, red bar ≈ 0.8).

Under at-least semantics with baseline parameters, S2 endorsement at w=2 is lower than under exact semantics. Exact surface has 1 true world; at-least surface has 3. (Figure 7: right panel > left panel at matching P(inv).)

Exact inverse has 4 true worlds (w0, w1, w3, w4) — very uninformative. Since w2 is the only world where surface scope is true, inverse scope contributes nothing at w2 (it is false there), explaining why surface scope dominates the S2 prediction under exact semantics.