Linglib.Phenomena.ScalarImplicatures.Studies.PottsEtAl2016

@cite{potts-etal-2016}: Embedded Implicatures as Pragmatic Inferences

"Embedded Implicatures as Pragmatic Inferences under Compositional Lexical Uncertainty." Journal of Semantics 33(4): 755–802.

The Puzzle

Scalar implicatures interact asymmetrically with logical operators:

• In upward-entailing (UE) contexts such as "every player hit some of his shots," the embedded enrichment of "some" to "some but not all" is available and preferred.
• In downward-entailing (DE) contexts such as "no player hit some of his shots," that enrichment is blocked and the global reading is preferred.

The Model: Compositional Lexical Uncertainty

The key innovation is lexical uncertainty: L1 marginalizes over possible lexica (refinements of "some") rather than using a fixed literal semantics. Two lexica:

• Weak "some": at least one (the plain existential reading).
• Strong "some": some but not all (the exhaustified reading).

This uses the standard RSAConfig latent variable mechanism: Latent := Lexicon. No special infrastructure needed — the same mechanism handles observations (@cite{goodman-stuhlmuller-2013}), scope readings (@cite{scontras-pearl-2021}), and QUDs (@cite{kao-etal-2014-hyperbole}).

Architecture

The experiment (Section 6) uses 3 players, each with outcome N (nothing) / S (scored but not aced) / A (aced). The 10 equivalence classes are the multisets of 3 outcomes. Utterances are PlayerQ × ShotQ (outer × inner quantifier): "every/exactly one/no player hit all/none/some of his shots."
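The space sizes above can be checked with a short Python sketch (illustrative code, not part of the linglib Lean sources):

```python
from itertools import combinations_with_replacement, product

OUTCOMES = "NSA"  # N = nothing, S = scored but not aced, A = aced

# Worlds are multisets of 3 player outcomes: combinations with replacement.
WORLDS = ["".join(w) for w in combinations_with_replacement(OUTCOMES, 3)]

QUANTS = ["every", "exactly_one", "no"]  # outer quantifier: over players
INNER = ["all", "none", "some"]          # inner quantifier: over a player's shots
UTTERANCES = [f"{q} player hit {p} of his shots" for q, p in product(QUANTS, INNER)]

print(len(WORLDS))      # 10 equivalence classes
print(len(UTTERANCES))  # 9 quantifier pairs (plus a null utterance in the model)
```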

Predictions

The asymmetry arises from monotonicity: strengthening "some" narrows the set of true worlds in UE contexts (more informative) but widens it in DE contexts (less informative), so an informativity-driven listener adopts the strong lexicon under "every" and the weak lexicon under "no".

World state as equivalence class over 3 players' outcomes. Each player's outcome: N (nothing), S (scored but not aced), A (aced). 10 classes = multisets of size 3 from {N, S, A}.

Inner quantifier: over a player's shots.


Outer quantifier: over players.


Utterance: outer quantifier × inner quantifier, plus null.


Lexicon: how "some" is interpreted.


Count of players satisfying the inner predicate, under a given lexicon.

• all: number who aced
• none_: number who did nothing
• some_: depends on lexicon:
  • weak: number who scored (≥ 1 shot)
  • strong: number who scored but did not ace

Truth value of an utterance in a world under a lexicon.

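The two definitions above (whose Lean equations are elided on this page) can be sketched as Python stand-ins with illustrative names:

```python
def inner_count(world: str, pred: str, lexicon: str) -> int:
    """Number of players (out of 3) whose outcome satisfies the inner predicate."""
    if pred == "all":        # aced: hit all of his shots
        return world.count("A")
    if pred == "none":       # hit none of his shots
        return world.count("N")
    if lexicon == "weak":    # weak "some": scored at least one shot
        return world.count("S") + world.count("A")
    return world.count("S")  # strong "some": scored but did not ace

# Outer quantifier as a required count of players.
TARGET = {"every": 3, "exactly_one": 1, "no": 0}

def true_in(world: str, quant: str, pred: str, lexicon: str) -> bool:
    """Truth of '<quant> player hit <pred> of his shots' in a world."""
    return inner_count(world, pred, lexicon) == TARGET[quant]

# The lexica agree whenever the inner predicate is not "some":
assert true_in("NNA", "exactly_one", "all", "weak") == true_in("NNA", "exactly_one", "all", "strong")
```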

@cite{potts-etal-2016} lexical uncertainty model.

Latent variable = Lexicon (weak vs strong "some"). L0: literal listener under lexicon l. S1: belief-based scoring, rpow(L0(w|u), α). L1: marginalizes over lexica with a uniform prior.

Uniform priors, α = 1, no utterance cost. The qualitative predictions (DE blocking, UE enrichment) hold across a range of rationality parameters.
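The L0 → S1 → L1 pipeline just described can be run end to end as a Python sketch (hypothetical names, not linglib's Lean identifiers; uniform priors, α = 1, no cost, as stated above):

```python
from itertools import combinations_with_replacement

WORLDS = ["".join(w) for w in combinations_with_replacement("NSA", 3)]
LEXICA = ["weak", "strong"]
TARGET = {"every": 3, "exactly_one": 1, "no": 0}
UTTS = [(q, p) for q in TARGET for p in ("all", "none", "some")] + [("null", None)]

def true_in(w, u, lex):
    quant, pred = u
    if quant == "null":      # null utterance: true everywhere
        return True
    if pred == "all":        # number who aced
        n = w.count("A")
    elif pred == "none":     # number who did nothing
        n = w.count("N")
    elif lex == "weak":      # weak "some": scored at least one shot
        n = w.count("S") + w.count("A")
    else:                    # strong "some": scored but did not ace
        n = w.count("S")
    return n == TARGET[quant]

def L0(w, u, lex):
    # Literal listener: uniform over the worlds where u is true under lex.
    true_worlds = [v for v in WORLDS if true_in(v, u, lex)]
    return 1.0 / len(true_worlds) if true_in(w, u, lex) else 0.0

def S1(u, w, lex, alpha=1.0):
    # Speaker: belief-based scoring L0(w|u)^alpha, no utterance cost.
    scores = {v: L0(w, v, lex) ** alpha for v in UTTS}
    return scores[u] / sum(scores.values())

def L1(u):
    # Pragmatic listener: marginalize over lexica with a uniform prior.
    joint = {w: sum(S1(u, w, lex) for lex in LEXICA) for w in WORLDS}
    total = sum(joint.values())
    return {w: p / total for w, p in joint.items()}

# DE blocking: the global reading wins under "no".
posterior = L1(("no", "some"))
assert max(posterior, key=posterior.get) == "NNN"
# UE enrichment: the embedded implicature wins under "every".
posterior = L1(("every", "some"))
assert max(posterior, key=posterior.get) == "SSS"
```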


Lexica agree on all non-"some" utterances. The lexicon only affects the interpretation of "some"; "all" and "none" are unambiguous.

DE context: strong "some" widens the set of true worlds relative to weak. Under "no player hit some of his shots":

• Weak "some": only NNN satisfies (1 world)
• Strong "some": NNN, NNA, NAA, AAA satisfy (4 worlds)

Widening makes the utterance less informative under the strong lexicon.

UE context: strong "some" narrows the set of true worlds relative to weak. Under "every player hit some of his shots":

• Weak "some": SSS, SSA, SAA, AAA satisfy (4 worlds)
• Strong "some": only SSS satisfies (1 world)

Narrowing makes the utterance more informative under the strong lexicon.
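Both world counts can be verified with a few lines of Python (a hypothetical helper, assuming the weak/strong "some" semantics and the 10-world space described above):

```python
from itertools import combinations_with_replacement

WORLDS = ["".join(w) for w in combinations_with_replacement("NSA", 3)]

def some_count(w, lex):
    # weak "some": scored at least one; strong "some": scored but not aced
    return w.count("S") + w.count("A") if lex == "weak" else w.count("S")

# DE: "no player hit some of his shots" (count == 0)
de = {lex: [w for w in WORLDS if some_count(w, lex) == 0] for lex in ("weak", "strong")}
# UE: "every player hit some of his shots" (count == 3)
ue = {lex: [w for w in WORLDS if some_count(w, lex) == 3] for lex in ("weak", "strong")}

print(de)  # weak -> ['NNN'] (1 world); strong -> ['NNN', 'NNA', 'NAA', 'AAA'] (4 worlds)
print(ue)  # weak -> 4 worlds; strong -> ['SSS'] (1 world)
```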

"No player hit some of his shots" → NNN preferred.

Under the weak lexicon, only NNN makes the utterance true (1 world, maximally informative). Under the strong lexicon, NNN, NNA, NAA, and AAA all make it true (4 worlds, less informative). L1 marginalizes over lexica weighted by informativity, preferring the weak lexicon for this utterance. Result: NNN receives the highest posterior (the global reading).

"Every player hit some of his shots" → SSS preferred.

Under the strong lexicon, only SSS makes the utterance true (1 world, maximally informative). Under the weak lexicon, SSS, SSA, SAA, and AAA all make it true (4 worlds, less informative). L1 marginalizes and prefers the informative strong lexicon for this utterance. Result: SSS receives the highest posterior (the embedded implicature).

The 6 qualitative findings from the @cite{potts-etal-2016} LU model: 3 DE blocking predictions (global reading preferred under "no") + 3 UE enrichment predictions (enriched reading preferred under "every").


All findings.


Map each empirical finding to the RSA model prediction that accounts for it.


The RSA model accounts for all 6 qualitative findings from @cite{potts-etal-2016}.

The outer quantifiers "every" and "no" in the @cite{potts-etal-2016} model agree with the generic quantity domain semantics from RSA.Domains.Quantity.meaning. This grounds the stipulated utteranceTruth in the shared quantifier infrastructure.

See also: GoodmanStuhlmuller2013.quantifier_meaning_grounded.

The @cite{potts-etal-2016} predictions connect to three other parts of linglib:

1. someAllBlocking (ScalarImplicatures.Basic): The empirical datum that "some" implicatures are present in UE and blocked in DE contexts. The Potts model derives both sides: UE enrichment (§7) and DE blocking (§6).

2. Geurts2010 (ScalarImplicatures.Studies.Geurts2010): Notes that the minimal LU model inverts the predictions, but "the full Potts et al. model derives the correct pattern." The theorems here are the formal backing.

3. EmbeddedSIPrediction (LexicalUncertainty.Compositional): Tracks embedded SI predictions by context type. The Potts model demonstrates the negation case: the local reading is dispreferred in DE (the global NNN reading is preferred).

The Potts model matches the someAllBlocking empirical pattern: UE enrichment present (implicatureInUE = true) and DE blocking present (implicatureInDE = false).