RSA Embedded Scalar Implicatures: Simplified Model (For Analysis) #
@cite{bergen-levy-goodman-2016} @cite{geurts-2010} @cite{potts-etal-2016}
This file implements a simplified 2-lexicon model to analyze why minimal Lexical Uncertainty models fail to derive embedded implicature patterns.
Status #
The ℚ-based RSA evaluation infrastructure (RSA.Eval, boolToRat, LURSA) has been removed. Type definitions and the model limitation analysis are preserved. RSA computations need to be re-implemented using the new RSAConfig framework.
This File's Purpose #
Demonstrates that a minimal 2-lexicon, 3-world model gives inverted predictions, motivating the richer structure in the full model.
World states for embedded scalar scenarios.
- `none : EmbeddedWorld`: nobody solved any problems
- `someNotAll : EmbeddedWorld`: someone solved some but not all problems
- `someAll : EmbeddedWorld`: someone solved all problems
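The underlying Lean source is presumably a plain three-constructor inductive along these lines (a sketch, not verified against the repository; the `deriving BEq` clause is inferred from the auto-generated `BEq` instance):

```lean
inductive EmbeddedWorld where
  | none        -- nobody solved any problems
  | someNotAll  -- someone solved some but not all problems
  | someAll     -- someone solved all problems
  deriving BEq
```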
Utterances for the DE context: "No one solved {some/all} problems".
We need scalar alternatives for RSA to reason about informativity.
- `noSome : DEUtterance`: "No one solved some problems"
- `noAll : DEUtterance`: "No one solved all problems"
- `null : DEUtterance`: trivially true null utterance
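The utterance type is presumably the analogous inductive (again a sketch inferred from the documentation, not the actual source):

```lean
inductive DEUtterance where
  | noSome  -- "No one solved some problems"
  | noAll   -- "No one solved all problems"
  | null    -- trivially true null utterance
  deriving BEq
```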
The Lexical Uncertainty Model #
Each lexicon L assigns meanings to "some":
- L_base: "some" = at-least-one (literal)
- L_refined: "some" = some-but-not-all (Neo-Gricean strengthened)
The listener reasons over which lexicon the speaker is using.
Base lexicon meaning: "some" = at-least-one
"No one solved some problems" under L_base:
- True only when nobody solved any problems
Equations (truth table for `lexBaseMeaning`):

| utterance | `none` | `someNotAll` | `someAll` |
| --- | --- | --- | --- |
| `noSome` | `true` | `false` | `false` |
| `noAll` | `true` | `true` | `false` |
| `null` | `true` | `true` | `true` |
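The equation list for `lexBaseMeaning` is equivalent to the following compact definition (a sketch; the actual source may pattern-match differently):

```lean
def lexBaseMeaning : DEUtterance → EmbeddedWorld → Bool
  | .noSome, w => w == .none     -- "no one solved some", some = at-least-one
  | .noAll,  w => w != .someAll  -- "no one solved all"
  | .null,   _ => true
```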
Refined lexicon meaning: "some" = some-but-not-all
"No one solved some problems" under L_refined:
- True when nobody solved "some-but-not-all"
- This is TRUE when someone solved ALL (they didn't solve "some-but-not-all")!
Equations (truth table for `lexRefinedMeaning`):

| utterance | `none` | `someNotAll` | `someAll` |
| --- | --- | --- | --- |
| `noSome` | `true` | `false` | `true` |
| `noAll` | `true` | `true` | `false` |
| `null` | `true` | `true` | `true` |
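The refined equations are equivalent to the following sketch; note the crucial `noSome` clause, which comes out true at `someAll` because nobody in that world solved "some-but-not-all":

```lean
def lexRefinedMeaning : DEUtterance → EmbeddedWorld → Bool
  | .noSome, w => w != .someNotAll  -- true at someAll!
  | .noAll,  w => w != .someAll
  | .null,   _ => true
```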
Utterances for the UE context: "Someone solved {some/all} problems".
- `someSome : UEUtterance`: "Someone solved some problems"
- `someAll : UEUtterance`: "Someone solved all problems"
- `null : UEUtterance`: trivially true null utterance
Base lexicon meaning for UE: "some" = at-least-one
Equations (truth table for `lexBaseUEMeaning`):

| utterance | `none` | `someNotAll` | `someAll` |
| --- | --- | --- | --- |
| `someSome` | `false` | `true` | `true` |
| `someAll` | `false` | `false` | `true` |
| `null` | `true` | `true` | `true` |
Refined lexicon meaning for UE: "some" = some-but-not-all
Equations (truth table for `lexRefinedUEMeaning`):

| utterance | `none` | `someNotAll` | `someAll` |
| --- | --- | --- | --- |
| `someSome` | `false` | `true` | `false` |
| `someAll` | `false` | `false` | `true` |
| `null` | `true` | `true` | `true` |
Analysis of Results #
With α = 1 and uniform priors, the simplified 2-lexicon model gives INVERTED predictions compared to the empirical pattern. This motivates the need for richer model structure.
DE Context ("No one solved some"):
- The simple model predicts that L_refined (the local reading) wins, the opposite of the attested pattern.
UE Context ("Someone solved some"):
- The simple model predicts that L_base (the global reading) wins, again the opposite of the attested pattern.
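Since the ℚ-based evaluation code was removed, here is a minimal Python sketch of the lexical-uncertainty chain (literal listener L0 → speaker S1 → lexicon-marginalizing listener L1, with α = 1 and uniform priors) that reproduces the inverted predictions. The truth tables are transcribed from the equation lists above; everything else is an assumed re-implementation, not the Lean RSAConfig framework:

```python
WORLDS = ["none", "someNotAll", "someAll"]

# Truth tables transcribed from lexBaseMeaning / lexRefinedMeaning (DE)
# and lexBaseUEMeaning / lexRefinedUEMeaning (UE).
DE_LEXICA = {
    "base":    {"noSome": {"none"},
                "noAll":  {"none", "someNotAll"},
                "null":   set(WORLDS)},
    "refined": {"noSome": {"none", "someAll"},  # "some" = some-but-not-all
                "noAll":  {"none", "someNotAll"},
                "null":   set(WORLDS)},
}
UE_LEXICA = {
    "base":    {"someSome": {"someNotAll", "someAll"},
                "someAll":  {"someAll"},
                "null":     set(WORLDS)},
    "refined": {"someSome": {"someNotAll"},
                "someAll":  {"someAll"},
                "null":     set(WORLDS)},
}

def lexicon_posterior(lexica, utterance, alpha=1.0):
    """P(lexicon | utterance) after marginalizing worlds out of L1.

    L0(w|u,L) is uniform over the worlds where u is true under L;
    S1(u|w,L) renormalizes L0^alpha over utterances; L1 multiplies in
    uniform world and lexicon priors and sums over worlds.
    """
    mass = {}
    for name, lex in lexica.items():
        total = 0.0
        for w in WORLDS:
            scores = {u: (1.0 / len(ws) if w in ws else 0.0) ** alpha
                      for u, ws in lex.items()}
            z = sum(scores.values())  # > 0: null is true everywhere
            total += scores[utterance] / z
        mass[name] = total
    z = sum(mass.values())
    return {name: m / z for name, m in mass.items()}

de = lexicon_posterior(DE_LEXICA, "noSome")
ue = lexicon_posterior(UE_LEXICA, "someSome")
print(de)  # refined (local) wins: the inverted DE prediction
print(ue)  # base (global) wins: the inverted UE prediction
```

Swapping in the ℚ-valued tables from the original file should give the same orderings, since only ratios of truth-set sizes matter here.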
Why This Happens #
The key asymmetry is world coverage:
In DE:
- L_base: noSome true in {none} -- 1 world
- L_refined: noSome true in {none, someAll} -- 2 worlds
L_refined makes the utterance true in MORE worlds, so even though L_base is more informative, L_refined gets extra probability mass.
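A worked check of this mass asymmetry under the standard LU chain (α = 1, uniform priors; the arithmetic is mine, not from the original file). Normalizing the literal-listener values over the three utterances at each world gives:

```latex
S_1(\mathrm{noSome}\mid \mathrm{none}, L_{\mathrm{base}})
  = \frac{1}{\,1+\tfrac12+\tfrac13\,} = \tfrac{6}{11}\approx 0.55,
\qquad
S_1(\mathrm{noSome}\mid \mathrm{none}, L_{\mathrm{ref}})
  = \frac{\tfrac12}{\,\tfrac12+\tfrac12+\tfrac13\,} = \tfrac{3}{8},
\qquad
S_1(\mathrm{noSome}\mid \mathrm{someAll}, L_{\mathrm{ref}})
  = \frac{\tfrac12}{\,\tfrac12+\tfrac13\,} = \tfrac{3}{5}.
```

Summing over worlds, L_refined collects 3/8 + 3/5 = 39/40 ≈ 0.98 of speaker mass against L_base's 6/11 ≈ 0.55, so the listener's lexicon posterior favors L_refined roughly 0.64 to 0.36.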
What Potts et al. Actually Do #
The paper succeeds because of richer model structure:
- Multiple refinable items: not just "some", but also proper names and predicates like "scored" vs. "aced" (their equation 14)
- Richer world space: 3 players × 3 outcomes gives 27 raw worlds, which collapse to 10 equivalence classes under player symmetry
- Message alternatives: the full cross-product of quantifiers and predicates
- Low λ = 0.1: the speaker is nearly uniform, so implicatures emerge from lexicon structure rather than from informativity
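The world-space count can be sanity-checked: with 3 players each receiving one of 3 outcomes there are 27 raw worlds, and identifying worlds up to which player did what (i.e. treating a world as a multiset of outcomes; an assumption about how the equivalence classes are formed) leaves 10 classes:

```python
from itertools import product

OUTCOMES = ["none", "scored", "aced"]  # outcome labels assumed for illustration
raw_worlds = list(product(OUTCOMES, repeat=3))    # one outcome per player
classes = {tuple(sorted(w)) for w in raw_worlds}  # quotient by player permutation
print(len(raw_worlds), len(classes))  # 27 10
```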