BToM ↔ Intensional Logic Correspondence #
Bayesian Theory of Mind (Frank & Goodman) and Hintikka-style intensional semantics use different primitives — probabilistic credences vs. categorical accessibility — but are conjectured here to be two views of the same structure.
- Montague/Hintikka: R(x, w, w') means w' is compatible with x's beliefs in w
- Goodman/Frank: P_x(w' | w) is x's credence in w' given w
Accessibility = non-zero belief: w' is accessible from w for agent x iff x assigns positive credence to w' given w.
Equations
Instances For
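A minimal Lean sketch of this bridge, under the assumption that credences are real-valued; the names `Agent`, `World`, `credence`, and `Accessible` are illustrative, not the library's:

```lean
import Mathlib.Data.Real.Basic

variable {Agent World : Type}

/-- Doxastic accessibility recovered from graded credence:
`w'` is accessible from `w` for `x` iff `x` gives `w'` positive credence. -/
def Accessible (credence : Agent → World → World → ℝ)
    (x : Agent) (w w' : World) : Prop :=
  credence x w w' > 0
```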
□_x p (agent x believes p) iff P_x(p) = 1. Categorical doxastic necessity is the probability-1 limit.
Equations
Instances For
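Via the accessibility bridge, the probability-1 limit can be sketched without measure theory: when `credence x w ·` is a probability mass function, "every positively-credenced world satisfies `p`" is exactly `P_x(p) = 1`. An illustrative Lean rendering (names are hypothetical, not the library's):

```lean
import Mathlib.Data.Real.Basic

variable {Agent World : Type}

/-- Categorical belief as the probability-1 limit: `x` believes `p` at `w`
iff every world to which `x` assigns positive credence satisfies `p`. -/
def Believes (credence : Agent → World → World → ℝ)
    (x : Agent) (w : World) (p : World → Prop) : Prop :=
  ∀ w', credence x w w' > 0 → p w'
```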
Rigid designators = common ground with credence 1. An intension is rigid iff every agent in every world assigns it the same value across all positively-credenced worlds.
Equations
- Core.Conjectures.rigid_iff_common_ground W E τ credence = ∀ (f : Core.Intension W τ), f.IsRigid ↔ ∀ (x : E) (w w' : W), credence x w w' > 0 → f w' = f w
Instances For
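For instance (illustrative only, mirroring the statement's shape), a constant intension trivially satisfies the common-ground side of the biconditional:

```lean
import Mathlib.Data.Real.Basic

variable {W E τ : Type}

/-- A constant intension assigns the same value in every world, so it
satisfies the right-hand side of `rigid_iff_common_ground` for any
credence function. -/
example (credence : E → W → W → ℝ) (c : τ) (x : E) (w w' : W)
    (_ : credence x w w' > 0) : (fun _ : W => c) w' = (fun _ : W => c) w :=
  rfl
```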
RSA ≅ EXH Characterization #
When do the Rational Speech Act (RSA) pragmatic theory and grammatical exhaustification (EXH) make identical predictions?
- @cite{frank-goodman-2012}; @cite{bergen-levy-goodman-2016} — RSA
- @cite{fox-2007}; @cite{chierchia-fox-spector-2012} — EXH
RSA and EXH coincide under specific conditions: uniform prior, high rationality, depth one, no QUD sensitivity.
This is the "Characterization Theorem" — the conjectured boundary between notational variants and genuine empirical disagreement.
Equations
- One or more equations did not get rendered due to their size.
Instances For
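For concreteness, the standard depth-one RSA recursion presupposed by the conditions above (following @cite{frank-goodman-2012}; α is the speaker rationality parameter, and utterance costs are omitted):

```latex
L_0(w \mid u) \propto [\![u]\!](w)\, P(w) \qquad
S_1(u \mid w) \propto L_0(w \mid u)^{\alpha} \qquad
L_1(w \mid u) \propto S_1(u \mid w)\, P(w)
```

Under the uniform-prior condition, P(w) is constant and cancels; the conjectured equivalence with EXH concerns the high-α, depth-one regime of L_1.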
RSA Algebraic Metatheory #
Structural properties of the RSA listener/speaker recursion as a mathematical object (fixed points, limits, monotonicity).
Neural-Symbolic Emergence #
Can RSA-like pragmatic reasoning emerge from raw language model next-token distributions via appropriate coarse-graining?
Coarsening a language model's token-level predictions into world-level meanings recovers RSA pragmatic distributions (approximately).
Equations
- Core.Conjectures.rsa_from_coarsened_lm coarsened L1 = ∀ (u : U) (w : W), ∀ ε > 0, (coarsened u w - L1 u w) ^ 2 < ε
Instances For
Almog Independence Conjecture #
The three mechanisms of direct reference (designation, singular proposition, referential use) are empirically independent: natural language supplies expressions exercising every non-empty subset.
See Semantics.Reference.Almog2014.designation_indep_singularProp etc. for the formal content.
Almog's independence thesis: for any two of the three mechanisms, there exists an expression exhibiting one but not the other. Stated abstractly — the formal witness is in Almog2014.lean.
Equations
Instances For
Phase-Bounded Exhaustification #
Phases as local computation domains for pragmatic inference. @cite{charlow-2014}: scope islands = evaluation boundaries. Chierchia/Fox/@cite{chierchia-fox-spector-2012}: Exh applies at scope positions. Hypothesis: phase boundaries delimit where Exh/RSA applies.
Exh applies at phase boundaries: alternatives are evaluated within the phase domain, not globally.
If computation is phase-bounded, then local exhaustification (within a phase) and global exhaustification (across the whole structure) agree.
Equations
- Core.Conjectures.exh_at_phase_boundaries exh_local exh_global phase_bounded = (phase_bounded → ∀ (u : U) (w : W), exh_local u w ↔ exh_global u w)
Instances For
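For orientation, a simple formulation of Exh relative to an alternative set Alt (the "negate all non-entailed alternatives" version; @cite{fox-2007}'s innocent exclusion refines which alternatives may be negated). Presumably `exh_local` and `exh_global` differ only in whether Alt is restricted to the current phase:

```latex
\mathrm{Exh}_{\mathrm{Alt}}(p) \;=\; p \,\wedge \bigwedge_{\substack{q \in \mathrm{Alt}(p)\\ p \,\not\models\, q}} \neg q
```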
Phase-bounded RSA: pragmatic computation is local to phases. S1 optimizes within the current phase, not globally.
If two utterances are in the same phase, S1's local computation (within the phase) matches S1's global computation.
Equations
- Core.Conjectures.rsa_phase_locality S1_local S1_global same_phase = ∀ (u₁ u₂ : U) (w : W), same_phase u₁ u₂ → S1_local u₁ w = S1_global u₁ w
Instances For
Phase-bounded alternative computation: alternatives for an expression are computed from material within the same phase, not globally.
This connects to @cite{fox-katzir-2011}: the set of alternatives depends on what's locally available.
Equations
- Core.Conjectures.phase_bounded_alternatives local_alts global_alts in_same_phase = ∀ (u : U), (∀ a ∈ local_alts u, in_same_phase u a) ∧ ∀ a ∈ global_alts u, ¬in_same_phase u a → a ∉ local_alts u
Instances For
Simplicity Explains Semantic Universals #
@cite{van-de-pol-etal-2023}: quantifiers satisfying the Barwise & Cooper universals (conservativity, quantity, monotonicity) have shorter minimal description length, measured by Lempel-Ziv complexity on truth-table representations.
- Conservativity: Q(A,B) = Q(A, A ∩ B)
- Quantity (isomorphism closure): depends only on cardinalities
- Monotonicity: upward or downward monotone in the scope argument
Formal content: Semantics.Lexical.Determiner.Quantifier.SatisfiesUniversals
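A minimal Lean sketch of two of the three universals, with `every` as a witness (illustrative names, not the library's SatisfiesUniversals; quantity is omitted since it needs cardinality machinery):

```lean
import Mathlib.Data.Set.Basic

variable {D : Type}

/-- Conservativity: the scope only matters inside the restrictor. -/
def Conservative (Q : Set D → Set D → Prop) : Prop :=
  ∀ A B, Q A B ↔ Q A (A ∩ B)

/-- Upward monotonicity in the scope argument. -/
def UpwardMonotone (Q : Set D → Set D → Prop) : Prop :=
  ∀ A B B', B ⊆ B' → Q A B → Q A B'

/-- `every` as a generalized quantifier. -/
def every : Set D → Set D → Prop := fun A B => A ⊆ B

example : Conservative (every : Set D → Set D → Prop) :=
  fun A B => ⟨fun h x hx => ⟨hx, h hx⟩, fun h x hx => (h hx).2⟩

example : UpwardMonotone (every : Set D → Set D → Prop) :=
  fun _ _ _ hBB' hAB x hx => hBB' (hAB hx)
```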
Quantifiers satisfying the B&C semantic universals have strictly lower complexity than those violating them, across multiple complexity measures.
Measures: Lempel-Ziv complexity (LZ), minimal description length (MDL) in a language-of-thought grammar.
The strongest effect is for monotonicity, then conservativity; quantity shows a weaker but robust effect.
Equations
- Core.Conjectures.simplicity_explains_universals Q satisfies_universals complexity = ∀ (q₁ q₂ : Q), satisfies_universals q₁ → ¬satisfies_universals q₂ → complexity q₁ < complexity q₂
Instances For
Monotonicity is the strongest predictor of simplicity, stronger than conservativity or quantity alone.
Equations
- One or more equations did not get rendered due to their size.
Instances For
O-Corner Gap #
Natural languages systematically lexicalize three corners of the Square of Opposition but leave the O-corner (particular negative) unlexicalized:
| Corner | Quantifier | Modal | Lexicalized? |
|---|---|---|---|
| A | every | must | ✓ |
| E | no | can't | ✓ |
| I | some | can | ✓ |
| O | not-every | — | ✗ |
The O-corner is always expressed periphrastically (outer negation of A: "not every", "doesn't have to"). @cite{horn-2001} argues this gap is pragmatically explained: the scalar implicature of I (some → not all) recovers O's content, making a dedicated lexical item for O redundant.
See Core.SquareOfOpposition for the formal square infrastructure.
See Implicature.ScalarImplicatures for the some → not-all derivation.
The O-corner of the Square of Opposition is systematically not lexicalized in natural languages. A, E, I have dedicated lexical items (every/no/some, must/can't/can) but O is expressed only as ¬A.
Equations
Instances For
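As a sanity check on the square itself, subalternation (A entails I) holds given existential import, here supplied as an explicit witness assumption; the second example shows the O-corner expressed only periphrastically as ¬A. Both are illustrative sketches, not the library's Core.SquareOfOpposition:

```lean
variable {D : Type}

/-- A ⇒ I: `every P Q` entails `some P Q`, given a `P`-witness. -/
example (P Q : D → Prop) (hA : ∀ x, P x → Q x) (hP : ∃ x, P x) :
    ∃ x, P x ∧ Q x :=
  hP.elim fun x hx => ⟨x, hx, hA x hx⟩

/-- The O-corner, available only periphrastically as ¬A ("not every"). -/
example (P Q : D → Prop) (x : D) (hPx : P x) (hQx : ¬Q x) :
    ¬∀ y, P y → Q y :=
  fun h => hQx (h x hPx)
```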
The pragmatic explanation for the O-corner gap: scalar implicature of the I-corner recovers the O-corner's content, making lexicalization of O redundant.
Using the weak scalar term (I = "some") implicates the negation of the strong term (¬A = "not all" = O). Since O is always recoverable from I via Gricean reasoning, there is no communicative pressure to lexicalize it.
Reference: @cite{horn-2001}, A Natural History of Negation, §4.5.
Equations
- Core.Conjectures.o_corner_pragmatic_explanation Utt meaning I_utt O_content _scalar_implicature_of_I = (meaning I_utt → O_content)
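The final step of the Gricean derivation can be isolated as pure propositional reasoning, with the speaker's belief operator left abstract (hypothetical names, not the library's Implicature.ScalarImplicatures):

```lean
/-- Quantity reasoning gives ¬B(all); the competence assumption
B(all) ∨ B(¬all) then yields B(¬all), i.e. the O-corner content. -/
example (B_all B_notAll : Prop)
    (quantity : ¬B_all) (competence : B_all ∨ B_notAll) : B_notAll :=
  competence.resolve_left quantity
```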