
Linglib.Core.Conjectures

BToM ↔ Intensional Logic Correspondence #

Bayesian Theory of Mind (Frank & Goodman) and Hintikka-style intensional semantics use different primitives (probabilistic credences vs. categorical accessibility) but are conjectured to be two views of the same structure.

def Core.Conjectures.accessibility_iff_positive_credence (W : Type u_1) (E : Type u_2) (R : ModalLogic.AgentAccessRel W E) (credence : E → W → W → ℝ) : Prop

Accessibility = non-zero belief: w' is accessible from w for agent x iff x assigns positive credence to w' given w.

def Core.Conjectures.box_iff_credence_one (W : Type u_1) (E : Type u_2) (R : ModalLogic.AgentAccessRel W E) (credence : E → W → W → ℝ) : Prop

    □_x p (agent x believes p) iff P_x(p) = 1. Categorical doxastic necessity is the probability-1 limit.
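The two conjectures above can be illustrated together in a small Python sketch. The credence table and the proposition `p` are hypothetical toy data, not part of the library:

```python
# Sketch: accessibility as positive credence, and categorical belief (box)
# as the probability-1 limit of credence. All toy data is hypothetical.
credence = {  # credence[w][w'] for a fixed agent x
    "w1": {"w1": 0.5, "w2": 0.5, "w3": 0.0},
    "w2": {"w1": 0.0, "w2": 1.0, "w3": 0.0},
    "w3": {"w1": 0.25, "w2": 0.25, "w3": 0.5},
}

def accessible(w, w2):
    """w2 is accessible from w iff x assigns it positive credence."""
    return credence[w][w2] > 0

def box(p, w):
    """x believes p at w iff p holds at every accessible world."""
    return all(p(w2) for w2 in credence[w] if accessible(w, w2))

def prob(p, w):
    """x's credence in p at w."""
    return sum(c for w2, c in credence[w].items() if p(w2))

p = lambda w: w in {"w1", "w2"}   # proposition true at w1 and w2 only
# Box and probability-1 coincide at every evaluation world.
for w in credence:
    assert box(p, w) == (prob(p, w) == 1.0)
```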

def Core.Conjectures.rigid_iff_common_ground (W : Type u_1) (E : Type u_2) (τ : Type u_3) (credence : E → W → W → ℝ) : Prop

      Rigid designators = common ground with credence 1. An intension is rigid iff every agent in every world assigns it the same value across all positively-credenced worlds.
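A minimal sketch of this rigidity test, with a hypothetical credence table and two toy intensions:

```python
# Sketch: rigidity as agreement across all positively-credenced worlds,
# for every agent and evaluation world. All toy data is hypothetical.
worlds = ["w1", "w2"]
credence = {"x": {"w1": {"w1": 0.5, "w2": 0.5}, "w2": {"w1": 0.0, "w2": 1.0}}}

def rigid(intension):
    """True iff the intension never varies across live (credence > 0) worlds."""
    return all(
        intension(w2) == intension(w)
        for agent in credence
        for w in worlds
        for w2, c in credence[agent][w].items() if c > 0
    )

hesperus = lambda w: "venus"                   # constant: rigid
brightest = {"w1": "venus", "w2": "mars"}      # world-dependent: not rigid
assert rigid(hesperus)
assert not rigid(lambda w: brightest[w])
```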


        RSA ≅ EXH Characterization #

When do the Rational Speech Act (RSA) framework and grammatical exhaustification (EXH) make identical predictions?

def Core.Conjectures.rsa_exh_equivalence {U : Type u_1} {W : Type u_2} (rsa_positive exh_true : U → W → Prop) (uniform_prior high_rationality depth_one no_qud : Prop) : Prop

        RSA and EXH coincide under specific conditions: uniform prior, high rationality, depth one, no QUD sensitivity.

        This is the "Characterization Theorem" — the conjectured boundary between notational variants and genuine empirical disagreement.
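A minimal numeric sketch of one instance of this boundary: in the classic some/all scalar setup, with a uniform prior and one level of recursion, the pragmatic listener L1 converges on the exhaustified reading of "some" as rationality alpha grows. The two-utterance lexicon is a toy, not the library's:

```python
# Toy RSA model: L1("some") concentrates on the some-not-all world,
# matching exh("some") = some-and-not-all, as alpha grows.
utterances = ["some", "all"]
worlds = ["some-not-all", "all"]
truth = {("some", "some-not-all"): 1, ("some", "all"): 1,
         ("all", "some-not-all"): 0, ("all", "all"): 1}

def L0(u):
    """Literal listener: uniform prior restricted to u's truth set."""
    z = sum(truth[(u, w)] for w in worlds)
    return {w: truth[(u, w)] / z for w in worlds}

def S1(w, alpha):
    """Speaker: soft-max of listener informativity."""
    scores = {u: L0(u)[w] ** alpha for u in utterances}
    z = sum(scores.values())
    return {u: s / z for u, s in scores.items()}

def L1(u, alpha):
    """Pragmatic listener: Bayes over the speaker, uniform prior."""
    scores = {w: S1(w, alpha)[u] for w in worlds}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# exh("some") = some-and-not-all: true only in the some-not-all world.
assert L1("some", alpha=20.0)["some-not-all"] > 0.999
```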


          RSA Algebraic Metatheory #

          Structural properties of the RSA listener/speaker recursion as a mathematical object (fixed points, limits, monotonicity).

def Core.Conjectures.rsa_fixed_point_unique {U : Type u_1} {W : Type u_2} (iterate : (U → W → ℝ) → U → W → ℝ) (α : ℝ) : α > 0 → Prop

          RSA iteration converges to a unique fixed point for any α > 0.
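A numeric sketch of the convergence claim, iterating the listener-to-listener map on a toy two-utterance lexicon (α fixed at 4 so convergence is numerically fast; all names are hypothetical):

```python
# Sketch: iterate the RSA listener -> speaker -> listener map and check
# that it numerically reaches a fixed point. Toy lexicon, hypothetical.
truth = {("some", "A"): 1, ("some", "B"): 1, ("all", "A"): 0, ("all", "B"): 1}
utterances, worlds = ["some", "all"], ["A", "B"]
ALPHA = 4.0

def step(listener):
    """One round: soft-max speaker from listener, then Bayesian listener."""
    speaker = {}
    for w in worlds:
        scores = {u: listener[u][w] ** ALPHA for u in utterances}
        z = sum(scores.values())
        speaker[w] = {u: s / z for u, s in scores.items()}
    new = {}
    for u in utterances:
        scores = {w: speaker[w][u] for w in worlds}
        z = sum(scores.values())
        new[u] = {w: s / z for w, s in scores.items()}
    return new

# Start from the literal listener and iterate.
listener = {u: {w: truth[(u, w)] / sum(truth[(u, w2)] for w2 in worlds)
                for w in worlds} for u in utterances}
for _ in range(200):
    nxt = step(listener)
    delta = max(abs(nxt[u][w] - listener[u][w])
                for u in utterances for w in worlds)
    listener = nxt

assert delta < 1e-9   # iteration has (numerically) reached a fixed point
```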

def Core.Conjectures.lexicon_refinement_monotone {U : Type u_1} {W : Type u_2} (meaning₁ meaning₂ : U → BProp W) (L1 : (U → BProp W) → U → W → ℝ) : Prop

            Refining lexical meanings (shrinking denotations) can only strengthen RSA pragmatic inferences, never weaken them.
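One concrete instance of the claim, sketched with a toy lexicon: the refined lexicon shrinks the denotation of "some" to exclude the all-world, and L1's "not all" inference can only get stronger.

```python
# Sketch: refining a denotation never weakens the pragmatic inference.
# lex2 refines lex1 by removing the all-world from "some". Toy data.
utterances, worlds = ["some", "all"], ["A", "B"]   # A = some-not-all, B = all
lex1 = {("some", "A"): 1, ("some", "B"): 1, ("all", "A"): 0, ("all", "B"): 1}
lex2 = {("some", "A"): 1, ("some", "B"): 0, ("all", "A"): 0, ("all", "B"): 1}

def L1(lex, u):
    """One round of RSA (literal listener -> speaker -> pragmatic listener)."""
    def L0(u2):
        z = sum(lex[(u2, w)] for w in worlds)
        return {w: lex[(u2, w)] / z for w in worlds}
    def S1(w):
        scores = {u2: L0(u2)[w] for u2 in utterances}
        z = sum(scores.values())
        return {u2: s / z for u2, s in scores.items()}
    scores = {w: S1(w)[u] for w in worlds}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# The refined lexicon strengthens the "not all" inference from "some".
assert L1(lex2, "some")["A"] >= L1(lex1, "some")["A"]
```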

def Core.Conjectures.rsa_tropical_limit {U : Type u_1} {W : Type u_2} (S1 : U → W → ℝ) (bestResponse : U → W → ℝ) : Prop

              In the α → ∞ limit, soft-max RSA speaker converges to iterated best response (argmax / tropical semiring).
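A numeric sketch of the limit, with a hypothetical literal listener L0:

```python
# Sketch: the soft-max speaker S1 approaches the argmax ("tropical")
# best response as alpha grows. The literal listener table is a toy.
L0 = {"some": {"A": 0.5, "B": 0.5}, "all": {"A": 0.0, "B": 1.0}}
utterances = ["some", "all"]

def S1(w, alpha):
    """Soft-max speaker: P(u | w) proportional to L0(w | u)^alpha."""
    scores = {u: L0[u][w] ** alpha for u in utterances}
    z = sum(scores.values())
    return {u: s / z for u, s in scores.items()}

def best_response(w):
    """Argmax speaker: all mass on the most informative utterance."""
    best = max(utterances, key=lambda u: L0[u][w])
    return {u: 1.0 if u == best else 0.0 for u in utterances}

# At world B, "all" is strictly more informative, so S1 -> best response.
for u in utterances:
    assert abs(S1("B", alpha=50.0)[u] - best_response("B")[u]) < 1e-6
```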


                Neural-Symbolic Emergence #

                Can RSA-like pragmatic reasoning emerge from raw language model next-token distributions via appropriate coarse-graining?

def Core.Conjectures.rsa_from_coarsened_lm {U : Type u_1} {W : Type u_2} (coarsened L1 : U → W → ℝ) : Prop

                Coarsening a language model's token-level predictions into world-level meanings recovers RSA pragmatic distributions (approximately).
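A minimal sketch of the coarse-graining map itself. The token inventory and the token-to-world assignment below are hypothetical, not drawn from any actual model:

```python
# Sketch: coarse-grain a token-level next-token distribution into a
# world-level distribution by summing over tokens mapped to the same world.
token_probs = {"a": 0.30, "few": 0.25, "several": 0.05, "every": 0.40}
token_to_world = {"a": "some-not-all", "few": "some-not-all",
                  "several": "some-not-all", "every": "all"}

def coarsen(probs, mapping):
    """Push a token distribution forward along the token -> world map."""
    out = {}
    for tok, p in probs.items():
        out[mapping[tok]] = out.get(mapping[tok], 0.0) + p
    return out

world_probs = coarsen(token_probs, token_to_world)
assert abs(world_probs["some-not-all"] - 0.60) < 1e-12
assert abs(world_probs["all"] - 0.40) < 1e-12
```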


                  Almog Independence Conjecture #

The three mechanisms of direct reference (designation, singular proposition, referential use) are empirically independent: natural language supplies expressions exercising every non-empty subset of them.

                  See Semantics.Reference.Almog2014.designation_indep_singularProp etc. for the formal content.

def Core.Conjectures.almog_independence_conjecture (Mechanism : Type u_1) (exprs : List (List Mechanism)) : Prop

                  Almog's independence thesis: for any two of the three mechanisms, there exists an expression exhibiting one but not the other. Stated abstractly — the formal witness is in Almog2014.lean.
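The pairwise-independence property can be sketched as a checkable condition on an expression inventory. The inventory below is an illustrative placeholder, not Almog's actual classification (the formal witness lives in Almog2014.lean):

```python
from itertools import permutations

# Sketch: for every ordered pair of distinct mechanisms, some expression
# exhibits the first but not the second. Inventory is hypothetical.
MECHANISMS = ["designation", "singular_proposition", "referential_use"]
exprs = {
    "expr_a": {"designation"},
    "expr_b": {"singular_proposition"},
    "expr_c": {"referential_use"},
    "expr_d": {"designation", "singular_proposition", "referential_use"},
}

def independent(m1, m2):
    """Some expression exhibits m1 but not m2."""
    return any(m1 in ms and m2 not in ms for ms in exprs.values())

for m1, m2 in permutations(MECHANISMS, 2):
    assert independent(m1, m2), (m1, m2)
```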


                    Phase-Bounded Exhaustification #

Phases as local computation domains for pragmatic inference. @cite{charlow-2014}: scope islands are evaluation boundaries. @cite{chierchia-fox-spector-2012}: Exh applies at scope positions. Hypothesis: phase boundaries delimit where Exh/RSA applies.

def Core.Conjectures.exh_at_phase_boundaries {U : Type u_1} {W : Type u_2} (exh_local exh_global : U → W → Prop) (phase_bounded : Prop) : Prop

                    Exh applies at phase boundaries: alternatives are evaluated within the phase domain, not globally.

                    If computation is phase-bounded, then local exhaustification (within a phase) and global exhaustification (across the whole structure) should agree within a phase domain.

def Core.Conjectures.rsa_phase_locality {U : Type u_1} {W : Type u_2} (S1_local S1_global : U → W → ℝ) (same_phase : U → U → Prop) : Prop

                      Phase-bounded RSA: pragmatic computation is local to phases. S1 optimizes within the current phase, not globally.

                      If two utterances are in the same phase, S1's local computation (within the phase) matches S1's global computation.

def Core.Conjectures.phase_bounded_alternatives {U : Type u_1} (local_alts global_alts : U → List U) (in_same_phase : U → U → Prop) : Prop

                        Phase-bounded alternative computation: alternatives for an expression are computed from material within the same phase, not globally.

                        This connects to @cite{fox-katzir-2011}: the set of alternatives depends on what's locally available.
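A minimal sketch of phase-restricted vs. global alternative sets, with a hypothetical phase assignment:

```python
# Sketch: phase-bounded vs global alternatives. The phase assignment and
# utterance inventory are hypothetical.
phase_of = {"some": 1, "all": 1, "or": 2, "and": 2}
all_utterances = list(phase_of)

def global_alts(u):
    """Alternatives computed over the whole structure."""
    return [a for a in all_utterances if a != u]

def local_alts(u):
    """Alternatives restricted to material in the same phase."""
    return [a for a in all_utterances if a != u and phase_of[a] == phase_of[u]]

# Within its phase, "some" competes only with "all".
assert local_alts("some") == ["all"]
# Phase-bounded alternatives are always a subset of the global ones.
assert set(local_alts("some")) <= set(global_alts("some"))
```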


                          Simplicity Explains Semantic Universals #

                          @cite{van-de-pol-etal-2023}: quantifiers satisfying the Barwise & Cooper universals (conservativity, quantity, monotonicity) have shorter minimal description length, measured by Lempel-Ziv complexity on truth-table representations.

                          Formal content: Semantics.Lexical.Determiner.Quantifier.SatisfiesUniversals
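The complexity comparison can be sketched numerically, using `zlib` (LZ77-based) as a stand-in for the paper's Lempel-Ziv measure. The truth-table encodings below are toys: a monotone threshold quantifier yields a highly regular table, contrasted with a random (universal-violating) one.

```python
import random
import zlib

# Sketch: regular (universal-satisfying) truth tables compress better than
# irregular ones. zlib is a proxy for LZ complexity; data is hypothetical.
random.seed(0)
n = 4096  # number of model points in some fixed enumeration

# Monotone cutoff quantifier: true below a threshold, false above it.
regular = b"\x01" * (n // 2) + b"\x00" * (n // 2)
# A quantifier with no exploitable structure: random truth values.
irregular = bytes(random.getrandbits(1) for _ in range(n))

assert len(zlib.compress(regular)) < len(zlib.compress(irregular))
```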

def Core.Conjectures.simplicity_explains_universals (Q : Type u_1) (satisfies_universals : Q → Prop) (complexity : Q → ℝ) : Prop

                          Quantifiers satisfying the B&C semantic universals have strictly lower complexity than those violating them, across multiple complexity measures.

                          Measures: Lempel-Ziv complexity (LZ), minimal description length (MDL) in a language-of-thought grammar.

                          The strongest effect is for monotonicity, then conservativity; quantity shows a weaker but robust effect.

def Core.Conjectures.monotonicity_strongest_predictor (Q : Type u_1) (is_monotone is_conservative is_quantity : Q → Prop) (complexity : Q → ℝ) : Prop

                            Monotonicity is the strongest predictor of simplicity, stronger than conservativity or quantity alone.


                              O-Corner Gap #

                              Natural languages systematically lexicalize three corners of the Square of Opposition but leave the O-corner (particular negative) unlexicalized:

| Corner | Quantifier | Modal           | Lexicalized? |
|--------|------------|-----------------|--------------|
| A      | every      | must            | yes          |
| E      | no         | can't           | yes          |
| I      | some       | can             | yes          |
| O      | not-every  | doesn't have to | no           |

                              The O-corner is always expressed periphrastically (outer negation of A: "not every", "doesn't have to"). @cite{horn-2001} argues this gap is pragmatically explained: the scalar implicature of I (some → not all) recovers O's content, making a dedicated lexical item for O redundant.

                              See Core.SquareOfOpposition for the formal square infrastructure. See Implicature.ScalarImplicatures for the some → not-all derivation.

def Core.Conjectures.o_corner_gap (Corner : Type u_1) (lexicalized : Corner → Prop) (A E I O : Corner) : Prop

                              The O-corner of the Square of Opposition is systematically not lexicalized in natural languages. A, E, I have dedicated lexical items (every/no/some, must/can't/can) but O is expressed only as ¬A.

def Core.Conjectures.o_corner_pragmatic_explanation (Utt : Type u_1) (meaning : Utt → Prop) (I_utt : Utt) (O_content : Prop) (_scalar_implicature_of_I : meaning I_utt → O_content) : Prop

                                The pragmatic explanation for the O-corner gap: scalar implicature of the I-corner recovers the O-corner's content, making lexicalization of O redundant.

                                Using the weak scalar term (I = "some") implicates the negation of the strong term (¬A = "not all" = O). Since O is always recoverable from I via Gricean reasoning, there is no communicative pressure to lexicalize it.

                                Reference: @cite{horn-2001}, A Natural History of Negation, §4.5.
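The some-to-not-all derivation can be sketched directly: exhaustify "some" over the {some, all} scale and check that the strengthened meaning entails the O-corner content. The three-world model is a toy:

```python
# Sketch: exh over the {some, all} scale recovers the O-corner ("not all").
# The world inventory and truth tables are hypothetical toys.
worlds = ["none", "some-not-all", "all"]
truth = {"some": {"none": False, "some-not-all": True, "all": True},
         "all":  {"none": False, "some-not-all": False, "all": True}}

def exh(u, alternatives):
    """u is true and every strictly stronger alternative is false."""
    def stronger(a):  # a entails u, but u does not entail a
        return (all(truth[u][w] for w in worlds if truth[a][w])
                and any(truth[u][w] and not truth[a][w] for w in worlds))
    return {w: truth[u][w]
               and not any(truth[a][w] for a in alternatives if stronger(a))
            for w in worlds}

o_corner = {w: not truth["all"][w] for w in worlds}   # "not all"
strengthened_some = exh("some", ["all"])

# exh(some) = some-and-not-all, which entails the O-corner content.
assert all(o_corner[w] for w in worlds if strengthened_some[w])
```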
