Linglib.Core.Agent.BayesianUpdate

Bayesian Observation Models and Posterior Update

General-purpose infrastructure for Bayesian reasoning with stochastic observations. An ObservationModel specifies how experiments generate observations in different worlds; posterior computes the Bayesian update.

Key Results

Usage

This module is imported by ExperimentDesign.lean (for EIG computation) and can be imported independently wherever Bayesian updating is needed.

structure Core.BayesianUpdate.ObservationModel (W : Type u_1) (E : Type u_2) (O : Type u_3) [Fintype O] :
Type (max (max u_1 u_2) u_3)

An observation model: how experiments generate observations in different worlds. P(o|w,e) — the probability of observing o when the true world is w and experiment e is conducted.

  • likelihood : W → E → O → ℝ

    Likelihood: P(o|w,e)

  • likelihood_nonneg (w : W) (e : E) (o : O) : 0 ≤ self.likelihood w e o

    Likelihood is non-negative

  • likelihood_sum (w : W) (e : E) : ∑ o : O, self.likelihood w e o = 1

    Likelihood sums to 1 for each (w,e)

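As a sketch of how the structure might be instantiated (the name `noisySensor`, the `Unit` experiment type, and the 0.9/0.1 rates are hypothetical, not part of this module):

```lean
open Core.BayesianUpdate in
-- Hypothetical: a binary world observed through a symmetric noisy
-- channel with error rate 0.1; `Unit` because there is only one
-- possible experiment. Proof obligations are sketched with `sorry`.
noncomputable def noisySensor : ObservationModel Bool Unit Bool where
  likelihood w _ o := if o = w then 0.9 else 0.1
  likelihood_nonneg _ _ _ := by
    -- both branches (0.9 and 0.1) are non-negative
    sorry
  likelihood_sum w _ := by
    -- ∑ o : Bool, (if o = w then 0.9 else 0.1) = 0.9 + 0.1 = 1
    sorry
```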
noncomputable def Core.BayesianUpdate.marginalObs {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (e : E) (o : O) : ℝ

Marginal probability of observation o under experiment e: P(o|e) = Σ_w prior(w) · P(o|w,e)

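Assuming `marginalObs` is defined as the sum given in its docstring, a worked instance (the uniform prior and 0.9/0.1 likelihoods are illustrative):

```lean
-- Plausible unfolding, matching the docstring formula:
--   marginalObs om prior e o = ∑ w : W, prior w * om.likelihood w e o
--
-- Worked numbers: two worlds with prior 0.5 each, likelihoods 0.9 and
-- 0.1 for the same observation o:
--   P(o|e) = 0.5 · 0.9 + 0.5 · 0.1 = 0.5
```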
noncomputable def Core.BayesianUpdate.posterior {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (e : E) (o : O) : W → ℝ

Bayesian posterior after observing o under experiment e: P(w|o,e) = prior(w) · P(o|w,e) / P(o|e)

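With the same illustrative numbers (uniform prior over two worlds, likelihoods 0.9 and 0.1 for the observed o, hence P(o|e) = 0.5), the posterior shifts toward the world that predicted the observation:

```lean
-- Plausible unfolding, matching the docstring formula:
--   posterior om prior e o w
--     = prior w * om.likelihood w e o / marginalObs om prior e o
--
-- Worked numbers:
--   P(w |o,e) = (0.5 · 0.9) / 0.5 = 0.9   -- world predicting o at 0.9
--   P(w'|o,e) = (0.5 · 0.1) / 0.5 = 0.1   -- the other world
```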
theorem Core.BayesianUpdate.marginalObs_nonneg {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (hprior : ∀ (w : W), 0 ≤ prior w) (e : E) (o : O) :
0 ≤ marginalObs om prior e o

Marginal observation probability is non-negative when prior is.

theorem Core.BayesianUpdate.posterior_nonneg {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (hprior : ∀ (w : W), 0 ≤ prior w) (e : E) (o : O) (w : W) :
0 ≤ posterior om prior e o w

Posterior is non-negative when prior is.

theorem Core.BayesianUpdate.posterior_sum_one {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (_hprior : ∀ (w : W), 0 ≤ prior w) (e : E) (o : O) (hm : marginalObs om prior e o ≠ 0) :
∑ w : W, posterior om prior e o w = 1

Posterior sums to 1 when marginal is nonzero and prior is non-negative.

theorem Core.BayesianUpdate.posterior_marginalizes_to_prior {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype W] [Fintype O] (om : ObservationModel W E O) (prior : W → ℝ) (hprior : ∀ (w : W), 0 ≤ prior w) (e : E) (w : W) :
∑ o : O, marginalObs om prior e o * posterior om prior e o w = prior w

The law of total probability for posteriors: marginalized posteriors reconstruct the prior.

Σ_o P(o|e) · P(w|o,e) = prior(w)

This is the key identity underlying EIG non-negativity.
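A proof sketch, assuming `posterior` divides the joint term by the marginal as in its docstring:

```lean
-- ∑_o P(o|e) · P(w|o,e)
--   = ∑_o P(o|e) · (prior(w) · P(o|w,e) / P(o|e))
--   = ∑_o prior(w) · P(o|w,e)          -- P(o|e) cancels when nonzero
--   = prior(w) · ∑_o P(o|w,e)
--   = prior(w)                          -- likelihood_sum
--
-- For observations with P(o|e) = 0 the product P(o|e) · P(w|o,e) is 0,
-- and since P(o|e) is a sum of non-negative terms prior(w) · P(o|w,e)
-- (by hprior and likelihood_nonneg), each of those terms is 0 as well,
-- so such o contribute nothing to either side.
```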

def Core.BayesianUpdate.deterministicObs {W : Type u_1} {E : Type u_2} {O : Type u_3} [Fintype O] [DecidableEq O] (classify : W → O) : ObservationModel W E O

A deterministic observation model from a classifier: each world produces exactly the observation corresponding to its classification.

This is the partition-cell case: classify assigns each world to a cell, and the observation reveals which cell the world belongs to.

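A plausible unfolding (hypothetical reconstruction; the defining equation is not shown in this rendering): the likelihood is the indicator of the classification, independent of the experiment:

```lean
-- (deterministicObs classify).likelihood w e o =
--   if classify w = o then 1 else 0
--
-- Non-negativity is immediate (each branch is 0 or 1), and for each
-- world w the sum over o : O collapses to the single observation
-- classify w, so the likelihood sums to 1. [DecidableEq O] is what
-- makes the `if` well-defined.
```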