Probabilistic Answerhood #
Answerhood in terms of probability changes, following @cite{thomas-2026}, "A probabilistic, question-based approach to additivity" (cf. the partition semantics of @cite{groenendijk-stokhof-1984}).
Core Definitions #
Definition 61 - Relevance: A proposition p is relevant to question Q iff p changes the probability of some alternative:
Relevant(p, Q) ≡ ∃A ∈ Q: P(A|p) ≠ P(A)
Definition 62 - Probabilistic Answerhood: p probabilistically answers Q iff p raises the probability of some alternative:
ProbAnswers(p, Q) ≡ ∃A ∈ Q: P(A|p) > P(A)
Definition 63 - Evidences More Strongly: R evidences A more strongly than R' iff:
EvidencesMoreStrongly(R, R', A) ≡ P(A|info(R)) > P(A|info(R'))
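The three definitions above can be sketched executably over a four-world toy model. This is a hedged illustration: the names `probOf`, `condProb`, `relevant`, `probAnswers`, and `evidencesMoreStrongly` are illustrative stand-ins, not the module's actual identifiers.

```lean
abbrev Prop' := Nat → Bool           -- propositions as characteristic functions
def worlds : List Nat := [0, 1, 2, 3]

def probOf (prior : Nat → Rat) (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + prior w) 0

def condProb (prior : Nat → Rat) (a c : Prop') : Rat :=
  let pc := probOf prior c
  if pc > 0 then probOf prior (fun w => a w && c w) / pc else 0

def uniform (_ : Nat) : Rat := 1/4

-- Definition 61: p is relevant to Q iff it moves some alternative.
def relevant (prior : Nat → Rat) (p : Prop') (q : List Prop') : Bool :=
  q.any fun a => condProb prior a p != probOf prior a

-- Definition 62 (condition (a)): p raises some alternative.
def probAnswers (prior : Nat → Rat) (p : Prop') (q : List Prop') : Bool :=
  q.any fun a => condProb prior a p > probOf prior a

-- Definition 63: r evidences a more strongly than r'.
def evidencesMoreStrongly (prior : Nat → Rat) (r r' a : Prop') : Bool :=
  condProb prior a r > condProb prior a r'

-- Binary question "is the world below 2?", and evidence true only of world 0.
def qLow  : Prop' := fun w => w < 2
def qHigh : Prop' := fun w => 2 ≤ w
def pZero : Prop' := fun w => w == 0

#eval relevant uniform pZero [qLow, qHigh]           -- true
#eval probAnswers uniform pZero [qLow, qHigh]        -- true: P(qLow | pZero) = 1 > 1/2
#eval evidencesMoreStrongly uniform pZero qLow qLow  -- false: same evidence twice
```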
These probabilistic notions of answerhood are central to Thomas's analysis of additive particles like "too", "also", "either".
Connection to Mention-Some #
Probabilistic answerhood generalizes the mention-some/mention-all distinction:
- Under uniform priors, probabilistic answerhood reduces to standard partial answerhood
- Non-uniform priors allow context-sensitive answerhood
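The context-sensitivity point can be made concrete: the same proposition can fail to answer under a uniform prior yet answer under a skewed one. A hedged sketch (helper names are illustrative, not the module's):

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]

def probOf (prior : Nat → Rat) (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + prior w) 0

def condProb (prior : Nat → Rat) (a c : Prop') : Rat :=
  let pc := probOf prior c
  if pc > 0 then probOf prior (fun w => a w && c w) / pc else 0

def probAnswers (prior : Nat → Rat) (p : Prop') (q : List Prop') : Bool :=
  q.any fun a => condProb prior a p > probOf prior a

def qLow  : Prop' := fun w => w < 2
def qHigh : Prop' := fun w => 2 ≤ w
-- p is "balanced": one world from each alternative.
def p : Prop' := fun w => w == 0 || w == 2

def uniform (_ : Nat) : Rat := 1/4
def skewed : Nat → Rat
  | 0 => 1/2
  | _ => 1/6

#eval probAnswers uniform p [qLow, qHigh]  -- false: P(qLow | p) = 1/2 = P(qLow)
#eval probAnswers skewed  p [qLow, qHigh]  -- true:  P(qLow | p) = 3/4 > 2/3
```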
A prior distribution, represented as a normalized mass function over worlds.
Non-negativity and normalization (∑ w, mass w = 1) proofs are bundled
via Core.FinitePMF. Use prior w to access the mass at world w
(via CoeFun).
Instances For
Compute P(φ) - probability that φ is true.
This sums the probability mass over worlds where φ holds.
Equations
Instances For
Compute P(A | C) - conditional probability of A given C.
Returns P(A ∧ C) / P(C) when P(C) > 0, otherwise 0.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Probability of an info state being actual.
P(σ) = probability that the actual world is in σ.
This is identical to probOfProp — a convenience alias using info state
vocabulary rather than proposition vocabulary.
Equations
Instances For
Relevance: P changes the probability of some alternative in Q.
Simplified from @cite{thomas-2026} Definition 61 for the case where R is a declarative with a single alternative P. Thomas's full definition quantifies over alternatives of both R and S: ∃A ∈ alt(R), A' ∈ alt(S) s.t. P_L(A'|A) ≠ P_L(A'). For a declarative R whose sole alternative is P, this reduces to: ∃A' ∈ alt(Q): P(A'|P) ≠ P(A').
Equations
- One or more equations did not get rendered due to their size.
Instances For
Non-relevance: P doesn't change any alternative's probability.
Equations
Instances For
Probabilistic answerhood (simplified): P raises the probability of some alternative.
Simplified from @cite{thomas-2026} Definition 62, which additionally requires that the witnessed resolution is raised MORE than any other (in ratio terms): (b) for all A' ⊆ alt(Q), if ∩A' ⊈ ∩A, then P(∩A|info(R))/P(∩A) > P(∩A'|info(R))/P(∩A').
This implementation captures only condition (a): ∃A ∈ Q: P(A|P) > P(A). The full definition is stronger — it requires the supported answer to be maximally supported. For the cases we use (binary QUDs, single-alternative declaratives), the simplified version suffices.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Full probabilistic answerhood per @cite{thomas-2026} Definition 62.
R ANSWERS Q iff there is a nonempty A ⊆ alt(Q) such that:
(a) P(∩A | info(R)) > P(∩A)
(b) ∀A' ⊆ alt(Q), if ∩A' ⊈ ∩A then P(∩A|info(R))/P(∩A) > P(∩A'|info(R))/P(∩A')
Condition (b) ensures that A is the maximally supported resolution: no other resolution whose intersection isn't already contained in ∩A has a higher Bayes factor. This prevents a proposition from "answering" a question by accidentally raising two unrelated alternatives equally.
For binary QUDs (|alt(Q)| = 2), conditions (a) and (b) coincide with
probAnswers: raising P(H) necessarily lowers P(¬H), so the Bayes factor
condition is automatic. See probAnswersFull_eq_simple_binary.
Equations
- One or more equations did not get rendered due to their size.
Instances For
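The binary-QUD claim can be checked numerically: condition (a) for one alternative forces the Bayes-factor dominance of condition (b). A hedged sketch with illustrative helpers, not the module's definitions:

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]

def probOf (prior : Nat → Rat) (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + prior w) 0

def condProb (prior : Nat → Rat) (a c : Prop') : Rat :=
  let pc := probOf prior c
  if pc > 0 then probOf prior (fun w => a w && c w) / pc else 0

-- Bayes factor P(a | r) / P(a); 0 for a null-prior alternative.
def bayesFactor (prior : Nat → Rat) (a r : Prop') : Rat :=
  let pa := probOf prior a
  if pa > 0 then condProb prior a r / pa else 0

def uniform (_ : Nat) : Rat := 1/4
def qLow  : Prop' := fun w => w < 2
def qHigh : Prop' := fun w => 2 ≤ w
def pZero : Prop' := fun w => w == 0

-- Condition (a) holds for qLow, and the Bayes-factor dominance of (b)
-- comes for free: raising P(qLow) necessarily lowers P(qHigh).
#eval condProb uniform qLow pZero > probOf uniform qLow                 -- true: 1 > 1/2
#eval bayesFactor uniform qLow pZero > bayesFactor uniform qHigh pZero  -- true: 2 > 0
```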
The simplified probAnswers (condition (a) only) is equivalent to the
full Thomas (62) probAnswersFull for binary QUDs, under a normalized
probability distribution.
For binary {H, ¬H}, raising P(H) necessarily lowers P(¬H) (since P(H|p) + P(¬H|p) = 1 = P(H) + P(¬H)), so the Bayes factor for H automatically exceeds the Bayes factor for ¬H. The only nonempty subsets with non-trivial intersections are {H} and {¬H} (since ∩{H,¬H} = H ∩ ¬H = ∅ has P(∅) = 0).
Without normalization, the theorem is false: if ∑ prior < 1, both
alternatives can have their probability raised simultaneously with equal
Bayes factors, making probAnswersFull false while probAnswers is true.
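The failure mode under a subnormalized prior is easy to exhibit: because conditionalization renormalizes by P(p), both alternatives can rise with identical Bayes factors. A hedged sketch (illustrative helper names):

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]

def probOf (prior : Nat → Rat) (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + prior w) 0

def condProb (prior : Nat → Rat) (a c : Prop') : Rat :=
  let pc := probOf prior c
  if pc > 0 then probOf prior (fun w => a w && c w) / pc else 0

def half (_ : Nat) : Rat := 1/8   -- total mass 1/2, not 1

def qLow  : Prop' := fun w => w < 2
def qHigh : Prop' := fun w => 2 ≤ w
def p     : Prop' := fun w => w == 0 || w == 2

-- Both conditional probabilities exceed the (subnormalized) priors …
#eval condProb half qLow p > probOf half qLow    -- true: 1/2 > 1/4
#eval condProb half qHigh p > probOf half qHigh  -- true: 1/2 > 1/4
-- … with identical Bayes factors, so no alternative strictly dominates.
#eval condProb half qLow p / probOf half qLow ==
      condProb half qHigh p / probOf half qHigh  -- true: both equal 2
```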
Which alternative(s) are supported by P.
Returns the alternatives whose probability is raised by learning P.
Equations
- One or more equations did not get rendered due to their size.
Instances For
The maximally supported alternative: the one whose probability increases the most when P is learned.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Informational content of a resolving state.
For a single info state σ (representing a potential resolution), info(σ) is just σ itself: the proposition that the actual world is in σ.
For multiple resolving states, info({σ₁,..., σₙ}) is their union.
Equations
- Semantics.Questions.ProbabilisticAnswerhood.infoContent states w = states.any fun (σ : Discourse.InfoState W) => σ w
Instances For
Evidences more strongly: R evidences A more strongly than R' does.
Definition 63 from @cite{thomas-2026}:
EvidencesMoreStrongly(R, R', A) ≡ P(A|info(R)) > P(A|info(R'))
Used in the felicity conditions for additive particles: the prejacent combined with the antecedent must evidence some resolution more strongly than the antecedent alone.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Simpler version: single propositions instead of state lists.
Compares conditional probabilities P(A|R) > P(A|R'). This is equivalent to comparing Bayes factors P(A|R)/P(A) > P(A|R')/P(A), since the denominator P(A) is the same on both sides and cancels.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Compute how much evidence raises the probability of a conclusion.
This is P(A|E) - P(A), measuring the "boost" from learning E.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Evidence is positive if it raises the probability of the conclusion.
Equations
- Semantics.Questions.ProbabilisticAnswerhood.isPositiveEvidence evidence conclusion prior = decide (Semantics.Questions.ProbabilisticAnswerhood.evidentialBoost evidence conclusion prior > 0)
Instances For
Evidence is negative if it lowers the probability of the conclusion.
Equations
- Semantics.Questions.ProbabilisticAnswerhood.isNegativeEvidence evidence conclusion prior = decide (Semantics.Questions.ProbabilisticAnswerhood.evidentialBoost evidence conclusion prior < 0)
Instances For
Check if a prior is uniform over a world list.
For proving that probabilistic answerhood reduces to standard answerhood under uniform priors.
Equations
Instances For
Entailing an alternative guarantees probabilistic answerhood.
If P entails some alternative A (every P-world is an A-world) and A is not already certain, then learning P raises A's probability to 1, which exceeds the prior P(A) < 1. This gives probAnswers (not just relevance).
Note: the weaker condition of mere consistency (P ∩ A ≠ ∅) does NOT suffice — a proposition balanced across alternatives (e.g., W={0,1,2,3}, Q={{0,1},{2,3}}, P={0,2}) can be consistent with every alternative without changing any conditional probability.
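Both claims above can be checked executably, using the exact counterexample just given (W = {0,1,2,3}, Q = {{0,1},{2,3}}, P = {0,2}); helper names are illustrative, not the module's:

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]
def uniform (_ : Nat) : Rat := 1/4

def probOf (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + uniform w) 0

def condProb (a c : Prop') : Rat :=
  let pc := probOf c
  if pc > 0 then probOf (fun w => a w && c w) / pc else 0

def qLow  : Prop' := fun w => w < 2    -- alternative {0,1}
def qHigh : Prop' := fun w => 2 ≤ w    -- alternative {2,3}

-- {0} entails qLow: learning it drives P(qLow) to 1 > 1/2.
#eval condProb qLow (fun w => w == 0) > probOf qLow             -- true

-- {0,2} is consistent with both alternatives but moves neither.
#eval condProb qLow (fun w => w == 0 || w == 2) == probOf qLow  -- true: 1/2 = 1/2
```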
Check if the conjunction of two propositions provides stronger evidence than the first proposition alone.
This is the core of Thomas's analysis of additive particles: TOO(π) requires that ANT ∧ π evidences some resolution more strongly than ANT alone.
Equations
- One or more equations did not get rendered due to their size.
Instances For
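The additive-particle condition can be sketched as a numeric check: the antecedent conjoined with the prejacent evidences an alternative more strongly than the antecedent alone. The names below (`ant`, `pre`, and the helpers) are illustrative, not Thomas's notation:

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]
def uniform (_ : Nat) : Rat := 1/4

def probOf (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + uniform w) 0

def condProb (a c : Prop') : Rat :=
  let pc := probOf c
  if pc > 0 then probOf (fun w => a w && c w) / pc else 0

def qLow : Prop' := fun w => w < 2
def ant  : Prop' := fun w => w != 3   -- antecedent: rules out world 3
def pre  : Prop' := fun w => w != 2   -- prejacent:  rules out world 2

-- P(qLow | ant ∧ pre) = 1  >  P(qLow | ant) = 2/3
#eval condProb qLow (fun w => ant w && pre w) > condProb qLow ant  -- true
```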
Find resolutions that the conjunction evidences more strongly.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Check if there exists some resolution that is strengthened.
Equations
- Semantics.Questions.ProbabilisticAnswerhood.someResolutionStrengthened p1 p2 q prior = decide ((Semantics.Questions.ProbabilisticAnswerhood.strengthenedResolutions p1 p2 q prior).length > 0)
Instances For
Probabilistic mention-some: P gives a "partial" probabilistic answer.
P is a probabilistic mention-some answer to Q if:
- P raises the probability of some alternative(s)
- P doesn't resolve Q completely (doesn't make one alternative certain)
Equations
- One or more equations did not get rendered due to their size.
Instances For
Probabilistic mention-all: P resolves Q completely.
P is a probabilistic mention-all answer if learning P makes exactly one alternative have probability 1.
Equations
- One or more equations did not get rendered due to their size.
Instances For
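The two notions can be contrasted over a binary QUD: one proposition raises an alternative without settling it (mention-some), another makes an alternative certain (mention-all). A hedged sketch with illustrative definitions, not the module's:

```lean
abbrev Prop' := Nat → Bool
def worlds : List Nat := [0, 1, 2, 3]
def uniform (_ : Nat) : Rat := 1/4

def probOf (p : Prop') : Rat :=
  (worlds.filter p).foldl (fun acc w => acc + uniform w) 0

def condProb (a c : Prop') : Rat :=
  let pc := probOf c
  if pc > 0 then probOf (fun w => a w && c w) / pc else 0

def q : List Prop' := [fun w => w < 2, fun w => 2 ≤ w]

-- Raises some alternative, but leaves every alternative uncertain.
def mentionSome (p : Prop') : Bool :=
  (q.any fun a => condProb a p > probOf a) &&
  (q.all fun a => condProb a p < 1)

-- Makes exactly one alternative certain.
def mentionAll (p : Prop') : Bool :=
  (q.filter fun a => condProb a p == 1).length == 1

#eval mentionSome (fun w => w != 3)  -- true: P(qLow | p) = 2/3, P(qHigh | p) = 1/3
#eval mentionAll  (fun w => w == 0)  -- true: P(qLow | p) = 1
```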
Probabilistic answerhood implies relevance.
If P raises the probability of some alternative, then P changes the probability of that alternative.
Stronger evidence is positive evidence.
If R evidences A more strongly than R', then R is positive evidence for A relative to R'.
The resolution of Q evidenced by R — Q|_R from @cite{thomas-2026} Def. 62.
Returns the alternative A ∈ alt(Q) that maximizes the Bayes factor P(A|info(R)) / P(A). This is the ∩A from Definition 62 such that A's Bayes factor strictly dominates all non-contained alternatives.
For single-alternative resolutions (the common case), this reduces to the alternative with the highest conditional probability increase.
Equations
- One or more equations did not get rendered due to their size.
Instances For
ℚ↔ℝ Probability Bridge #
ProbabilisticAnswerhood uses Prior W := W → ℚ (exact rational arithmetic),
while EntropyNPIs uses W → ℝ (for Mathlib's negMulLog/Real.log). To
connect the two, cast via fun w => (prior w : ℝ). The identity that
probOfProp prior φ, cast to ℝ, equals ∑ w, if φ w then (prior w : ℝ) else 0
follows from Rat.cast_sum.
A formal bridge theorem (probOfProp_cast_eq_cellProb) is deferred until
both modules share an import of Mathlib.Data.Real.Basic.