The set of closest A-worlds to w according to a similarity ordering.
In @cite{lewis-1973}'s notation: min_{≤_w}(A) = {w' ∈ A : ¬∃w'' ∈ A. w'' <_w w'}
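In Lean-style notation, this can be sketched as follows (a minimal illustration with hypothetical names, not the module's actual signature; `le w` plays the role of `≤_w`, and `w'' <_w w'` is spelled out as `le w w'' w' ∧ ¬ le w w' w''`):

```lean
/-- Sketch: the closest A-worlds at w, i.e. the ≤_w-minimal members of A.
    `le w w₁ w₂` reads "w₁ is at least as similar to w as w₂". -/
def closestWorlds {W : Type} (le : W → W → W → Prop)
    (w : W) (A : W → Prop) : W → Prop :=
  fun w' => A w' ∧ ¬ ∃ w'', A w'' ∧ le w w'' w' ∧ ¬ le w w' w''
```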
Computable version for finite world spaces.
Universal Theory #
@cite{ramotowska-santorio-2025}
The standard possible-worlds analysis: counterfactuals universally quantify over the closest antecedent-worlds.
"If A were, B would" is true at w iff every closest A-world satisfies B.
This predicts:
- "Every student would pass if they studied" is FALSE if even ONE closest study-world for some student doesn't have them passing
Universal counterfactual semantics (@cite{lewis-1973}/Kratzer).
True at w iff all closest A-worlds satisfy B.
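The truth conditions can be sketched as a Prop-valued definition (illustrative names only; `closest` is taken as a parameter rather than derived from a similarity ordering):

```lean
/-- Sketch: universal (Lewis/Kratzer) counterfactual — true at w iff
    every closest A-world satisfies B. -/
def universalCF {W : Type} (closest : W → (W → Prop) → (W → Prop))
    (A B : W → Prop) (w : W) : Prop :=
  ∀ w', closest w A w' → B w'
```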
Decidable version.
Selectional Theory #
Stalnaker's approach with supervaluation over ties:
- A selection function picks THE closest antecedent-world
- When multiple worlds are equally close (ties), supervaluate over all choices
Three-valued semantics:
- True: B holds at s(w, A) for all legitimate selection functions s
- False: B fails at s(w, A) for all legitimate selection functions s
- Indeterminate: B holds for some s but not others
This predicts:
- "Every student would pass if they studied" is INDETERMINATE when some students' closest study-worlds have passing, others don't
Selectional counterfactual semantics (Stalnaker + supervaluation).
Returns a three-valued truth value based on agreement across selection functions.
Conditional Excluded Middle (CEM) holds for selectional semantics.
(A □→ B) ∨ (A □→ ¬B) is always true or gap, never false.
Proof sketch:
- Empty closest: both vacuously true → or = true
- All B: φ = true → or = true
- All ¬B: ψ = true → or = true
- Mixed: both gap → or = gap
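The case analysis above can be run on a toy three-valued type (illustrative definitions, not the project's actual `Truth3` or `selectionalCounterfactual`):

```lean
/-- Toy three-valued type for the CEM case analysis. -/
inductive T3 where
  | true
  | false
  | gap
deriving Repr

/-- Three-valued disjunction: true wins, then gap, else false. -/
def T3.or : T3 → T3 → T3
  | .true, _ => .true
  | _, .true => .true
  | .false, .false => .false
  | _, _ => .gap

/-- Selectional counterfactual over a finite list of closest worlds. -/
def selCF {W : Type} (closest : List W) (B : W → Bool) : T3 :=
  if closest.all B then .true
  else if closest.all (fun w => !B w) then .false
  else .gap

/-- CEM instance: (A □→ B) ∨ (A □→ ¬B). -/
def cem {W : Type} (closest : List W) (B : W → Bool) : T3 :=
  (selCF closest B).or (selCF closest (fun w => !B w))

#eval cem [0, 1] (fun w => w == 0)         -- mixed closest worlds: T3.gap
#eval cem ([] : List Nat) (fun _ => true)  -- empty closest: T3.true
```

Running the two evaluations walks the "mixed" and "empty" branches of the proof sketch; the all-B and all-¬B branches return `.true` by the first or second disjunct, so `cem` never returns `.false`.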
Homogeneity Theory #
Universal quantification PLUS a homogeneity presupposition:
- Presupposes: all closest A-worlds agree on B (all true or all false)
- Asserts: they all satisfy B (given the presupposition)
When the presupposition fails (mixed closest worlds), the sentence is neither true nor false (presupposition failure).
This predicts:
- "Every student would pass if they studied" has PRESUPPOSITION FAILURE when students' closest study-worlds are mixed on passing
Homogeneity counterfactual semantics.
Presupposes that all closest A-worlds agree on B.
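One way to sketch this is with `Option Bool`, where `none` models presupposition failure (toy names, assuming the closest worlds are given as a list):

```lean
/-- Sketch: homogeneity semantics as a partial (presuppositional) value.
    `none` = presupposition failure on mixed closest worlds. -/
def homCF {W : Type} (closest : List W) (B : W → Bool) : Option Bool :=
  if closest.all B then some true
  else if closest.all (fun w => !B w) then some false
  else none  -- mixed closest worlds: presupposition failure

#eval homCF [0, 1, 2] (fun _ => true)  -- some true
#eval homCF [0, 1] (fun w => w == 0)   -- none (mixed)

-- Negation Swap in the non-vacuous case:
-- (homCF c B).map (fun b => !b) = homCF c (fun w => !B w) when c ≠ [].
```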
Presupposition Preservation for homogeneity semantics.
If the presupposition is satisfied for (A □→ B), it's also satisfied for (A □→ ¬B). This is because homogeneity for B (all true or all false) implies homogeneity for ¬B.
Negation Swap holds for homogeneity semantics in the non-vacuous case.
When closest worlds are non-empty and presupposition is satisfied: assertion(A □→ B).map (¬·) = assertion(A □→ ¬B)
Projection Duality: Why Strength Matters #
The deeper insight behind Ramotowska et al.'s findings is that quantifier strength corresponds to a fundamental duality in how semantic values project through operators:
Universal-like operators (every, no, □, ∧):
- Conjunctive projection: need all components to succeed
- Sensitive to negative witnesses (one failure leads to overall failure)
- Fragile under heterogeneity
Existential-like operators (some, not-every, ◇, ∨):
- Disjunctive projection: need one component to succeed
- Sensitive to positive witnesses (one success leads to overall success)
- Robust under heterogeneity
This duality manifests across natural language semantics:
| Domain | Universal-like | Existential-like |
|---|---|---|
| Quantified counterfactuals | Reject on mixed | Accept on mixed |
| Presupposition projection | Filtering (universal) | Existential accomm. |
| Homogeneity | "The Xs P" needs all | "Some Xs P" needs one |
| Free Choice | □(A∨B) → □A∧□B | ◇(A∨B) → ◇A∧◇B |
| Monotonicity | Downward entailing | Upward entailing |
The selectional theory captures this because three-valued logic with supervaluation naturally implements this projection duality.
Quantifier strength as projection type.
Strong quantifiers use conjunctive projection; weak use disjunctive.
The Projection Duality Theorem.
For a list of three-valued results:
- Conjunctive projection: TRUE iff all TRUE, FALSE iff any FALSE
- Disjunctive projection: TRUE iff any TRUE, FALSE iff all FALSE
This directly explains the strength effect: conjunctive operators (strong) are fragile under heterogeneity, disjunctive operators (weak) are robust.
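Both projection modes can be written out as a toy sketch (hypothetical names standing in for the elided equations):

```lean
inductive T3 where
  | true
  | false
  | gap
deriving Repr, BEq

/-- Conjunctive projection: TRUE iff all TRUE, FALSE iff any FALSE. -/
def conjProj (xs : List T3) : T3 :=
  if xs.contains .false then .false
  else if xs.all (· == .true) then .true
  else .gap

/-- Disjunctive projection: TRUE iff any TRUE, FALSE iff all FALSE. -/
def disjProj (xs : List T3) : T3 :=
  if xs.contains .true then .true
  else if xs.all (· == .false) then .false
  else .gap

#eval conjProj [.true, .false, .true]   -- T3.false (fragile: one false suffices)
#eval disjProj [.false, .true, .false]  -- T3.true  (robust: one true suffices)
```

The two `#eval`s are exactly the fragility and robustness lemmas stated below: a single `.false` forces the conjunctive result down, and a single `.true` forces the disjunctive result up.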
Strength effect as projection duality.
Strong quantifiers use conjunctive projection; weak use disjunctive.
Conjunctive projection is fragile: one false element yields false.
When any element is false, conjunctive projection cannot return true.
Disjunctive projection is robust: one true element yields true.
When any element is true, disjunctive projection returns true.
Galois Connection: Why Duality? #
The projection duality is an instance of the adjoint functor relationship:
∃ ⊣ Δ ⊣ ∀
where Δ is the diagonal/weakening functor.
Category-Theoretic Foundation #
Given projection π: D × W → W:
- ∃_π is LEFT adjoint to pullback π*
- ∀_π is RIGHT adjoint to pullback π*
The RAPL/LAPC principle:
- Left adjoints preserve colimits (joins): ∃ is robust
- Right adjoints preserve limits (meets): ∀ is fragile
In the Truth Value Lattice #
For the three-valued lattice (false < indet < true):
- Conjunctive = infimum (⋀) = meet = limit
- Disjunctive = supremum (⋁) = join = colimit
The quantifier-projection correspondence:
| Quantifier Type | Lattice Op | Adjoint | Projection |
|---|---|---|---|
| every, no | ⋀ (meet) | RIGHT | Fragile |
| some, not-every | ⋁ (join) | LEFT | Robust |
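A minimal encoding of the lattice (illustrative only) makes the meet/join reading concrete: under `false < indet < true`, meet and join are just rank-minimum and rank-maximum.

```lean
inductive T3 where
  | false
  | indet
  | true
deriving Repr

/-- Rank in the lattice order false < indet < true. -/
def T3.rank : T3 → Nat
  | .false => 0
  | .indet => 1
  | .true  => 2

/-- Meet (⋀) and join (⋁) via rank min/max. -/
def T3.meet (a b : T3) : T3 := if a.rank ≤ b.rank then a else b
def T3.join (a b : T3) : T3 := if a.rank ≤ b.rank then b else a

#eval T3.meet .true .indet   -- T3.indet (meet pulls values down: fragile)
#eval T3.join .false .indet  -- T3.indet (join pulls values up: robust)
```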
Linguistic Consequence #
Natural language quantifiers inherit their projection behavior from their categorical status as adjoints. This explains cross-linguistic universals:
- All languages have robust existentials and fragile universals
- Polarity doesn't matter (no is strong like every) because both are ∀-like
- Strength = adjoint type, not logical polarity
Connection to Other Phenomena #
The same adjoint duality explains:
- Presupposition projection: Universal presup (fragile) vs existential (robust)
- Free Choice: □(A∨B) → □A∧□B (right adjoint distributes over meet)
- NPI licensing: DE = right adjoint composition = license; UE = left = block
- Homogeneity: "The Xs" = hidden universal = fragile under heterogeneity
The Ramotowska et al. finding that strength determines counterfactual judgments is thus a reflection of deep categorical structure in semantics.
Duality Infrastructure #
The projection duality used here is an instance of the general `Core.Duality` infrastructure. See `Core/Duality.lean` for:
- `DualityType`: existential (robust) vs universal (fragile)
- `Truth3`: three-valued truth with join/meet operations
- `existential_robust`: one true → result true (left adjoint property)
- `universal_fragile`: one false → result false (right adjoint property)
- `Quantifier.duality`: classification of quantifiers by adjoint type
The counterfactual case is one instance of this general principle, which also explains:
- Presupposition projection
- Homogeneity in plurals
- NPI licensing
- Free choice inferences
Quantifier Embedding #
The three theories make different predictions when counterfactuals are embedded under quantifiers in mixed scenarios (some individuals satisfy the consequent, others don't).
- Universal theory: each individual's closest worlds include both B and ¬B worlds, so each individual CF is FALSE. The quantifier then operates over all-false values.
- Selectional theory: within each selected world, individual outcomes are Boolean (some true, some false). The quantifier evaluates within the world, giving determinate results. All selection functions agree.
- Homogeneity theory: each individual CF has presupposition failure (closest worlds disagree), so all quantified forms are undefined.
| Sentence | Universal | Selectional | Homogeneity |
|---|---|---|---|
| Every d □→ | FALSE | FALSE | PRESUP FAIL |
| Some d □→ | FALSE | TRUE | PRESUP FAIL |
| No d □→ | TRUE | FALSE | PRESUP FAIL |
| Not every d □→ | TRUE | TRUE | PRESUP FAIL |
The universal and selectional theories agree on "every" and "not every" but DISAGREE on "some" and "no".
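The whole table can be reproduced by a self-contained toy computation (hypothetical definitions, not the module's API): individual counterfactual values are fed to the two projections, with negation applied for "no" and "not every".

```lean
inductive T3 where
  | true
  | false
  | gap
deriving Repr, BEq

def T3.not : T3 → T3
  | .true  => .false
  | .false => .true
  | .gap   => .gap

def conjProj (xs : List T3) : T3 :=
  if xs.contains .false then .false
  else if xs.all (· == .true) then .true else .gap

def disjProj (xs : List T3) : T3 :=
  if xs.contains .true then .true
  else if xs.all (· == .false) then .false else .gap

-- Selectional theory, mixed scenario: individual CFs are Boolean and mixed.
def selectional : List T3 := [.true, .false, .true]
-- Universal theory, mixed scenario: every individual CF is false.
def universal : List T3 := [.false, .false, .false]

#eval conjProj selectional                 -- every:     T3.false
#eval disjProj selectional                 -- some:      T3.true
#eval conjProj (selectional.map T3.not)    -- no:        T3.false
#eval disjProj (selectional.map T3.not)    -- not every: T3.true

#eval conjProj universal                   -- every:     T3.false
#eval disjProj universal                   -- some:      T3.false
#eval conjProj (universal.map T3.not)      -- no:        T3.true
#eval disjProj (universal.map T3.not)      -- not every: T3.true
```

Since every input list here is Boolean-valued (no `.gap`), every projected result is determinate; the two theories visibly diverge only on "some" and "no".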
Evaluate embedded counterfactual under a quantifier via projection. Strong quantifiers (every, no) use conjunctive projection; weak quantifiers (some, not every) use disjunctive projection.
"No" quantifier: conjunctive projection over NEGATED individual results. "No X would V" = "Every X would NOT V" = conjunctive over ¬individual.
"Not every" quantifier: disjunctive projection over NEGATED individual results. "Not every X would V" = "∃X. ¬(X would V)" = disjunctive over ¬individual.
Selectional Theory: Embedded Determinacy #
The paper's key theoretical insight (§2.2.2): embedded selectional counterfactuals are DETERMINATE even though unembedded ones can be indeterminate. This is because the quantifier operates INSIDE the scope of the selection function — within each selected world, individual results are Bool (true/false), not Truth3.
In the win-some-lose-some lottery scenario, every candidate selection function selects a world with mixed outcomes. The quantifier evaluates WITHIN that world, giving a determinate result. Supervaluating over selection functions then preserves determinacy because all give the same quantified result.
Embedded selectional determinacy: when individual results are all determinate (Bool), the projected result is always determinate.
This is the formal content of the paper's claim that embedded selectional counterfactuals are determinate in mixed scenarios.
Strength determines pattern: under selectional semantics with mixed Bool individual results (some true, some false):
- Conjunctive projection (strong quantifiers) yields `.false`
- Disjunctive projection (weak quantifiers) yields `.true`
"No" in mixed scenario gives FALSE under selectional semantics.
"No d would B if A" = ∀d.¬CF(d). In a mixed world where some individuals satisfy B and some don't, negating gives a mixed list of ¬results. Conjunctive projection of a mixed list is FALSE.
"Not every" in mixed scenario gives TRUE under selectional semantics.
"Not every d would B if A" = ∃d.¬CF(d). In a mixed world, negating gives a mixed list. Disjunctive projection of a mixed list is TRUE.
All four quantifiers in mixed scenarios: under selectional semantics with mixed Bool individual results, strong quantifiers (every, no) yield `.false` and weak quantifiers (some, not every) yield `.true`.
Universal Theory: All Individual Counterfactuals False #
Under the universal theory in a mixed scenario, EVERY individual counterfactual is false (because not all closest worlds satisfy the consequent for any given individual). The quantifier then operates over a list of all-false values.
This gives DIFFERENT predictions from the selectional theory:
- Universal: every=FALSE, some=FALSE, no=TRUE, not-every=TRUE
- Selectional: every=FALSE, some=TRUE, no=FALSE, not-every=TRUE
The theories agree on "every" and "not every" but DISAGREE on "some" and "no" — the key empirical discriminators.
Universal theory embedded predictions: when all individual counterfactuals are false (as the universal theory predicts in mixed scenarios), strong quantifiers give false and weak quantifiers ALSO give false.
Contrast with the selectional theory where weak quantifiers give true.
Grounding Selection Functions in Causal Models #
The selection function s(w, A) can be grounded via causal intervention:
s(w, A) = the world that results from intervening to make A true at w
This connects to @cite{nadathur-lauer-2020}:
- `normalDevelopment(s ⊕ {A = true})` gives the counterfactual A-world
- Counterfactual dependence (necessity) = selection-based conditionals

See `Theories/NadathurLauer2020/` for the causal model infrastructure.
Intervention-based selection: Select the world resulting from do(A).
Given a causal dynamics and situation, the selected A-world is the result of normal development after intervening to make A true.
Equations
- `Semantics.Conditionals.Counterfactual.interventionSelection dyn s antecedent = Core.StructuralEquationModel.normalDevelopment dyn (s.extend antecedent true)`
Counterfactual via intervention.
"If A were true, B would be true" using causal model semantics.
Equations
- `Semantics.Conditionals.Counterfactual.causalCounterfactual dyn s antecedent consequent = (Semantics.Conditionals.Counterfactual.interventionSelection dyn s antecedent).hasValue consequent true`
Causal counterfactual matches necessity test for negative antecedent.
"If A were false, B would be false" = A is necessary for B. This connects @cite{stalnaker-1968} selection to @cite{lewis-1973}/@cite{nadathur-lauer-2020} counterfactual dependence.
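A toy two-variable structural-equation model (hypothetical, far simpler than the NadathurLauer2020 infrastructure) illustrates the necessity reading: intervene on the antecedent, develop normally, and read off the consequent.

```lean
/-- Toy model: one equation B := A. A situation assigns A;
    normal development computes (A, B) from it. Names are illustrative. -/
structure Situation where
  a : Bool

def normalDev (s : Situation) : Bool × Bool := (s.a, s.a)  -- (A, B := A)

/-- Intervention-based selection: do(A := v), then develop normally. -/
def select (s : Situation) (v : Bool) : Bool × Bool :=
  normalDev { s with a := v }

-- "If A were false, B would be false": A is necessary for B here.
#eval (select ⟨true⟩ false).2  -- false
```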
Selectional counterfactual = `dist` applied to closest worlds mapped through B. The selectional theory uses the same distributivity operator as `DIST_π` (@cite{santorio-2018}).
The selectional counterfactual is literally supervaluation (@cite{fine-1975}) over closest worlds. Each closest world is a specification point — a legitimate resolution of the selection-function tie. When all closest worlds agree on B, the counterfactual is definite; when they disagree, it is indefinite.
Combined with `selectional_eq_dist`, this shows three independent
implementations are the same operator:
- `Semantics.Supervaluation.superTrue` (Finset-based, general)
- `Core.Duality.dist` (List-based, `Truth3.lean`)
- `selectionalCounterfactual` (List-based, match on closest worlds)
Selectional counterfactual = supervaluation over closest worlds.
When the closest-worlds set is non-empty, the selectional semantics
equals superTrue B over the closest worlds as a specification space.
This makes explicit that Stalnaker's "supervaluate over ties" IS
Fine's supervaluation with Spec = W and admissible = closest(w, A).
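As a definitional sketch (toy, list-based; the real `superTrue` is Finset-based), the identity reads: the selectional verdict is literally "supervaluationally true / supervaluationally false / indefinite" over the closest worlds.

```lean
inductive T3 where
  | true
  | false
  | gap
deriving Repr

/-- Fine-style "definitely true" over a finite specification space. -/
def superTrue {W : Type} (spec : List W) (B : W → Bool) : Bool :=
  spec.all B

/-- "Definitely false": B fails at every specification point. -/
def superFalse {W : Type} (spec : List W) (B : W → Bool) : Bool :=
  spec.all (fun w => !B w)

/-- Selectional counterfactual, phrased as supervaluation over closest worlds. -/
def selCF {W : Type} (closest : List W) (B : W → Bool) : T3 :=
  if superTrue closest B then .true
  else if superFalse closest B then .false
  else .gap

#eval selCF [0, 1] (fun w => w == 0)  -- T3.gap: the tie-worlds disagree
```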