Equations
- RSA.EgreEtAl2023.instBEqValue.beq x✝ y✝ = (x✝.ctorIdx == y✝.ctorIdx)
Equations
- RSA.EgreEtAl2023.instBEqTolerance.beq x✝ y✝ = (x✝.ctorIdx == y✝.ctorIdx)
Equations
- RSA.EgreEtAl2023.exactlyMeaning n x = (x.toNat == n)
BIR weight: Σ_{y ≥ |n-x|} P(y) under uniform P(y) on {0,...,n}. Section 3.2.2, p.1085: y ranges over {0,...,n}, not the full value domain.
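Under uniform P(y), the sum Σ_{y ≥ |n-x|} P(y) is just (number of compatible tolerances) / (n+1). A minimal Lean sketch of that count, with illustrative names (`natDist` and `birWeightCount` are not this file's definitions):

```lean
-- |n - x| on Nat, avoiding truncated subtraction
def natDist (n x : Nat) : Nat := max n x - min n x

-- Unnormalized BIR weight: how many tolerances y ∈ {0,…,n} satisfy y ≥ |n - x|
def birWeightCount (n x : Nat) : Nat :=
  (List.range (n + 1)).countP (fun y => decide (natDist n x ≤ y))

#eval birWeightCount 3 3  -- 4: every tolerance 0,1,2,3 covers x = 3
#eval birWeightCount 3 1  -- 2: only tolerances 2 and 3 cover x = 1
#eval birWeightCount 3 0  -- 1: only tolerance 3 covers x = 0
```

Dividing by n+1 recovers the docstring's weight; note the count equals n − |n−x| + 1, the numerator of the closed form stated below for the posterior.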
BIR posterior = L0 for "around n".
Closed form (Section 3.2.2): P(x=k | around n) = (n - |n-k| + 1) / (n+1)²
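As a quick sanity check (a sketch, not this file's code): the unnormalized weights n − |n−k| + 1 over k ∈ {0,…,2n} should sum to the normalizer (n+1)².

```lean
-- Unnormalized closed-form weight n - |n-k| + 1, on Nat
def cfWeight (n k : Nat) : Nat := n + 1 - (max n k - min n k)

#eval (List.range 7).map (cfWeight 3)                    -- [1, 2, 3, 4, 3, 2, 1]
#eval ((List.range 7).map (cfWeight 3)).foldl (· + ·) 0  -- 16 = (3 + 1)²
```

The triangular profile [1, 2, 3, 4, 3, 2, 1] is exactly the shape the theorems below (v3 > v2 > v1 > v0, symmetry) describe.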
Tolerance posterior: marginalize BIR joint over values.
BIR produces triangular posterior: v3 > v2 > v1 > v0.
BIR posterior is symmetric: P(n+k) = P(n-k).
Ratio Inequality: posterior concentrates more on center than prior. Under uniform prior, reduces to P(v3|around3) / P(v1|around3) > 1.
"Around" conveys shape (peaked); "between" does not (flat). Peak-to-edge ratio: around = 7/4, between = 1.
"Around" has wider support than narrow "between".
"Around 3" covers nearby values; "exactly 3" does not.
"Between 1 5" assigns uniform probability across its interval.
BIR joint marginalizes to favor large tolerances (more states compatible). With y ∈ {0,...,3}, y3 has 7 compatible values while y0 has 1.
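The compatible-state counts can be sketched directly (illustrative code, assuming the value domain {0,…,6} with n = 3):

```lean
-- Number of values x ∈ {0,…,6} with |3 - x| ≤ y, for each tolerance y
def compatibleCount (y : Nat) : Nat :=
  (List.range 7).countP (fun x => decide (max 3 x - min 3 x ≤ y))

#eval (List.range 4).map compatibleCount  -- [1, 3, 5, 7]: 2y + 1 values each
```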
Adjacent values have similar BIR probabilities: each adjacent ratio is ≥ 50%.
Cumulative sorites effect: P(v3) > P(v0).
Speaker utility: U(m, o) = -D_KL(P_o || L⁰_m).
Equations
- RSA.EgreEtAl2023.speakerUtility observed l0 = 0 - RSA.EgreEtAl2023.klDivergence✝ (RSA.EgreEtAl2023.speakerBelief observed) l0
For a speaker who observed 3, "around 3" has better utility than "between 0 6" (same support, but flat). This is the paper's key result: peaked shape yields lower KL from peaked belief.
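A rough numeric illustration of this comparison (Float; the peaked belief and the two L⁰ vectors are stand-ins on the shared support v1,…,v5, not this file's exact definitions):

```lean
-- U(m, o) = -KL(P_o ‖ L⁰_m) = Σ_w P_o(w) · log (L⁰_m(w) / P_o(w))
def negKL (p q : List Float) : Float :=
  (p.zip q).foldl (fun acc (pi, qi) => acc + pi * Float.log (qi / pi)) 0

def belief : List Float := [1/6, 1/6, 1/3, 1/6, 1/6]  -- peaked around v3

#eval negKL belief [2/16, 3/16, 4/16, 3/16, 2/16]  -- "around 3" (triangular)
#eval negKL belief [1/7, 1/7, 1/7, 1/7, 1/7]       -- "between 0 6" (flat)
```

The first value is larger (≈ −0.15 vs ≈ −0.39): the triangular L⁰ tracks the peaked belief more closely, so its KL penalty is smaller.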
Same support: P(w|o₁) > 0 ↔ P(w|o₂) > 0.
Equations
- RSA.EgreEtAl2023.SameSupport d₁ d₂ = ∀ (x : α), d₁ x > 0 ↔ d₂ x > 0
Weak Quality: ∃ i, Quality(m, o, i).
Equations
- RSA.EgreEtAl2023.RespectsWeakQuality m_true obs = ∃ (i : I), RSA.EgreEtAl2023.RespectsQuality m_true obs i
(A-1a) Quality preserved under same support.
(A-1b) Weak Quality preserved under same support.
U¹(m, o, i) = Σ_w P(w|o) · log L⁰(w | m, i) — speaker utility at level 1. This is the KL-based utility: higher when L⁰ matches the observation.
S¹(m | o, i) = SoftMax over U¹ utilities across messages.
(A-6) Core Lemma over ℝ: the utility difference U(m,d₂,i) - U(m,d₁,i) is constant across all messages m and interpretations i, provided Σd₁ = Σd₂.
Under Quality, log L⁰(w|m,i) = f(w) + c(m,i) where f(w) = log prior(w) and c(m,i) = −log Z(m,i). Since f doesn't depend on m,i and Σd₁ = Σd₂, the c(m,i) term cancels in the difference, making K independent of m and i.
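Spelling out the cancellation (same symbols as above):

U(m, d₂, i) − U(m, d₁, i) = Σ_w (d₂(w) − d₁(w)) · log L⁰(w|m,i)
  = Σ_w (d₂(w) − d₁(w)) · (f(w) + c(m,i))
  = Σ_w (d₂(w) − d₁(w)) · f(w) + c(m,i) · (Σd₂ − Σd₁)
  = Σ_w (d₂(w) − d₁(w)) · f(w),

where the c(m,i) term vanishes because Σd₁ = Σd₂. The remaining sum is a constant K independent of m and i.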
(A-7) Same support → S¹ equal over ℝ: when utility vectors differ by a constant, softmax is invariant by Core.softmax_add_const.
By A-6, U¹(·, d₂, i) = U¹(·, d₁, i) + K for some constant K. By A-5 (translation invariance), softmax(u + K, α) = softmax(u, α).
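A small Float sketch of the translation invariance used here (an illustrative `softmax`, not the file's Core definition):

```lean
def softmax (α : Float) (u : List Float) : List Float :=
  let e := u.map (fun x => Float.exp (α * x))
  let z := e.foldl (· + ·) 0
  e.map (· / z)

#eval softmax 1.0 [0.0, 1.0, 2.0]
#eval softmax 1.0 [5.0, 6.0, 7.0]  -- same probabilities: a constant shift cancels
```

The constant factor exp(αK) appears in both numerator and denominator, so it drops out.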
(A-8) LU Limitation over ℝ: same support → Sⁿ(m|o₁) = Sⁿ(m|o₂) for all n ≥ 1. At level 1, this is a direct corollary of A-7. The paper's full inductive argument (higher recursion depths) follows the same pattern: each Lⁿ is built from Sⁿ⁻¹ which are equal by inductive hypothesis, so Uⁿ differs by a constant, so Sⁿ is equal by softmax translation invariance.
BIR and WIR differ quantitatively under uniform priors.
Equations
- RSA.EgreEtAl2023.obs_peaked RSA.EgreEtAl2023.Value.v1 = 1 / 6
- RSA.EgreEtAl2023.obs_peaked RSA.EgreEtAl2023.Value.v2 = 1 / 6
- RSA.EgreEtAl2023.obs_peaked RSA.EgreEtAl2023.Value.v3 = 1 / 3
- RSA.EgreEtAl2023.obs_peaked RSA.EgreEtAl2023.Value.v4 = 1 / 6
- RSA.EgreEtAl2023.obs_peaked RSA.EgreEtAl2023.Value.v5 = 1 / 6
- RSA.EgreEtAl2023.obs_peaked x✝ = 0
Equations
- RSA.EgreEtAl2023.obs_flat RSA.EgreEtAl2023.Value.v1 = 1 / 5
- RSA.EgreEtAl2023.obs_flat RSA.EgreEtAl2023.Value.v2 = 1 / 5
- RSA.EgreEtAl2023.obs_flat RSA.EgreEtAl2023.Value.v3 = 1 / 5
- RSA.EgreEtAl2023.obs_flat RSA.EgreEtAl2023.Value.v4 = 1 / 5
- RSA.EgreEtAl2023.obs_flat RSA.EgreEtAl2023.Value.v5 = 1 / 5
- RSA.EgreEtAl2023.obs_flat x✝ = 0
C.1: Standard utility U_std(m,o) = Σ_w P(w|o) · log(Σ_{o'} L(w,o')). Under standard utility, U_std differs for same-support observations because the marginal Σ_{o'} L(w,o') washes out observation-specific shape.
C.2: Bergen utility U_bergen(m,o) = Σ_w P(w|o) · log L(w|o). Under Bergen utility, the observation enters both the weight and the listener posterior, so same-support observations yield different utilities (the peaked observation gets higher utility from a peaked L0).
Peaked observation has better utility from triangular L0 than flat does. This is because the peaked observation puts more weight on center values where L0 also has higher probability — better KL alignment.
Both observations get the SAME utility under a uniform L0 (from "between"). This demonstrates the LU limitation: uniform L0 cannot distinguish shapes.
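Both claims can be checked numerically (a Float sketch using this file's obs_peaked / obs_flat values on v1,…,v5; the expected-log-score utility and the two L⁰ vectors are illustrative):

```lean
-- Expected log score: Σ_w P(w|o) · log L⁰(w)
def expLogScore (obs l0 : List Float) : Float :=
  (obs.zip l0).foldl (fun acc (p, q) => acc + p * Float.log q) 0

def peaked : List Float := [1/6, 1/6, 1/3, 1/6, 1/6]
def flat   : List Float := [1/5, 1/5, 1/5, 1/5, 1/5]
def triL0  : List Float := [2/16, 3/16, 4/16, 3/16, 2/16]  -- from "around 3"
def unifL0 : List Float := [1/5, 1/5, 1/5, 1/5, 1/5]       -- from "between"

#eval expLogScore peaked triL0 > expLogScore flat triL0  -- true: peaked wins
#eval expLogScore peaked unifL0  -- ≈ -1.609 = log (1/5), matching flat's score
```

Under the uniform L⁰ every supported value contributes log (1/5), so any observation summing to 1 on that support gets the same utility, which is the LU limitation in miniature.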
BIR weight = marginalization of aroundMeaning over valid tolerances y ≤ n.
BIR (L0) ranking matches closed-form prediction: v3 > v2 > v1 > v0.
BIR posterior matches closed-form for each value (n=3).