Dong, Hu, Hui, Zhang, Vulić, Bobu & Collier (2026) #
@cite{dong-etal-2026}
Value of Information: A Framework for Human–Agent Communication.
Overview #
LLM agents face a clarify-or-commit dilemma: ask the user for clarification (incurring cognitive cost) or act on incomplete information (risking error). The paper proposes a Value of Information (VoI) framework: the agent asks a question q only when VoI(q) exceeds the communication cost c.
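As a minimal Lean sketch of the ask/commit rule (the names `shouldAsk`, `voi`, and `Question` are illustrative, not the formalization's actual API):

```lean
/-- Illustrative only: ask `q` exactly when its value of information
    exceeds the communication cost `c`. -/
def shouldAsk {Question : Type} (voi : Question → Float) (c : Float)
    (q : Question) : Prop :=
  voi q > c
```

Everything else in the framework goes into computing `voi`; the ask/commit decision itself is a threshold comparison.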
RSA framing #
The paper explicitly adopts an RSA perspective (@cite{frank-goodman-2012};
@cite{goodman-frank-2016}), viewing dialogue as rational action. VoI(q)
(eq. 4: V_post(b,q) − V(b)) is structurally the same as questionUtility
(@cite{van-rooy-2003}): the expected gain in decision value from asking q.
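Written out (a reconstruction from the quantities named above, with b the current belief over targets t, a an action, and r an answer to q):

```latex
V(b) = \max_a \; \mathbb{E}_{t \sim b}\left[ U(t, a) \right]

V_{\mathrm{post}}(b, q) = \mathbb{E}_{r \sim P(r \mid q, b)}
  \left[ \max_a \; \mathbb{E}_{t \sim b(\cdot \mid r)}\left[ U(t, a) \right] \right]

\mathrm{VoI}(q) = V_{\mathrm{post}}(b, q) - V(b)
```

Asking is rational exactly when this difference exceeds the communication cost c.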
Connection to @cite{hawkins-etal-2025} #
VoI captures WHETHER to ask (the questioner's decision), while the commit action (argmax over expected utility) captures WHAT to do. The VoI framework is complementary to @cite{hawkins-etal-2025}'s respondent model.
Risk-Sensitivity and Action Utility #
The key qualitative finding (Appendix A, Figure 4): in the Mixed 20Q task, the VoI agent asks more questions for medical diagnosis (U = 10) than animal guessing (U = 1), because higher stakes increase expected regret.
Action-utility scoring (@cite{hawkins-etal-2025}'s β > 0) encodes stakes
into s1Score: exp(α · U(target, guess)) scales with reward k. In richer
models (multiple targets with partial matches), this creates δ-sensitive
S1 preferences — the mechanism behind the @cite{tsvilodub-etal-2026}
formalization's cross-config comparisons.
In this binary identification task, however, the action-utility effect is degenerate: the L0 gate zeros the wrong guess, so S1 assigns probability 1 to the correct guess regardless of k. The task is too simple for action utility to produce qualitative differences — both k = 1 and k = 10 yield identical S1/L1 predictions after normalization.
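The degeneracy can be checked by hand: with two targets, the gated score is exp(α·k) on the correct guess and 0 on the wrong one, so

```latex
S_1(g_{\mathrm{correct}} \mid t)
  = \frac{e^{\alpha k}}{e^{\alpha k} + 0}
  = 1 \qquad \text{for all } k > 0,
```

identically at k = 1 and k = 10.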
Binary identification task (simplified from the paper's Mixed-Stakes 20 Questions, which uses 100 animals / 15 diseases).
Boolean match: does guess match target?
Equations
- targetMatches t₁ t₁ = true
- targetMatches t₂ t₂ = true
- targetMatches _ _ = false (all other cases)
Binary identification as RSA reference game with action-utility scoring.
s1Score(L0, α, target, guess) = if L0(target|guess) = 0 then 0 else exp(α · k)
At k = 1 and k = 10, S1(correct|target) = 1 after normalization — the binary task is degenerate. Action-utility scoring IS the right mechanism (@cite{hawkins-etal-2025}'s β = 1), but the task is too simple for it to produce qualitative differences.
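A self-contained Lean sketch of this scoring rule (the `Target` constructors and the names `s1Score` and the L0 gate follow the module's terminology, but the signatures here are illustrative):

```lean
inductive Target where
  | t₁ | t₂
  deriving BEq

/-- Literal listener gate: 1 on a match, 0 otherwise (sketch of the L0 gate). -/
def l0 (t g : Target) : Float :=
  if t == g then 1 else 0

/-- Action-utility S1 score: guesses the literal listener rejects score 0;
    the surviving guess is weighted by `exp (α * k)`. -/
def s1Score (α k : Float) (t g : Target) : Float :=
  if l0 t g == 0 then 0 else Float.exp (α * k)
```

Normalizing `s1Score` over guesses gives S1(correct | t) = 1 at any k, which is exactly the degeneracy described here.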
Animal guessing (k = 1).
Medical diagnosis (k = 10).
Both configs produce identical S1/L1 predictions — the binary task is degenerate. The L0 gate zeros wrong guesses, so S1 assigns probability 1 to the correct guess at any k.
S1 prefers correct guess (medical diagnosis, k = 10). Same qualitative prediction despite 10× stakes.
L1 correctly infers target from guess (medical). Same qualitative prediction despite 10× stakes.