Signal Detectability Theory (@cite{luce-1959}, §2.E) #
Signal Detection Theory (SDT) models the observer's task of discriminating between two hypotheses — "signal present" (sN) and "noise only" (N) — when each hypothesis generates a noisy internal response. Under the equal-variance Gaussian model, the noise distribution has mean 0 and the signal+noise distribution has mean d' (the sensitivity or detectability parameter), both with unit variance. An equivalent symmetric parameterization, used in the formulas below, centers the two means at -d'/2 and +d'/2.
The observer's decision rule is to respond "signal" when the internal response exceeds a criterion c. This yields two key probabilities:
- Hit rate: H = P("signal" | sN) = 1 - Phi(c - d'/2)
- False alarm rate: F = P("signal" | N) = 1 - Phi(c + d'/2)
where Phi is the standard normal CDF.
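These two formulas are easy to check numerically. The sketch below is an illustration in Python, using the standard library's `statistics.NormalDist` for Phi; the helper names are illustrative, not part of the formalization.

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def hit_rate(d_prime: float, c: float) -> float:
    """H = 1 - Phi(c - d'/2): mass of the signal+noise
    distribution (mean +d'/2) above the criterion."""
    return 1 - Phi(c - d_prime / 2)

def false_alarm_rate(d_prime: float, c: float) -> float:
    """F = 1 - Phi(c + d'/2): mass of the noise
    distribution (mean -d'/2) above the criterion."""
    return 1 - Phi(c + d_prime / 2)

# Unbiased criterion (c = 0), d' = 1:
# H = Phi(0.5) ~ 0.691, F = Phi(-0.5) ~ 0.309
print(hit_rate(1.0, 0.0), false_alarm_rate(1.0, 0.0))
```

Note that at c = 0 the two rates are symmetric about 1/2, which is what "unbiased" means in this parameterization.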
The ROC curve (Receiver Operating Characteristic) traces out the (F, H) pairs as c varies, for a fixed d'. When d' > 0, the curve lies above the diagonal (H > F), reflecting genuine discriminability.
Connection to Luce's Choice Framework #
Luce (§2.E) shows that SDT can be embedded in his choice-theoretic framework. The likelihood ratio at observation x is:
L(x) = f_{sN}(x) / f_N(x) = exp(d' * x - d'^2/2)
This is a ratio scale value, and the observer's "choice" between reporting signal vs noise follows a Luce model with v(signal)/v(noise) = L(x).
Connection to Thurstone #
SDT is essentially Thurstone's discriminal process theory (§2.B-D) applied to the two-alternative detection context. The d' parameter is the standardized distance between the two discriminal dispersions, and the logistic approximation to the Gaussian CDF connects SDT choice probabilities to the Luce-Thurstone logistic choice model.
A Signal Detection Theory model with equal-variance Gaussian assumptions.
d_prime: sensitivity (d'), the standardized distance between the signal+noise and noise-only distribution means. Non-negative by convention (d' = 0 means no discriminability).

criterion: the observer's response criterion c. The observer responds "signal" when the internal response exceeds c. c = 0 is unbiased; c > 0 is conservative (favors "noise"); c < 0 is liberal (favors "signal").
- d_prime : ℝ
Sensitivity: distance between signal+noise and noise means (in SD units).
- criterion : ℝ
Response criterion: threshold for "signal" response.
Sensitivity is non-negative.
Instances For
Hit rate: P("signal" | signal present) = 1 - Phi(c - d'/2).
Under the equal-variance Gaussian model, the signal+noise distribution has mean d' and variance 1. The observer responds "signal" when the internal response exceeds c. After standardizing: H = P(X > c | sN) = P(Z > c - d') = 1 - Phi(c - d')
In the symmetric parameterization (means at +d'/2 and -d'/2): H = 1 - Phi(c - d'/2).
Instances For
False alarm rate: P("signal" | noise only) = 1 - Phi(c + d'/2).
Under the equal-variance Gaussian model, the noise distribution has mean 0 and variance 1. The observer responds "signal" when the internal response exceeds c. After standardizing: F = P(X > c | N) = P(Z > c) = 1 - Phi(c)
In the symmetric parameterization (means at +d'/2 and -d'/2): F = 1 - Phi(c + d'/2).
Equations
- m.falseAlarmRate = 1 - Core.normalCDF (m.criterion + m.d_prime / 2)
Instances For
Hit rate is between 0 and 1.
False alarm rate is between 0 and 1.
The probit function (inverse normal CDF): for p ∈ (0, 1), returns
the unique x with Φ(x) = p. Returns 0 for out-of-range inputs.
Existence follows from the Intermediate Value Theorem applied to the
continuous, strictly monotone normalCDF with limits 0 at -∞ and 1 at +∞.
Equations
- Core.probit p = if h : 0 < p ∧ p < 1 then Classical.choose ⋯ else 0
Instances For
Recover d' from hit and false alarm rates. d' = Phi^{-1}(1 - F) - Phi^{-1}(1 - H) = z(H) - z(F) where z = probit (the quantile function of the standard normal).
Equations
- Core.dPrimeFromRates hitRate falseAlarmRate = Core.probit hitRate - Core.probit falseAlarmRate
Instances For
Roundtrip: recovering d' from the hit and false alarm rates of an SDT model returns the original d'.
d' recovered from hit rate H and false alarm rate F is positive iff H > F. This is the fundamental connection between SDT sensitivity and observable discrimination: an observer with d' > 0 can tell signal from noise above chance.
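Both the roundtrip and the sign property can be illustrated numerically. In this Python sketch, `statistics.NormalDist`'s `cdf` and `inv_cdf` stand in for `Core.normalCDF` and `Core.probit`:

```python
from statistics import NormalDist

nd = NormalDist()
Phi, probit = nd.cdf, nd.inv_cdf

def d_prime_from_rates(H: float, F: float) -> float:
    # d' = z(H) - z(F), with z the standard normal quantile
    return probit(H) - probit(F)

d, c = 1.5, 0.2
H = 1 - Phi(c - d / 2)  # hit rate of the model
F = 1 - Phi(c + d / 2)  # false alarm rate of the model
print(d_prime_from_rates(H, F))  # recovers 1.5 (up to rounding)
```

Swapping the arguments flips the sign: an observer whose "hit rate" falls below the false alarm rate gets a negative recovered d'.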
The ROC curve for a given d': maps false alarm rate to hit rate as the criterion varies. Parameterized by criterion c:
(F(c), H(c)) = (1 - Phi(c + d'/2), 1 - Phi(c - d'/2))
Eliminating c: H = Phi(Phi^{-1}(F) + d'), or equivalently H = 1 - Phi(Phi^{-1}(1-F) - d').
We define the ROC curve directly as a function of the false alarm rate.
Equations
- Core.rocCurve d_prime falseAlarmRate = 1 - Core.normalCDF (Core.probit (1 - falseAlarmRate) - d_prime)
Instances For
For d' > 0, the ROC curve lies above the diagonal: H > F.
This is the fundamental result of SDT — positive sensitivity means the observer performs above chance. Intuitively, when d' > 0, the signal+noise distribution is shifted right of the noise distribution, so for any criterion, more of the signal distribution exceeds the criterion than the noise distribution.
Proof: H - F = [1 - Phi(c - d'/2)] - [1 - Phi(c + d'/2)] = Phi(c + d'/2) - Phi(c - d'/2) > 0 since Phi is strictly increasing and d' > 0 implies c + d'/2 > c - d'/2.
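The same inequality can be spot-checked over a grid of criteria. This is a numerical illustration in Python (with `statistics.NormalDist` standing in for the normal CDF), not part of the formal proof:

```python
from statistics import NormalDist

Phi = NormalDist().cdf
d = 0.8  # any d' > 0

for i in range(-20, 21):
    c = i / 5.0  # criteria from -4 to 4
    H = 1 - Phi(c - d / 2)
    F = 1 - Phi(c + d / 2)
    # H - F = Phi(c + d'/2) - Phi(c - d'/2) > 0
    assert H > F
print("ROC lies above the diagonal for d' =", d)
```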
Higher d' implies higher area under the ROC curve.
The area under the ROC curve (AUC) equals Phi(d'/sqrt(2)) for the equal-variance Gaussian model. Since Phi is increasing, higher d' yields higher AUC.
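The closed form Phi(d'/sqrt(2)) can be sanity-checked by numerically integrating the ROC curve. A Python sketch (the trapezoidal grid and the endpoint clipping, where the probit diverges, are numerical conveniences):

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
Phi, probit = nd.cdf, nd.inv_cdf

def roc(d: float, F: float) -> float:
    # H = 1 - Phi(Phi^{-1}(1 - F) - d')
    return 1 - Phi(probit(1 - F) - d)

d = 1.0
n = 4000
eps = 1e-6  # clip endpoints: probit is undefined at 0 and 1
xs = [eps + (1 - 2 * eps) * i / n for i in range(n + 1)]
ys = [roc(d, F) for F in xs]
# trapezoidal rule over the clipped interval
auc = sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i]) for i in range(n))
print(auc, Phi(d / sqrt(2)))  # both ~0.760
```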
The Gaussian likelihood ratio at observation x, given d':
L(x) = f_{sN}(x) / f_N(x) = exp(d' * x - d'^2 / 2)
where f_N is the density of N(0,1) and f_{sN} the density of N(d', 1). This follows from:
log L(x) = log f_{sN}(x) - log f_N(x)
= [-(x-d')^2/2] - [-x^2/2]
= d'*x - d'^2/2.
Instances For
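The closed form exp(d' * x - d'^2 / 2) can be checked directly against the ratio of Gaussian densities. An illustrative Python sketch, using `statistics.NormalDist.pdf` for the densities:

```python
from math import exp
from statistics import NormalDist

def likelihood_ratio(d: float, x: float) -> float:
    # L(x) = exp(d' * x - d'^2 / 2)
    return exp(d * x - d * d / 2)

d, x = 1.3, 0.4
# ratio of the N(d', 1) and N(0, 1) densities at x
direct = NormalDist(d, 1).pdf(x) / NormalDist(0, 1).pdf(x)
print(likelihood_ratio(d, x), direct)  # agree
```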
The likelihood ratio is always positive (being an exponential).
SDT embedded as a Luce choice model: the "choice" between reporting signal vs noise, given observation x, has score ratio equal to the likelihood ratio.
In Luce's framework (Chapter 1), the response ratio is: P("signal" | x) / P("noise" | x) = v(signal) / v(noise) = L(x)
We construct a RationalAction on Fin 2 (signal = 0, noise = 1) with scores proportional to the likelihood ratio.
Equations
Instances For
The policy of the Luce-SDT model for "signal" (action 0) is the likelihood ratio divided by (1 + likelihood ratio), i.e., the logistic transform of the log-likelihood ratio.
The odds ratio of the SDT Luce model equals the likelihood ratio.
This is the core connection: Luce's choice-theoretic ratio scale v(signal)/v(noise) IS the likelihood ratio from SDT.
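The identity behind this connection — that the Luce choice probability L/(1+L) is the logistic transform of the log-likelihood ratio, and that the odds ratio recovers L(x) — can be illustrated numerically (Python sketch; the helper names are illustrative):

```python
from math import exp

def likelihood_ratio(d: float, x: float) -> float:
    # L(x) = exp(d' * x - d'^2 / 2)
    return exp(d * x - d * d / 2)

def logistic(t: float) -> float:
    return 1 / (1 + exp(-t))

d, x = 1.0, 0.8
L = likelihood_ratio(d, x)
p_signal = L / (1 + L)                     # Luce choice rule
p_logistic = logistic(d * x - d * d / 2)   # logistic of log L(x)
print(p_signal, p_logistic)  # identical up to floating point
# the odds ratio of the choice model recovers L(x)
print(p_signal / (1 - p_signal), L)
```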
Logistic Approximation Constant #
The SDT hit rate Φ(d'/2 - c) is well-approximated by logistic(k·(d'/2-c))
where k = π/√3 ≈ 1.814 is the variance-matching constant:
`Φ(x) ≈ logistic(x · π/√3)` with max error ~0.023
This is the Thurstone-Luce bridge for the detection context: both SDT
(Gaussian noise) and the Gumbel-Luce model (Gumbel noise) are Random
Utility Models. The Gumbel-Luce model gives exactly logistic
probabilities (McFadden's theorem, GumbelLuce.lean). The Gaussian
model gives Φ. These agree up to the numerical approximation Φ ≈ logistic.
The constant k = π/√3 equals thurstoneLuceK(1/√2), unifying the SDT
and Thurstone parameterizations (see logisticApproxConst_eq_thurstoneLuceK
in GaussianChoice.lean).
The logistic approximation constant: k = π/√3 ≈ 1.814.
This is the variance-matching scale factor between the normal and
logistic distributions: the standard logistic has variance π²/3,
so Φ(x) ≈ logistic(x · π/√3) with max error ~0.023.
Note: the optimal constant minimizing the sup-norm is ~1.702
(slightly smaller than π/√3). We use π/√3 because it has a clean
analytical derivation from variance matching and equals
thurstoneLuceK(1/√2) (see logisticApproxConst_eq_thurstoneLuceK).
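Both quoted error figures — ~0.023 for the variance-matching constant and ~0.0095 for the sup-norm optimum — can be reproduced with a grid search. A numerical sketch in Python (the grid resolution and range are illustrative choices):

```python
from math import exp, pi, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def logistic(t: float) -> float:
    return 1 / (1 + exp(-t))

def max_abs_err(k: float) -> float:
    # sup-norm distance between Phi(x) and logistic(k*x) on [-8, 8]
    grid = [i / 100 for i in range(-800, 801)]
    return max(abs(Phi(x) - logistic(k * x)) for x in grid)

k_var = pi / sqrt(3)        # ~1.814, variance matching
print(max_abs_err(k_var))   # ~0.023
print(max_abs_err(1.702))   # ~0.0095, the sup-norm optimum
```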