
Linglib.Core.Agent.SignalDetection

Signal Detectability Theory (@cite{luce-1959}, §2.E) #

Signal Detection Theory (SDT) models the observer's task of discriminating between two hypotheses — "signal present" (sN) and "noise only" (N) — when each hypothesis generates a noisy internal response. Under the equal-variance Gaussian model, the noise distribution has mean 0 and the signal+noise distribution has mean d' (the sensitivity or detectability parameter), both with unit variance.

The observer's decision rule is to respond "signal" when the internal response exceeds a criterion c. This yields two key probabilities:

H = P("signal" | sN) = 1 - Phi(c - d')
F = P("signal" | N) = 1 - Phi(c)

where Phi is the standard normal CDF.

The ROC curve (Receiver Operating Characteristic) traces out the (F, H) pairs as c varies, for a fixed d'. When d' > 0, the curve lies above the diagonal (H > F), reflecting genuine discriminability.
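As a minimal numerical sketch of these formulas (Python standard library only; the function names here are illustrative, not the Lean API), using the symmetric parameterization with means at ±d'/2:

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def hit_rate(d_prime, c):
    # H = 1 - Phi(c - d'/2): signal+noise distribution centered at +d'/2
    return 1 - Phi(c - d_prime / 2)

def false_alarm_rate(d_prime, c):
    # F = 1 - Phi(c + d'/2): noise distribution centered at -d'/2
    return 1 - Phi(c + d_prime / 2)

# For d' > 0 the ROC point lies above the diagonal (H > F) at every criterion
for c in (-1.0, 0.0, 1.0):
    H, F = hit_rate(1.5, c), false_alarm_rate(1.5, c)
    assert H > F
```

At c = 0 (unbiased) with d' = 0, both rates are exactly 1/2: chance performance.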

Connection to Luce's Choice Framework #

Luce (§2.E) shows that SDT can be embedded in his choice-theoretic framework. The likelihood ratio at observation x is:

L(x) = f_{sN}(x) / f_N(x) = exp(d' * x - d'^2/2)

This is a ratio scale value, and the observer's "choice" between reporting signal vs noise follows a Luce model with v(signal)/v(noise) = L(x).
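The closed form for L(x) can be checked directly against the ratio of the two Gaussian densities (a numerical sketch in Python; names are illustrative):

```python
from math import exp, pi, sqrt

def phi(x):
    # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

def likelihood_ratio(d_prime, x):
    # Closed form: L(x) = exp(d'*x - d'^2/2)
    return exp(d_prime * x - d_prime ** 2 / 2)

# Agrees with the direct pdf ratio f_{sN}(x) / f_N(x), where f_sN ~ N(d', 1):
# the quadratic terms x^2/2 cancel, leaving the linear exponent d'*x - d'^2/2.
d, x = 1.2, 0.7
assert abs(likelihood_ratio(d, x) - phi(x - d) / phi(x)) < 1e-12
```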

Connection to Thurstone #

SDT is essentially Thurstone's discriminal process theory (§2.B-D) applied to the two-alternative detection context. The d' parameter is the standardized distance between the means of the two discriminal processes (in units of the common discriminal dispersion), and the logistic approximation to the Gaussian CDF connects SDT choice probabilities to the Luce-Thurstone logistic choice model.

structure Core.SDTModel :

A Signal Detection Theory model with equal-variance Gaussian assumptions.

  • d_prime: sensitivity (d'), the standardized distance between the signal+noise and noise-only distribution means. Non-negative by convention (d' = 0 means no discriminability).
  • criterion: the observer's response criterion c. The observer responds "signal" when the internal response exceeds c. c = 0 is unbiased; c > 0 is conservative (favors "noise"); c < 0 is liberal (favors "signal").
  • d_prime : ℝ

    Sensitivity: distance between signal+noise and noise means (in SD units).

  • criterion : ℝ

    Response criterion: threshold for "signal" response.

  • d_prime_nonneg : 0 ≤ self.d_prime

    Sensitivity is non-negative.

Instances For
    noncomputable def Core.SDTModel.hitRate (m : SDTModel) : ℝ

    Hit rate: P("signal" | signal present) = 1 - Phi(c - d'/2).

    Under the equal-variance Gaussian model, the signal+noise distribution has mean d' and variance 1. The observer responds "signal" when the internal response exceeds c. After standardizing: H = P(X > c | sN) = P(Z > c - d') = 1 - Phi(c - d')

    In the symmetric parameterization (means at +d'/2 and -d'/2): H = 1 - Phi(c - d'/2).

    Equations
    Instances For
      noncomputable def Core.SDTModel.falseAlarmRate (m : SDTModel) : ℝ

      False alarm rate: P("signal" | noise only) = 1 - Phi(c + d'/2).

      Under the equal-variance Gaussian model, the noise distribution has mean 0 and variance 1. The observer responds "signal" when the internal response exceeds c. After standardizing: F = P(X > c | N) = P(Z > c) = 1 - Phi(c)

      In the symmetric parameterization (means at +d'/2 and -d'/2): F = 1 - Phi(c + d'/2).

      Equations
      Instances For

        Hit rate is between 0 and 1.

        False alarm rate is between 0 and 1.

        noncomputable def Core.probit (p : ℝ) : ℝ

        The probit function (inverse normal CDF): for p ∈ (0, 1), returns the unique x with Φ(x) = p. Returns 0 for out-of-range inputs.

        Existence follows from the Intermediate Value Theorem applied to the continuous, strictly monotone normalCDF with limits 0 at -∞ and 1 at +∞.

        Equations
        Instances For
          theorem Core.probit_spec {p : ℝ} (hp0 : 0 < p) (hp1 : p < 1) :
          normalCDF (probit p) = p

          Specification: Φ(probit(p)) = p for p ∈ (0, 1).

          Specification: probit(Φ(x)) = x for all x (left inverse).

          noncomputable def Core.dPrimeFromRates (hitRate falseAlarmRate : ℝ) : ℝ

          Recover d' from hit and false alarm rates. d' = Phi^{-1}(1 - F) - Phi^{-1}(1 - H) = z(H) - z(F) where z = probit (the quantile function of the standard normal).
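A numerical sketch of this recovery (Python standard library; illustrative names, not the Lean API), including the roundtrip property stated below:

```python
from statistics import NormalDist

N = NormalDist()
probit = N.inv_cdf  # quantile function of the standard normal

def d_prime_from_rates(H, F):
    # d' = z(H) - z(F), where z = probit
    return probit(H) - probit(F)

# Roundtrip: rates generated from a model (d', c) recover the same d'.
# Symmetric parameterization: H = 1 - Phi(c - d'/2), F = 1 - Phi(c + d'/2).
d, c = 1.5, 0.3
H = 1 - N.cdf(c - d / 2)
F = 1 - N.cdf(c + d / 2)
assert abs(d_prime_from_rates(H, F) - d) < 1e-9
```

The roundtrip works because probit(1 - Phi(t)) = -t, so the criterion c cancels in the difference z(H) - z(F).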

          Equations
          Instances For

            Roundtrip: recovering d' from the hit and false alarm rates of an SDT model returns the original d'.

            theorem Core.probit_strictMono {p₁ p₂ : ℝ} (hp₁_lo : 0 < p₁) (hp₁_hi : p₁ < 1) (hp₂_lo : 0 < p₂) (hp₂_hi : p₂ < 1) (h : p₁ < p₂) :
            probit p₁ < probit p₂

            Probit is strictly monotone on (0, 1): p₁ < p₂ → probit(p₁) < probit(p₂). Since normalCDF is strictly monotone and probit is its inverse, the ordering is preserved.

            theorem Core.dPrimeFromRates_pos_iff {H F : ℝ} (hH_lo : 0 < H) (hH_hi : H < 1) (hF_lo : 0 < F) (hF_hi : F < 1) :
            0 < dPrimeFromRates H F ↔ F < H

            d' recovered from hit rate H and false alarm rate F is positive iff H > F. This is the fundamental connection between SDT sensitivity and observable discrimination: an observer with d' > 0 can tell signal from noise above chance.

            noncomputable def Core.rocCurve (d_prime falseAlarmRate : ℝ) : ℝ

            The ROC curve for a given d': maps false alarm rate to hit rate as the criterion varies. Parameterized by criterion c:

            (F(c), H(c)) = (1 - Phi(c + d'/2), 1 - Phi(c - d'/2))

            Eliminating c: H = Phi(Phi^{-1}(F) + d'), or equivalently H = 1 - Phi(Phi^{-1}(1-F) - d').

            We define the ROC curve directly as a function of the false alarm rate.
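A numerical sketch of the eliminated form and its consistency with the criterion parameterization (Python standard library; illustrative names):

```python
from statistics import NormalDist

N = NormalDist()

def roc_curve(d_prime, F):
    # H = Phi(Phi^{-1}(F) + d'): ROC curve as a function of false alarm rate
    return N.cdf(N.inv_cdf(F) + d_prime)

# Consistency check: the point (F(c), H(c)) from a fixed criterion c
# lies on the curve F |-> roc_curve(d', F).
d, c = 1.0, 0.25
F = 1 - N.cdf(c + d / 2)
H = 1 - N.cdf(c - d / 2)
assert abs(roc_curve(d, F) - H) < 1e-9

# d' = 0 gives the diagonal H = F
assert abs(roc_curve(0.0, 0.3) - 0.3) < 1e-9
```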

            Equations
            Instances For
              theorem Core.roc_diagonal (f : ℝ) (hf : 0 < f) (hf' : f < 1) :
              rocCurve 0 f = f

              For d' = 0, the ROC curve is the diagonal: H = F (chance performance).

              For d' > 0, the ROC curve lies above the diagonal: H > F.

              This is the fundamental result of SDT — positive sensitivity means the observer performs above chance. Intuitively, when d' > 0, the signal+noise distribution is shifted right of the noise distribution, so for any criterion, more of the signal distribution exceeds the criterion than the noise distribution.

              Proof: H - F = [1 - Phi(c - d'/2)] - [1 - Phi(c + d'/2)] = Phi(c + d'/2) - Phi(c - d'/2) > 0 since Phi is strictly increasing and d' > 0 implies c + d'/2 > c - d'/2.

              theorem Core.roc_area_increases_with_dprime (d₁ d₂ : ℝ) (h : d₁ < d₂) :
              normalCDF (d₁ / √2) < normalCDF (d₂ / √2)

              Higher d' implies higher area under the ROC curve.

              The area under the ROC curve (AUC) equals Phi(d'/sqrt(2)) for the equal-variance Gaussian model. Since Phi is increasing, higher d' yields higher AUC.
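The AUC formula can be verified numerically by integrating the ROC curve directly (a Python sketch with the standard library; a plain Riemann sum suffices since the integrand is monotone and bounded):

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def roc(d_prime, F):
    # H = Phi(Phi^{-1}(F) + d')
    return N.cdf(N.inv_cdf(F) + d_prime)

def auc_numeric(d_prime, steps=20_000):
    # Riemann sum over interior grid points; endpoint terms vanish
    # since roc -> 0 as F -> 0 and roc -> 1 as F -> 1.
    h = 1.0 / steps
    return sum(roc(d_prime, i * h) for i in range(1, steps)) * h

# Closed form for the equal-variance Gaussian model: AUC = Phi(d' / sqrt(2))
d = 1.3
assert abs(auc_numeric(d) - N.cdf(d / sqrt(2))) < 1e-3
```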

              noncomputable def Core.likelihoodRatio (d_prime x : ℝ) : ℝ

              The Gaussian likelihood ratio at observation x, given d':

              L(x) = f_{sN}(x) / f_N(x) = exp(d' * x - d'^2 / 2)

              where f_N N(0,1) and f_{sN} N(d', 1). This follows from: log L(x) = log f_{sN}(x) - log f_N(x) = [-(x-d')^2/2] - [-x^2/2] = d'*x - d'^2/2.

              Equations
              Instances For
                theorem Core.likelihoodRatio_pos (d_prime x : ℝ) :
                0 < likelihoodRatio d_prime x

                The likelihood ratio is always positive (being an exponential).

                noncomputable def Core.sdt_as_luce (d_prime x : ℝ) :

                SDT embedded as a Luce choice model: the "choice" between reporting signal vs noise, given observation x, has score ratio equal to the likelihood ratio.

                In Luce's framework (Chapter 1), the response ratio is: P("signal" | x) / P("noise" | x) = v(signal) / v(noise) = L(x)

                We construct a RationalAction on Fin 2 (signal = 0, noise = 1) with scores proportional to the likelihood ratio.

                Equations
                Instances For
                  theorem Core.sdt_luce_signal_prob (d_prime x : ℝ) :
                  (sdt_as_luce d_prime x).policy () 0 = likelihoodRatio d_prime x / (likelihoodRatio d_prime x + 1)

                  The policy of the Luce-SDT model for "signal" (action 0) is the likelihood ratio divided by (1 + likelihood ratio), i.e., the logistic transform of the log-likelihood ratio.

                  theorem Core.sdt_luce_odds_ratio (d_prime x : ℝ) :
                  (sdt_as_luce d_prime x).score () 0 / (sdt_as_luce d_prime x).score () 1 = likelihoodRatio d_prime x

                  The odds ratio of the SDT Luce model equals the likelihood ratio.

                  This is the core connection: Luce's choice-theoretic ratio scale v(signal)/v(noise) IS the likelihood ratio from SDT.
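A numerical sketch of the Luce choice probability (Python, standard library; names are illustrative): P("signal" | x) = L/(L+1), which is algebraically the logistic transform of the log-likelihood ratio, since L/(L+1) = 1/(1 + exp(-log L)).

```python
from math import exp

def likelihood_ratio(d_prime, x):
    # L(x) = exp(d'*x - d'^2/2)
    return exp(d_prime * x - d_prime ** 2 / 2)

def p_signal(d_prime, x):
    # Luce rule with v(signal)/v(noise) = L(x): P("signal" | x) = L / (L + 1)
    L = likelihood_ratio(d_prime, x)
    return L / (L + 1)

def logistic(t):
    return 1 / (1 + exp(-t))

# The choice probability is the logistic of the log-likelihood ratio
d, x = 1.5, 0.4
assert abs(p_signal(d, x) - logistic(d * x - d ** 2 / 2)) < 1e-12
```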

                  Logistic Approximation Constant #

                  The SDT hit rate Φ(d'/2 - c) is well-approximated by logistic(k·(d'/2-c)) where k = π/√3 ≈ 1.814 is the variance-matching constant:

                  `Φ(x) ≈ logistic(x · π/√3)` with max error ~0.023

                  This is the Thurstone-Luce bridge for the detection context: both SDT (Gaussian noise) and the Gumbel-Luce model (Gumbel noise) are Random Utility Models. The Gumbel-Luce model gives exactly logistic probabilities (McFadden's theorem, GumbelLuce.lean). The Gaussian model gives Φ. These agree up to the numerical approximation Φ ≈ logistic.

                  The constant k = π/√3 equals thurstoneLuceK(1/√2), unifying the SDT and Thurstone parameterizations (see logisticApproxConst_eq_thurstoneLuceK in GaussianChoice.lean).

                  noncomputable def Core.logisticApproxConst : ℝ

                  The logistic approximation constant: k = π/√3 ≈ 1.814.

                  This is the variance-matching scale factor between the normal and logistic distributions: the standard logistic has variance π²/3, so Φ(x) ≈ logistic(x · π/√3) with max error ~0.023.

                  Note: the optimal constant minimizing the sup-norm is ~1.702 (slightly smaller than π/√3). We use π/√3 because it has a clean analytical derivation from variance matching and equals thurstoneLuceK(1/√2) (see logisticApproxConst_eq_thurstoneLuceK).
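Both claims can be checked on a grid (a Python sketch, standard library only): the variance-matching constant π/√3 gives a sup-norm error near 0.023, while 1.702 does better.

```python
from math import exp, pi, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def logistic(t):
    return 1 / (1 + exp(-t))

def max_error(k, steps=4000):
    # sup_x |Phi(x) - logistic(k*x)|, approximated on a grid over [-6, 6]
    return max(abs(Phi(x) - logistic(k * x))
               for x in (-6 + 12 * i / steps for i in range(steps + 1)))

k_var = pi / sqrt(3)  # variance-matching constant, ~1.814
assert abs(k_var - 1.8138) < 1e-3
assert max_error(k_var) < 0.025            # ~0.023, as cited above
assert max_error(1.702) < max_error(k_var)  # sup-norm optimum is smaller
```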

                  Equations
                  Instances For