Linglib.Phenomena.Nonliteral.Irony.Studies.SpinosoDiPiano2025

Spinoso-Di Piano et al. @cite{spinoso-di-piano-etal-2025} — (RSA)² #

(RSA)²: A Rhetorical-Strategy-Aware Rational Speech Act Framework for Figurative Language Understanding. An extension of the Rational Speech Act framework @cite{bergen-goodman-2015}, compared here with the irony model of @cite{kao-goodman-2015}.

The Model #

Replaces RSA's literal meaning indicator with a rhetorical function f_r(c, m, u) parameterized by strategy r ∈ {literal, ironic}. The strategy is a latent variable marginalized at L1, yielding the paper's title: (RSA)².

Parameters: α = 1 (paper's default), uniform P(u|c), uniform P(r).
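Written out schematically (our notation, following the paper's Eq 4–7 as described above; this is a sketch of the pipeline, not the paper's exact typography):

```latex
% Schematic (RSA)^2 pipeline; f_r is the 0/1 rhetorical indicator.
L_0(w \mid u, r, c) \;\propto\; f_r(c, w, u)\, P(w \mid c)
S_1(u \mid w, r, c) \;\propto\; L_0(w \mid u, r, c)^{\alpha}\, P(u \mid c)
L_1(w \mid u, c)    \;\propto\; \sum_{r} P(r)\, S_1(u \mid w, r, c)\, P(w \mid c)
```

The L1 line is written in the joint form used by our RSAConfig, marginalizing the strategy inside a single normalization.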

What We Formalize #

We formalize the conceptual hand-specified model from Section 3.2 of the paper, where f_r is a deterministic indicator (0/1). The paper's experimental results use a neural network to approximate f_r from human data; that quantitative fit is outside our scope.

For indicator meanings with one matching world per (u,r) pair, P(m|c) drops out of L0 normalization, so our RSAConfig (which puts worldPrior only in L1) produces equivalent predictions. Similarly, uniform P(u|c) drops out of S1. The joint marginalization in RSAConfig is algebraically equivalent to the paper's per-strategy normalization then mixing (Eq 7).
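The equivalence of joint marginalization and per-strategy normalization-then-mixing is elementary algebra: writing Z_r = Σ_w S1(u|w,r,c) P(w|c) and Z = Σ_r P(r) Z_r (assuming each Z_r > 0), the joint posterior regroups into mixture weights times per-strategy posteriors:

```latex
\sum_{r} \frac{P(r)\, S_1(u \mid w, r, c)\, P(w \mid c)}{Z}
  \;=\; \sum_{r}
        \underbrace{\frac{P(r)\, Z_r}{Z}}_{P(r \mid u,\, c)}
        \cdot
        \underbrace{\frac{S_1(u \mid w, r, c)\, P(w \mid c)}{Z_r}}_{L_1(w \mid u, r, c)}
```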

Comparison to @cite{kao-goodman-2015} #

Both models derive irony from context-dependent pragmatic inference over the same weather domain. The key differences:

| Dimension | @cite{kao-goodman-2015} | (RSA)² @cite{spinoso-di-piano-etal-2025} |
|---|---|---|
| Latent | QUD (state/valence/arousal) | Strategy (literal/ironic) |
| World | Weather × Valence × Arousal (20 states) | Weather only (5 states) |
| Mechanism | Arousal QUD enables valence flip | Antonym mapping enables flip |
| Claim | Affect (arousal) is necessary | Affect is unnecessary |

The simplification IS the result: irony emerges from strategy inference alone, without modeling affect dimensions, while matching @cite{kao-goodman-2015}'s qualitative predictions.

Verified Predictions #

| # | Theorem | Config | Description |
|---|---|---|---|
| 1 | ironic_reading | terribleCfg | "amazing" → terrible weather (Fig 3) |
| 2 | literal_reading | pleasantCfg | "amazing" → amazing weather |
| 3 | infer_ironic | terribleCfg | "amazing" → ironic strategy |
| 4 | infer_literal | pleasantCfg | "amazing" → literal strategy |
| 5 | terrible_ironic | pleasantCfg | "terrible" → amazing weather |
| 6 | terrible_literal | terribleCfg | "terrible" → terrible weather |
| 7 | ok_strategy_neutral | terribleCfg | "ok" → strategies equiprobable |
| 8 | bad_ironic | pleasantCfg | "bad" → good weather (interior scale) |
| 9 | good_ironic | terribleCfg | "good" → bad weather (interior scale) |
| 10 | ok_strategy_neutral_pleasant | pleasantCfg | "ok" → strategies equiprobable |

Theorems 1+2 and 5+6 demonstrate context-dependence (same utterance, opposite interpretation). Theorems 3+4 are unique to (RSA)² — the strategy posterior is directly observable, unlike the QUD posterior in @cite{kao-goodman-2015}. Theorems 7+10 test a boundary case: since opposite(ok) = ok, the ironic and literal strategies produce identical L0 distributions for "ok" in BOTH contexts, making L1's strategy inference uninformative. Theorems 8+9 test interior scale positions (bad/good rather than endpoints terrible/amazing).
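Each row of the table is, schematically, a decidable claim about a finite rational-valued posterior. A sketch of row 1, in which `L1` and the statement shape are assumed for illustration rather than quoted from the library:

```lean
-- Sketch only: the library's actual statement and proof may differ.
theorem ironic_reading :
    ∀ w : Weather, w ≠ .terrible →
      L1 terribleCfg ⟨.amazing⟩ .terrible > L1 terribleCfg ⟨.amazing⟩ w := by
  decide  -- finite, rational-valued model: the inequality is decidable
```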

Structural Properties #

rhetoricalMeaning_swap captures the core mechanism algebraically: ironic meaning at world w equals literal meaning at the antonym world opposite(w). This follows from opposite being an involution and grounds the ironic strategy as "literal interpretation in the opposite world."

irony_iff_prior_favors_antonym is the deepest result: the (RSA)² model's entire behavior reduces to comparing the world prior at two points. Irony emerges iff worldPrior(opposite(u.toWeather)) > worldPrior(u.toWeather). This is a much stronger claim than individual prediction theorems — it explains WHY the cross-model agreement with @cite{kao-goodman-2015} holds: both models agree whenever the weather prior is sufficiently asymmetric, because (RSA)²'s prediction IS just a prior comparison.

Implementation Note #

The paper uses U = W = Weather (utterances are weather descriptions, worlds are weather states). RSAConfig requires distinct types, so we use a thin Utterance wrapper with an explicit toWeather conversion.

Utterance type: weather descriptions used as speech acts. Structurally mirrors Weather but a distinct type for RSAConfig.
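A minimal version of the wrapper (a sketch: the five-point Weather scale follows the theorem table above, and the deriving clauses are illustrative):

```lean
inductive Weather
  | terrible | bad | ok | good | amazing
deriving DecidableEq, Repr

/-- Weather descriptions used as speech acts: same five points as `Weather`,
    wrapped so `RSAConfig` sees a distinct utterance type. -/
structure Utterance where
  toWeather : Weather
deriving DecidableEq, Repr
```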


The two rhetorical strategies from (RSA)². The literal strategy maps utterances to their face-value meaning; the ironic strategy maps them to their evaluative antonym.
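As a two-constructor inductive (a sketch consistent with this docstring; deriving clauses are illustrative):

```lean
inductive Strategy
  | literal  -- face-value meaning
  | ironic   -- evaluative antonym
deriving DecidableEq, Repr
```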


The paper's rhetorical function f_r(c, m, u) (Eq 4), specialized to the hand-specified indicator case:

• literal: true iff the utterance's weather meaning matches the world
• ironic: true iff the antonym of the utterance's meaning matches the world

Derives from opposite rather than enumerating cases.
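The indicator, sketched in Lean (assuming an antonym map `opposite : Weather → Weather` with `opposite .terrible = .amazing`, `opposite .ok = .ok`, and so on; the library's actual definition may differ in naming):

```lean
/-- f_r(c, m, u) as a Bool indicator (paper's Eq 4, hand-specified case). -/
def rhetoricalMeaning : Strategy → Utterance → Weather → Bool
  | .literal, u, w => u.toWeather == w
  | .ironic,  u, w => opposite u.toWeather == w
```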


Strategy swap: ironic meaning at w equals literal meaning at opposite(w). The ironic strategy is structurally equivalent to literal interpretation "in the opposite world." Follows from opposite being an involution.
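In Lean the swap fact is a finite case split away, given that `opposite` is an involution (a sketch with assumed names and signatures):

```lean
theorem rhetoricalMeaning_swap (u : Utterance) (w : Weather) :
    rhetoricalMeaning .ironic u w = rhetoricalMeaning .literal u (opposite w) := by
  -- Both sides unfold to a `Weather` equality test; the involution
  -- `opposite (opposite w) = w` makes them coincide in every case.
  obtain ⟨x⟩ := u
  cases x <;> cases w <;> rfl
```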


(RSA)² model, parametric in weather context prior P(m|c).

Latent := Strategy — the rhetorical strategy is the latent variable. S1 score is L0^α (belief-based, α = 1), uniform strategy and utterance priors. World prior enters at L1 (equivalent to paper's Eq 4–7 for indicator meanings).


Pleasant weather context (priors from @cite{kao-goodman-2015}).


Terrible weather context (priors from @cite{kao-goodman-2015}).


Ironic reading: in terrible weather, L1 hearing "amazing" infers the weather is terrible (not amazing). The listener recognizes the speaker is being ironic — saying the opposite of what they mean. Matches the paper's Figure 3 (right panel).

Literal reading: in pleasant weather, L1 hearing "amazing" infers the weather is amazing (face-value content). Same utterance, opposite interpretation — context (the world prior) determines which strategy dominates.

In terrible weather, L1 infers the speaker is using the ironic strategy when saying "amazing". This is directly observable in (RSA)² — unlike @cite{kao-goodman-2015} where the QUD posterior is the analogous quantity.

In pleasant weather, L1 hearing "terrible" infers the weather is actually amazing — the ironic flip. Analogous to @cite{kao-goodman-2015}'s ironic_valence_flip, but over weather states rather than valence.

Interior irony: in pleasant weather, L1 hearing "bad" infers the weather is good (not bad). Tests the antonym mapping on non-endpoint scale positions: opposite(bad) = good, so the ironic reading maps to good.

Interior irony: in terrible weather, L1 hearing "good" infers the weather is bad (not good). Symmetric to bad_ironic: opposite(good) = bad.

Since opposite(ok) = ok, the ironic and literal strategies produce identical L0 distributions for "ok". The strategy posterior is therefore uninformative — L1 assigns equal probability to both strategies.

The ok boundary case holds in pleasant weather too — the strategy neutrality is context-independent (it's a structural consequence of opposite(ok) = ok, not of the weather prior).
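The neutrality computation, written out (our notation; wp is the context's weather prior): for u = "ok" both strategies' indicators pick out w = ok, so the unnormalized strategy posterior is the same multiple of P(r) in either context, and the posterior collapses to the uniform strategy prior:

```latex
P(r \mid \text{"ok"}, c)
  \;\propto\; P(r) \sum_{w} S_1(\text{"ok"} \mid w, r, c)\, P(w \mid c)
  \;=\; P(r)\, wp(\text{ok})
  \quad\Longrightarrow\quad
P(r \mid \text{"ok"}, c) = \tfrac{1}{2}
```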

L1's unnormalized score is zero at weather states matching neither the literal nor the ironic reading. The (RSA)² model only considers two candidate interpretations per utterance: u.toWeather (literal) and opposite(u.toWeather) (ironic). All other weather states are ruled out.

Proof: meaning(r, u, w) = 0 for both strategies when w matches neither reading, so L0(w|u,r) = 0, hence rpow(0, 1) = 0, hence S1(u|w,r) = 0, hence the L1 score (which sums over strategies) is 0.

Irony in (RSA)² reduces to a prior comparison: L1 assigns higher probability to the ironic reading iff the world prior favors the antonym weather state over the literal one.

This is the paper's core structural claim formalized: affect dimensions and QUD projection are unnecessary for irony — context (the world prior) alone determines whether an utterance is interpreted ironically. The entire model's behavior for non-midpoint utterances is captured by a single inequality: wp(opposite(u.toWeather)) > wp(u.toWeather).

Proof: L1_score at each matching world equals wp (the S1 values are deterministic — either 0 or 1 — so the prior passes through unchanged). Then the biconditional follows from policy_gt_of_score_gt and its contrapositive.
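Concretely, for a non-midpoint utterance u with literal world w* = u.toWeather, the two candidate L1 scores are bare prior values scaled by the strategy prior (our notation):

```latex
\mathrm{score}_{L_1}(w^{*}) = P(\text{literal})\, wp(w^{*}),
\qquad
\mathrm{score}_{L_1}(\mathrm{opposite}(w^{*})) = P(\text{ironic})\, wp(\mathrm{opposite}(w^{*}))
```

With the uniform strategy prior the common factor 1/2 cancels from the comparison, so the ironic reading wins exactly when wp(opposite(w*)) > wp(w*).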