
Linglib.Phenomena.Nonliteral.Metaphor.KaoEtAl2014

Metaphor @cite{kao-etal-2014-metaphor} #

Kao, J. T., Bergen, L., & Goodman, N. D. (2014). "Formalizing the Pragmatics of Metaphor Understanding." Proceedings of the Annual Meeting of the Cognitive Science Society, 36, 719-724.

The Model #

Domain: "He is a whale" metaphor. 2 categories (whale, person) × 2³ feature combinations (large, graceful, majestic) = 16 world states. 2 utterances (= categories). 3 QUDs (= features).

Parameters: α = 3, P(whale) = 1/100, P(person) = 99/100
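As a rough numerical sketch of the domain (in Python rather than Lean; the feature probabilities below are invented placeholders, not the paper's Experiment 1b measurements), the 16 world states and the world prior look like this:

```python
from itertools import product

CATS = ["whale", "person"]                 # categories double as utterances
FEATS = ["large", "graceful", "majestic"]  # each QUD asks about one feature
P_CAT = {"whale": 1 / 100, "person": 99 / 100}

# Hypothetical feature probabilities, assumed independent per category;
# the actual model bakes in empirical counts from Experiment 1b.
P_FEAT = {"whale": {"large": 0.9, "graceful": 0.65, "majestic": 0.75},
          "person": {"large": 0.5, "graceful": 0.5, "majestic": 0.5}}

# 2 categories × 2^3 feature settings = 16 world states
WORLDS = [(c, fv) for c in CATS for fv in product([True, False], repeat=3)]

def world_prior(c, fv):
    """P(world) = P(category) · P(features | category)."""
    p = P_CAT[c]
    for f, v in zip(FEATS, fv):
        p *= P_FEAT[c][f] if v else 1 - P_FEAT[c][f]
    return p

print(len(WORLDS))                                             # 16
print(round(sum(world_prior(c, fv) for c, fv in WORLDS), 10))  # 1.0
```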

Qualitative Findings #

#   Finding                                            Theorem
1   P(person | "whale") > P(whale | "whale")           nonliteral
2   P(large=T | "whale") > P(large=F | "whale")        feature_large
3   P(graceful=T | "whale") > P(graceful=F | "whale")  feature_graceful
4   P(majestic=T | "whale") > P(majestic=F | "whale")  feature_majestic
5   Specific QUD → higher P(large=T) than vague QUD    context_sensitivity
6   P(person | "person") > P(whale | "person")         literal_correct

The 6 qualitative findings from @cite{kao-etal-2014-metaphor}. Any model of metaphor should formalize and prove all 6.

  • nonliteral : Finding

    Hearing "whale" about a person, the listener infers the referent is a person, not literally a whale.

  • feature_large : Finding

    Metaphor elevates the "large" feature above its prior.

  • feature_graceful : Finding

    Metaphor elevates the "graceful" feature above its prior.

  • feature_majestic : Finding

    Metaphor elevates the "majestic" feature above its prior.

  • context_sensitivity : Finding

    A specific QUD ("Is he large?") raises P(large=T) higher than a vague QUD ("What is he like?").

  • literal_correct : Finding

    Hearing "person", the listener correctly infers the referent is a person.
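The qualitative findings can be checked numerically against a toy reimplementation of the model. The sketch below is Python, not the Lean formalization; the feature priors are invented for illustration (the paper uses empirical priors from Experiment 1b), so only the directions of the inequalities matter:

```python
from itertools import product

CATS = ["whale", "person"]
FEATS = ["large", "graceful", "majestic"]
P_CAT = {"whale": 1 / 100, "person": 99 / 100}
ALPHA = 3
# Hypothetical feature priors, independent per category (NOT the paper's data)
P_FEAT = {"whale": {"large": 0.9, "graceful": 0.65, "majestic": 0.75},
          "person": {"large": 0.5, "graceful": 0.5, "majestic": 0.5}}

WORLDS = [(c, fv) for c in CATS for fv in product([True, False], repeat=3)]

def p_feats(c, fv):
    p = 1.0
    for f, v in zip(FEATS, fv):
        p *= P_FEAT[c][f] if v else 1 - P_FEAT[c][f]
    return p

def L0(w, u):
    """Literal listener: P(features | category) if category matches u, else 0."""
    c, fv = w
    return p_feats(c, fv) if c == u else 0.0

def S1(u, w, q):
    """Speaker: score(u) = (L0 mass on the QUD equivalence class of w)^ALPHA."""
    qi = FEATS.index(q)
    def score(utt):
        return sum(L0(wp, utt) for wp in WORLDS if wp[1][qi] == w[1][qi]) ** ALPHA
    return score(u) / sum(score(utt) for utt in CATS)

def L1(u, goal_prior):
    """Pragmatic listener: joint posterior over (world, goal) given utterance u."""
    joint = {}
    for (c, fv) in WORLDS:
        for q in FEATS:
            joint[((c, fv), q)] = P_CAT[c] * p_feats(c, fv) * goal_prior[q] * S1(u, (c, fv), q)
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

uniform = {q: 1 / 3 for q in FEATS}
post = L1("whale", uniform)

p_person = sum(p for ((c, _), _), p in post.items() if c == "person")
print(f"P(person | 'whale') = {p_person:.3f}")       # nonliteral: should exceed 0.5
for i, f in enumerate(FEATS):
    p_true = sum(p for ((_, fv), _), p in post.items() if fv[i])
    print(f"P({f}=T | 'whale') = {p_true:.3f}")      # feature findings: should exceed 0.5

lit = L1("person", uniform)
p_person_lit = sum(p for ((c, _), _), p in lit.items() if c == "person")
print(f"P(person | 'person') = {p_person_lit:.3f}")  # literal_correct
```

With these illustrative priors the inequalities come out in the predicted directions, but the exact margins depend on the priors; the formalization proves the findings against the Experiment 1b counts baked into its feature prior.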


Categories: whale (metaphor vehicle) and person (literal referent). Categories double as utterance types.

QUDs: which feature is the speaker trying to communicate?

World = category × large × graceful × majestic.

Feature prior P(large, graceful, majestic | category). Unnormalized counts (×10000) from Experiment 1b / memo code.

L0 meaning: P(features | category) when the category matches the utterance, 0 otherwise. The feature prior is baked into L0 following RSAConfig convention.

Sum L0 over the QUD equivalence class of w under goal q.
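A minimal sketch of these two pieces, the L0 meaning and its QUD projection, in Python with hypothetical feature priors (the library's own definitions are in Lean and use the Experiment 1b counts):

```python
from itertools import product

FEATS = ["large", "graceful", "majestic"]
# Hypothetical feature priors, independent per category (NOT the paper's data)
P_FEAT = {"whale": {"large": 0.9, "graceful": 0.65, "majestic": 0.75},
          "person": {"large": 0.5, "graceful": 0.5, "majestic": 0.5}}
WORLDS = [(c, fv) for c in P_FEAT for fv in product([True, False], repeat=3)]

def p_feats(c, fv):
    p = 1.0
    for f, v in zip(FEATS, fv):
        p *= P_FEAT[c][f] if v else 1 - P_FEAT[c][f]
    return p

def L0(w, u):
    # P(features | category) when the category matches the utterance, else 0
    c, fv = w
    return p_feats(c, fv) if c == u else 0.0

def project_L0(w, q, u):
    # sum L0 over the QUD equivalence class of w under goal q
    qi = FEATS.index(q)
    return sum(L0(wp, u) for wp in WORLDS if wp[1][qi] == w[1][qi])

w = ("person", (True, False, False))   # a large, ungraceful, unmajestic person
mass = project_L0(w, "large", "whale") # total L0 mass on large whales
print(round(mass, 6))
```

Note that under goal q only the q-th feature of w matters: the projection collapses every world that answers the QUD the same way.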
noncomputable def Phenomena.Nonliteral.Metaphor.KaoEtAl2014.cfg (goalPrior : Goal → ℝ) (hp : ∀ (g : Goal), 0 < goalPrior g) :

@cite{kao-etal-2014-metaphor} metaphor model, parametric in the goal prior.

S1 score is rpow(projected_L0, α), the paper's Eq. 5 without utterance cost. This directly encodes the paper's equations and lets rsa_predict handle the interval arithmetic.
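The S1 scoring rule can be illustrated as follows; the priors are again hypothetical placeholders, and rpow becomes ordinary Python exponentiation:

```python
from itertools import product

FEATS = ["large", "graceful", "majestic"]
ALPHA = 3
# Hypothetical feature priors (NOT the paper's measured values)
P_FEAT = {"whale": {"large": 0.9, "graceful": 0.65, "majestic": 0.75},
          "person": {"large": 0.5, "graceful": 0.5, "majestic": 0.5}}
WORLDS = [(c, fv) for c in P_FEAT for fv in product([True, False], repeat=3)]

def p_feats(c, fv):
    p = 1.0
    for f, v in zip(FEATS, fv):
        p *= P_FEAT[c][f] if v else 1 - P_FEAT[c][f]
    return p

def L0(w, u):
    c, fv = w
    return p_feats(c, fv) if c == u else 0.0

def project_L0(w, q, u):
    qi = FEATS.index(q)
    return sum(L0(wp, u) for wp in WORLDS if wp[1][qi] == w[1][qi])

def S1(u, w, q):
    # Eq. 5 without utterance cost: score(u) = projected_L0(u) ** ALPHA
    scores = {utt: project_L0(w, q, utt) ** ALPHA for utt in P_FEAT}
    return scores[u] / sum(scores.values())

w = ("person", (True, True, True))  # a large, graceful, majestic person
print(round(S1("whale", w, "large"), 3))
```

Because "whale" puts far more literal mass on large referents than "person" does, the speaker strongly prefers "whale" for a large referent under the large QUD, even though the referent is a person.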

Vague QUD condition: uniform goal prior ("What is he like?").

Specific QUD condition: large-biased goal prior ("Is he large?").
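The two conditions differ only in the goal prior. A numerical check of the context-sensitivity prediction, with the same hypothetical feature priors as above and an arbitrary 0.8/0.1/0.1 choice for "large-biased":

```python
from itertools import product

CATS = ["whale", "person"]
FEATS = ["large", "graceful", "majestic"]
P_CAT = {"whale": 1 / 100, "person": 99 / 100}
ALPHA = 3
# Hypothetical feature priors (NOT the paper's data)
P_FEAT = {"whale": {"large": 0.9, "graceful": 0.65, "majestic": 0.75},
          "person": {"large": 0.5, "graceful": 0.5, "majestic": 0.5}}
WORLDS = [(c, fv) for c in CATS for fv in product([True, False], repeat=3)]

def p_feats(c, fv):
    p = 1.0
    for f, v in zip(FEATS, fv):
        p *= P_FEAT[c][f] if v else 1 - P_FEAT[c][f]
    return p

def L0(w, u):
    c, fv = w
    return p_feats(c, fv) if c == u else 0.0

def S1(u, w, q):
    qi = FEATS.index(q)
    def score(utt):
        return sum(L0(wp, utt) for wp in WORLDS if wp[1][qi] == w[1][qi]) ** ALPHA
    return score(u) / sum(score(utt) for utt in CATS)

def L1(u, goal_prior):
    joint = {}
    for (c, fv) in WORLDS:
        for q in FEATS:
            joint[((c, fv), q)] = P_CAT[c] * p_feats(c, fv) * goal_prior[q] * S1(u, (c, fv), q)
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

def p_large(goal_prior):
    post = L1("whale", goal_prior)
    return sum(p for ((_, fv), _), p in post.items() if fv[0])

vague = {"large": 1 / 3, "graceful": 1 / 3, "majestic": 1 / 3}
specific = {"large": 0.8, "graceful": 0.1, "majestic": 0.1}  # arbitrary bias

print(f"vague:    P(large=T | 'whale') = {p_large(vague):.3f}")
print(f"specific: P(large=T | 'whale') = {p_large(specific):.3f}")
```

Shifting goal-prior mass onto the large QUD routes more of the listener's posterior through worlds whose large-feature value the speaker was trying to convey, raising P(large=T | "whale").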

nonliteral: The listener infers the referent is a person, not literally a whale.

feature_large: P(large=T | "whale") > P(large=F | "whale").

feature_graceful: P(graceful=T | "whale") > P(graceful=F | "whale").

feature_majestic: P(majestic=T | "whale") > P(majestic=F | "whale").

context_sensitivity: Under the specific QUD, P(large=T | "whale") is higher than under the vague QUD.

literal_correct: Hearing "person", the listener correctly infers the referent is a person.

Map each empirical finding to the RSA model prediction that accounts for it.


The RSA model accounts for all 6 empirical findings from @cite{kao-etal-2014-metaphor}.