@cite{dale-reiter-1995} #
@cite{grice-1975}
Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions. Cognitive Science 19(2), 233–263.
Core Argument #
Four computational interpretations of Grice's Brevity maxim (Q2) are possible for referring expression generation (REG):
- Full Brevity: generate the shortest possible RE. NP-hard (reduction from minimum set cover).
- Greedy Heuristic: at each step, add the attribute that rules out the most remaining distractors. Polynomial, but still globally optimizing (each step compares all attributes).
- Local Brevity: no redundant attributes, but allows reordering.
- No Brevity: iterate through a fixed preference order, include any attribute that rules out ≥ 1 distractor. May include globally redundant attributes.
Psycholinguistic evidence (speakers routinely over-describe) and computational complexity (Full Brevity is NP-hard) support No Brevity. The paper presents the Incremental Algorithm (IA), which operationalizes Q1 (be informative) with No-Brevity Q2.
The Incremental Algorithm (Figure 6) #
Given target referent r, contrast set C, and preference-ordered attribute list P:
- For each attribute Aᵢ in P:
  - Get the target's value V for Aᵢ
  - If ⟨Aᵢ, V⟩ rules out any distractor in C, include it and remove those distractors from C
  - If C is empty, stop (success)
- Return the collected attribute-value pairs
Key properties: linear in |P|, no backtracking, no optimization. The preference order determines which attributes are included — a different order can produce a different (possibly longer) description.
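The loop above can be sketched as a short standalone Python re-implementation (the formalization itself is in Lean; this sketch uses plain dicts for entities and runs the §4.4 kennel example under both preference orders):

```python
# Illustrative re-implementation of the Incremental Algorithm (Figure 6,
# simplified): entities are plain dicts; a distractor is ruled out if it
# lacks an attribute or has a different value for it.

def incremental_algorithm(target, distractors, preferred):
    chosen, remaining = [], list(distractors)
    for attr in preferred:
        if attr not in target:
            continue
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                        # rules out >= 1 distractor
            chosen.append((attr, value))
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                    # success: all distractors gone
            break
    return chosen

# §4.4 kennel scene (basic-level values used directly):
obj1 = {"type": "dog", "colour": "black", "size": "small"}  # target
obj2 = {"type": "dog", "colour": "white", "size": "large"}
obj3 = {"type": "cat", "colour": "black", "size": "small"}

# colour before size: "the black dog"
print(incremental_algorithm(obj1, [obj2, obj3], ["type", "colour", "size"]))
# size before colour: "the small dog"
print(incremental_algorithm(obj1, [obj2, obj3], ["type", "size", "colour"]))
```

Swapping colour and size in the preference order changes the output, mirroring the §4.4 observation that the order alone determines which attributes appear.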
Verified Data #
Worked example (§4.4) verified against paper text.
Connection to RSA #
RSA's S1 score decomposes as: α · informativity(u) − cost(u).
- informativity = Q1 (Grice's "be informative enough")
- cost = Q2 pressure (Grice's "be brief")
The Brevity interpretations correspond to regimes in RSA's (α, cost) parameter space:
- Full Brevity ≈ α → ∞, cost > 0 (hard optimization with cost)
- No Brevity ≈ cost ≈ 0 (informativity only, no brevity pressure)
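These regimes can be illustrated with a toy S1 (a sketch, not linglib's implementation; the literal-listener table, the word-count cost, and the specific α values are assumptions for illustration):

```python
import math

# Toy RSA speaker: score(u) = alpha * log P_L0(target | u) - cost(u),
# normalized by softmax over utterances. Cost = word count (an assumption).

def s1(l0, target, alpha, cost_per_word):
    scores = {u: alpha * math.log(dist[target]) - cost_per_word * len(u.split())
              for u, dist in l0.items() if dist.get(target, 0.0) > 0.0}
    z = sum(math.exp(s) for s in scores.values())
    return {u: math.exp(s) / z for u, s in scores.items()}

# Literal listener for a scene where both modified NPs uniquely identify
# the target but the bare noun is ambiguous between two dogs.
l0 = {
    "dog":             {"target": 0.5, "other": 0.5},
    "black dog":       {"target": 1.0},
    "small black dog": {"target": 1.0},
}

no_brevity = s1(l0, "target", alpha=1.0, cost_per_word=0.0)
with_cost  = s1(l0, "target", alpha=5.0, cost_per_word=1.0)

# cost = 0: no brevity pressure, the two distinguishing NPs tie
print(no_brevity["black dog"] == no_brevity["small black dog"])
# cost > 0, high alpha: the shortest distinguishing NP wins
print(max(with_cost, key=with_cost.get))
```

With cost = 0 the speaker is indifferent among fully distinguishing utterances (No Brevity); with cost > 0 and α large, probability mass concentrates on the shortest distinguishing utterance (toward Full Brevity).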
The IA's PreferredAttributes list orders attributes by cognitive
accessibility. The noise discrimination ordering in RSA.Noise
(color > size > material) provides a related but distinct ordering
based on perceptual reliability.
The four computational interpretations of Grice's Brevity maxim (Q2), ordered from most to least constrained. All four satisfy Q1 (informativeness) when successful; they differ only in how strictly they enforce Q2 (brevity).
- fullBrevity : BrevityInterpretation
Generate the shortest possible RE. NP-hard by reduction from minimum set cover (Garey & Johnson, 1979).
- greedyHeuristic : BrevityInterpretation
At each step, add the attribute that rules out the most distractors. Polynomial but still globally optimizing.
- localBrevity : BrevityInterpretation
No redundant attributes (each must rule out ≥ 1 new distractor), but allows reordering to find a shorter description.
- noBrevity : BrevityInterpretation
Fixed preference order. Include any attribute that rules out ≥ 1 distractor. May include attributes that are globally redundant (because order is fixed, not optimized). Called the "Incremental Algorithm Interpretation" in the paper; the recommended interpretation.
Constraint strength: higher value = more constrained Q2. Full Brevity (3) is strictest, No Brevity (0) is weakest.
Equations
- Phenomena.Reference.Studies.DaleReiter1995.BrevityInterpretation.fullBrevity.strength = 3
- Phenomena.Reference.Studies.DaleReiter1995.BrevityInterpretation.greedyHeuristic.strength = 2
- Phenomena.Reference.Studies.DaleReiter1995.BrevityInterpretation.localBrevity.strength = 1
- Phenomena.Reference.Studies.DaleReiter1995.BrevityInterpretation.noBrevity.strength = 0
An attribute in the REG knowledge base. The paper's "type" attribute
(head noun, e.g., "dog") is distinguished from modifier attributes
(adjectives like colour, size), which map to PropertyDomain.
- headNoun : REGAttribute
  Head noun type at the basic level (e.g., "dog", "cat"). The paper's BasicLevelValue function maps species-level types (chihuahua, siamese-cat) to basic-level types (dog, cat); we use basic-level values directly.
- modifier (d : Core.PropertyDomain) : REGAttribute
  Modifying property (colour, size, material, ...).
A knowledge base entity: attribute-value pairs. Values are strings for generality (the paper uses a subsumption taxonomy on values; we simplify to flat strings).
- attrs : List (REGAttribute × String)
Look up an attribute's value for an entity.
Does an attribute-value pair rule out a distractor? A distractor is ruled out if it either lacks the attribute entirely or has a different value for it.
The Incremental Algorithm (Figure 6, simplified).
Iterates through the preference-ordered attribute list. For each attribute, if the target has a value and that value rules out ≥ 1 remaining distractor, include it and remove those distractors. Stop when all distractors are eliminated or attributes are exhausted.
Simplifications vs. the paper:
- No FindBestValue (subsumption taxonomy on values)
- No UserKnows (epistemic accessibility filter)
- No BasicLevelValue (Rosch basic-level categories; we use basic-level values directly in entity definitions)
- No forced head noun inclusion (the paper always includes a type attribute; we include it only when discriminating)
Equations
- Phenomena.Reference.Studies.DaleReiter1995.incrementalAlgorithm target distractors preferred = Phenomena.Reference.Studies.DaleReiter1995.incrementalAlgorithm.go target preferred distractors []
Equations
- Phenomena.Reference.Studies.DaleReiter1995.incrementalAlgorithm.go target [] x✝¹ x✝ = x✝
- Phenomena.Reference.Studies.DaleReiter1995.incrementalAlgorithm.go target x✝¹ [] x✝ = x✝
Did the IA succeed? All distractors are ruled out by the result.
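A minimal Python sketch of that success check (an illustrative re-statement, not the Lean definition): a description succeeds when every distractor is ruled out by at least one chosen attribute-value pair.

```python
def ia_succeeds(description, distractors):
    """True iff every distractor either lacks some chosen attribute
    or carries a different value for it."""
    return all(any(d.get(attr) != value for attr, value in description)
               for d in distractors)

# "the black dog" succeeds against a white dog and a black cat:
print(ia_succeeds([("type", "dog"), ("colour", "black")],
                  [{"type": "dog", "colour": "white"},
                   {"type": "cat", "colour": "black"}]))
```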
Three objects in a kennel. The paper uses species-level types
(chihuahua, siamese-cat) internally, but BasicLevelValue maps these
to basic-level types (dog, cat) for the referring expression. We use
the basic-level values directly.
Object1: a small black dog (TARGET). Underlying species: chihuahua; BasicLevelValue = "dog".
Object2: a large white dog. Underlying species: chihuahua; BasicLevelValue = "dog".
Object3: a small black cat. Underlying species: siamese-cat; BasicLevelValue = "cat".
P = {type, colour, size, ...} (§4.4). The paper lists colour before size in the preference order.
§4.4: The IA produces "the black dog" — type=dog rules out Object3 (cat ≠ dog); colour=black rules out Object2 (white ≠ black). Size is never reached.
§4.4: "if P had been {type, size, colour, ...} instead of {type, colour, size, ...}, MakeReferringExpression would have returned {⟨type, dog⟩, ⟨size, small⟩} instead." The preference order determines which attributes are included.
Both preference orders succeed — the IA identifies the referent regardless of attribute ordering (in this example).
The IA can produce non-minimal descriptions because it processes attributes in a fixed order. An attribute included early may become globally redundant once a later attribute is also included.
Target: a red plastic cup.
Distractor 1: a blue glass cup.
Distractor 2: a blue plastic cup.
With [type, material, colour], the IA produces {material=plastic, colour=red} — 2 modifier attributes.
With [type, colour, material], the IA produces {colour=red} alone — 1 modifier attribute. Colour=red rules out BOTH distractors at once (both are blue), so material is never needed.
The material-first result includes a globally redundant attribute: colour=red alone suffices, but the IA also includes material=plastic because it was processed first and ruled out cup2. This is the No-Brevity regime — locally useful attributes are kept even when globally unnecessary.
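The cup example can be reproduced with a standalone Python sketch (an illustrative re-implementation, not the Lean code), showing how processing order yields a globally redundant attribute:

```python
# Non-minimality example: red plastic cup target, blue glass cup and
# blue plastic cup distractors; only the preference order differs.

def incremental_algorithm(target, distractors, preferred):
    chosen, remaining = [], list(distractors)
    for attr in preferred:
        if attr not in target or not remaining:
            continue
        value = target[attr]
        kept = [d for d in remaining if d.get(attr) == value]
        if len(kept) < len(remaining):       # rules out >= 1 distractor
            chosen.append((attr, value))
            remaining = kept
    return chosen

cup  = {"type": "cup", "colour": "red",  "material": "plastic"}  # target
cup1 = {"type": "cup", "colour": "blue", "material": "glass"}
cup2 = {"type": "cup", "colour": "blue", "material": "plastic"}

# material first: material=plastic rules out cup1, then colour=red rules
# out cup2 -- two modifiers, of which material is globally redundant
print(incremental_algorithm(cup, [cup1, cup2], ["type", "material", "colour"]))
# colour first: colour=red rules out both distractors at once
print(incremental_algorithm(cup, [cup1, cup2], ["type", "colour", "material"]))
```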
The constraint hierarchy is strict: Full Brevity > Greedy Heuristic > Local Brevity > No Brevity (FB > GH > LB > NB).
No Brevity targets Q2: it weakens the "don't over-inform" sub-maxim, not Q1. The IA still enforces Q1 — each included attribute must rule out at least one distractor.
The paper's preference order places colour before size among modifier attributes. This aligns with the RSA noise discrimination ordering: colour (0.98) has higher discrimination than size (0.60), so colour modifiers provide more signal to the L0 listener. Higher-signal attributes are both more preferred (D&R) and more discriminating (RSA Noise).
The full discrimination ordering: colour > size > material. This predicts that speakers should prefer to include colour modifiers (high signal) over material modifiers (low signal), aligning with the empirical finding that colour is used redundantly more than material.
The IA's modifier attributes map to PropertyDomain, connecting
the classical REG representation to linglib's type infrastructure.
This means noise parameters, comparison-class properties, and
cross-study data are all accessible for IA modifier attributes.
The IA and RSA S1 solve the same problem — producing a referring expression that identifies a target among distractors — but via different mechanisms:
- IA: greedy, deterministic, fixed attribute order, no cost
- RSA S1: probabilistic, soft-maximizes informativity − cost
Both decompose into Q1 (informativity) and Q2 (brevity):
| Framework | Q1 | Q2 |
|---|---|---|
| D&R IA | include if discriminating | preference order |
| RSA S1 | α · log P_L0(w \| u) | − cost(u) |
When RSA cost = 0, S1 has no brevity pressure, corresponding to No Brevity. When cost > 0, S1 penalizes longer utterances, moving toward Full Brevity as α → ∞.