# @cite{israel-2001}: Minimizers, Maximizers, and the Rhetoric of Scalar Reasoning
Formalizes the core contributions of Israel's Scalar Model of Polarity:
- The 2×2 taxonomy (Figure 1): polarity items classified by scalar value (high/low) × rhetorical force (emphatic/attenuating)
- Inverted polarity items (§3, Figure 3): maximizer NPIs (wild horses, all the tea in China) and minimizer PPIs (at the drop of a hat, for a pittance) — items whose scalar value is opposite to what the basic Scalar Model predicts
- The thematic resolution (§4): inversion tracks propositional role — facilitating roles (stimulus, instrument, reward) produce inverted items, impeding roles (patient, theme, resource) produce canonical items
- The pecuniary paradox: a red cent (NPI, resource = impeding) vs for peanuts (PPI, reward = facilitating) — same monetary domain, different propositional roles
## Connection to linglib infrastructure
- `ScalarValue`, `Canonicity`, `LikelihoodEffect` are defined in `Core/Lexical/PolarityItem.lean`
- Inverted items are added to `Fragments/English/PolarityItems.lean`

`LikelihoodEffect` is a propositional-role concept, not a theta-role function. It connects to proto-role entailments (Dowty 1991) via bridge theorems below, but is independently defined: the relevant distinction is how a participant affects event likelihood, which cross-cuts traditional theta labels.
The basic Scalar Model predicts four cells:
| | Emphatic | Attenuating |
|---|---|---|
| NPI | low: a wink, an inch | high: much, long |
| PPI | high: tons, utterly | low: sorta, rather |
Emphatic items license maximally informative interpretations; attenuating items license minimally informative interpretations. NPI contexts are scale-reversing (DE); PPI contexts are scale-preserving (UE).
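The four-cell prediction can be sketched in Lean. The types and function below are illustrative assumptions, not linglib's actual definitions:

```lean
-- Illustrative sketch of the basic Scalar Model's 2×2 prediction.
-- These names are assumptions, not linglib's actual definitions.
inductive ScalarValue | low | high
inductive RhetoricalForce | emphatic | attenuating
inductive PolarityType | npi | ppi

/-- In DE (scale-reversing) contexts, low values are maximally informative,
so NPIs pair low with emphasis; UE (scale-preserving) contexts mirror this. -/
def predictedForce : PolarityType → ScalarValue → RhetoricalForce
  | .npi, .low  => .emphatic
  | .npi, .high => .attenuating
  | .ppi, .high => .emphatic
  | .ppi, .low  => .attenuating

example : predictedForce .npi .low = .emphatic := rfl  -- "a wink"
```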
A polarity item datum with its Israel 2001 classification.
"I didn't sleep a wink." — canonical emphatic NPI (low, impeding)
"She didn't budge an inch." — canonical emphatic NPI (low, impeding)
Equations
- `Phenomena.Polarity.Studies.Israel2001.didntBudge = { item := Fragments.English.PolarityItems.budgeAnInch, sentence := "She didn't budge an inch.", grammatical := true }`
"She is insanely good-looking." — canonical emphatic PPI (high)
"She's sorta clever." — canonical attenuating PPI (low)
"He's not all that clever." — canonical attenuating NPI (high)
Inverted items break the simple correlation between scalar value and polarity type. They are explained by propositional role (§4).
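The §4 generalization can be stated as a one-line function. `LikelihoodEffect` and `Canonicity` exist in `Core/Lexical/PolarityItem.lean`, but the constructor names and the function here are a hedged sketch, not the library's definitions:

```lean
-- Sketch only: constructor names and `canonicityOf` are assumptions.
inductive LikelihoodEffect | facilitating | impeding
inductive Canonicity | canonical | inverted

/-- Inversion tracks propositional role: participants that facilitate
event realization invert the basic Scalar Model's prediction. -/
def canonicityOf : LikelihoodEffect → Canonicity
  | .impeding     => .canonical   -- patient, theme, resource
  | .facilitating => .inverted    -- stimulus, instrument, reward
```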
"Wild horses couldn't keep me away." — inverted emphatic NPI (high, facilitating)
Equations
- `Phenomena.Polarity.Studies.Israel2001.wildHorsesDatum = { item := Fragments.English.PolarityItems.wildHorses, sentence := "Wild horses couldn't keep me away.", grammatical := true }`
"I wouldn't do it for all the tea in China." — inverted emphatic NPI (high, facilitating)
"I wouldn't touch it with a ten-foot pole." — inverted emphatic NPI (high, facilitating)
Equations
- `Phenomena.Polarity.Studies.Israel2001.tenFootPoleDatum = { item := Fragments.English.PolarityItems.aTenFootPole, sentence := "I wouldn't touch it with a ten-foot pole.", grammatical := true }`
"Godfrey is scared of his own shadow." — inverted emphatic PPI (low, facilitating)
"You could have knocked me over with a feather." — inverted emphatic PPI (low, facilitating)
"We'll be back in a jiffy." — inverted emphatic PPI (low, facilitating)
Equations
- `Phenomena.Polarity.Studies.Israel2001.jiffyDatum = { item := Fragments.English.PolarityItems.inAJiffy, sentence := "We'll be back in a jiffy.", grammatical := true }`
"He got Madonna to play for peanuts." — inverted emphatic PPI (low, facilitating)
Equations
- `Phenomena.Polarity.Studies.Israel2001.peanutsDatum = { item := Fragments.English.PolarityItems.forAPittance, sentence := "He got Madonna to play for peanuts.", grammatical := true }`
The pecuniary paradox (§3, examples 15–16): both a red cent and for peanuts denote small monetary values, but a red cent is an NPI and for peanuts is a PPI. The resolution: they occupy different propositional roles.
- a red cent = Resource (what you spend) → impeding → canonical NPI
- for peanuts = Reward (what you gain) → facilitating → inverted PPI
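The contrast can be made mechanical. Everything below (`PropRole`, `roleEffect`, the constructor names) is an illustrative encoding, not linglib's API:

```lean
-- Illustrative encoding of the pecuniary paradox; all names are assumptions.
inductive PropRole | resource | reward
inductive LikelihoodEffect | facilitating | impeding

/-- Same monetary scale, different propositional roles. -/
def roleEffect : PropRole → LikelihoodEffect
  | .resource => .impeding      -- "a red cent": what you spend
  | .reward   => .facilitating  -- "for peanuts": what you gain
```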
"He won't spend a red cent on your wedding." — canonical NPI, resource/expense
"He got Madonna to play for peanuts." — inverted PPI, reward
All items from this paper with full classification.
@cite{israel-2001} §4 discusses Dowty's proto-roles (fn. 6) as a possible basis for the canonical/inverted distinction. The connection to `EntailmentProfile` is:
- Proto-Agent entailments (causation, volition, movement, independent existence) → participant facilitates event realization
- Proto-Patient entailments (change of state, incremental theme, causally affected, stationary) → participant impedes event realization (bigger obstacle → less likely)

This is NOT a function on `ThetaRole` (which is a derived convenience label in linglib). Instead, `LikelihoodEffect` is an independent propositional-role concept that correlates with proto-role entailments but cross-cuts theta labels in cases like the pecuniary paradox (where both arguments may be "themes" in a traditional analysis).
Proto-Agent dominance predicts facilitating role.
If an argument position has more P-Agent entailments than P-Patient entailments, the participant tends to facilitate event realization. This is a heuristic, not a theorem — the pecuniary paradox shows that propositional role can diverge from proto-role counts.
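The heuristic might be rendered as a simple count comparison. The field and function names below are assumptions drawn from Dowty's entailment lists, not read off linglib's `EntailmentProfile`:

```lean
-- Sketch of the Proto-Agent dominance heuristic; field and function
-- names are assumptions about linglib's EntailmentProfile.
structure EntailmentProfile where
  causation volition movement independentExistence : Bool
  changeOfState incrementalTheme causallyAffected stationary : Bool

def pAgentCount (p : EntailmentProfile) : Nat :=
  [p.causation, p.volition, p.movement, p.independentExistence].count true

def pPatientCount (p : EntailmentProfile) : Nat :=
  [p.changeOfState, p.incrementalTheme, p.causallyAffected, p.stationary].count true

/-- Heuristic only: more P-Agent than P-Patient entailments suggests
a facilitating role. The pecuniary paradox shows this can fail. -/
def suggestsFacilitating (p : EntailmentProfile) : Bool :=
  pPatientCount p < pAgentCount p
```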
Fauconnier (1975b) noted that perception verbs allow dual scalar readings. "Eve didn't hear even the faintest noise" and "Eve didn't hear even the loudest noise" are both emphatic, but use different scales:
- faintest: existential scale (ranking stimuli by likely existence) → canonical impeding role
- loudest: perceptual-ability scale (ranking experiencers by acuity) → inverted facilitating role
The dual reading arises because perception is bicausal: it depends both on the stimulus's salience AND the perceiver's acuity.
Scale type for the ambiguous-superlative phenomenon
- existential : PerceptionScaleType
- perceptualAbility : PerceptionScaleType
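Assuming the two scale types line up with `LikelihoodEffect` as the prose says, the mapping is a one-liner. The function name and constructor names are assumptions:

```lean
-- Sketch: maps Fauconnier's two perception scales to likelihood effects.
-- `LikelihoodEffect` constructors are assumed to be named as in the prose.
def scaleToEffect : PerceptionScaleType → LikelihoodEffect
  | .existential       => .impeding      -- "faintest": ranks stimuli by likely existence
  | .perceptualAbility => .facilitating  -- "loudest": ranks experiencers by acuity
```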
- sentence : String
- superlative : String
- scaleType : PerceptionScaleType
- likelihoodEffect : Core.Lexical.PolarityItem.LikelihoodEffect
- notes : String
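An instance of the structure above might look like this. The structure name `PerceptionDatum` is a guess; only the fields and the classification are from the docs:

```lean
-- Hypothetical instance; the structure's actual name is not shown above.
def faintestNoiseDatum : PerceptionDatum :=
  { sentence := "Eve didn't hear even the faintest noise."
  , superlative := "faintest"
  , scaleType := .existential
  , likelihoodEffect := .impeding
  , notes := "existential scale: stimuli ranked by likely existence" }
```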
@cite{israel-2001} §2 connects the Scalar Model to the Fauconnier-Ladusaw tradition of monotonicity-based licensing:
- Scale-reversing contexts (NPI-licensing): inferences run from high to low values. In formal terms, these are downward entailing (DE) contexts — Mathlib's `Antitone`, linglib's `IsDE`.
- Scale-preserving contexts (PPI-licensing): inferences run from low to high values. In formal terms, these are upward entailing (UE) contexts — Mathlib's `Monotone`, linglib's `IsUE`.
Israel's key departure from Ladusaw: the relevant inferences need not be strictly logical — they can be pragmatic entailments within a scalar model. This is why the Scalar Model can handle cases that pure monotonicity misses.
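The bridge to monotonicity can be stated directly. The definitions below are a sketch, assuming propositions are preordered by entailment; linglib's actual `IsDE`/`IsUE` may be defined differently:

```lean
import Mathlib.Order.Monotone.Basic

-- Sketch: DE/UE as Antitone/Monotone over entailment preorders.
-- These definitions are illustrative stand-ins for linglib's IsDE/IsUE.
def IsDE {α β : Type*} [Preorder α] [Preorder β] (f : α → β) : Prop :=
  ∀ ⦃x y⦄, x ≤ y → f y ≤ f x

def IsUE {α β : Type*} [Preorder α] [Preorder β] (f : α → β) : Prop :=
  ∀ ⦃x y⦄, x ≤ y → f x ≤ f y

-- Definitionally the same as Mathlib's notion:
example {α β : Type*} [Preorder α] [Preorder β] (f : α → β) :
    IsDE f ↔ Antitone f := Iff.rfl
```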
Israel's "scale-reversing" corresponds to formal DE (= Antitone). This is the bridge between the Scalar Model (pragmatic) and the Fauconnier-Ladusaw tradition (logical).
- reversing : ScaleDirection
- preserving : ScaleDirection
Map from polarity type to expected scale direction in licensing contexts.
Equations
- `Phenomena.Polarity.Studies.Israel2001.expectedScaleDirection Core.Lexical.PolarityItem.PolarityType.npiWeak = some Phenomena.Polarity.Studies.Israel2001.ScaleDirection.reversing`
- `Phenomena.Polarity.Studies.Israel2001.expectedScaleDirection Core.Lexical.PolarityItem.PolarityType.npiStrong = some Phenomena.Polarity.Studies.Israel2001.ScaleDirection.reversing`
- `Phenomena.Polarity.Studies.Israel2001.expectedScaleDirection Core.Lexical.PolarityItem.PolarityType.npi_fci = some Phenomena.Polarity.Studies.Israel2001.ScaleDirection.reversing`
- `Phenomena.Polarity.Studies.Israel2001.expectedScaleDirection Core.Lexical.PolarityItem.PolarityType.ppi = some Phenomena.Polarity.Studies.Israel2001.ScaleDirection.preserving`
- `Phenomena.Polarity.Studies.Israel2001.expectedScaleDirection Core.Lexical.PolarityItem.PolarityType.fci = none`