@cite{bill-etal-2025} — DP Conjunction Complexity #
"Is DP conjunction always complex? The view from child Georgian and Hungarian" Semantics & Pragmatics 18, Article 5, 1-20.
Main Question #
@cite{mitrovic-sauerland-2014} claim DP conjunction universally decomposes into J (set intersection) + MU (subset) + ☉ (type-shifter). Combined with the Transparency Principle — children prefer 1-to-1 form-meaning mappings — this predicts that J-MU expressions (where all pieces are overt) should be easier for children to comprehend than J-only or MU-only expressions.
Experiment #
Act-out task: children and adults hear conjunctive sentences and manipulate objects to match. Two dependent variables: accuracy and sentence-played-n (replay count).
Key Findings #
- Georgian children: J-MU sentences required significantly more replays than J or MU sentences (opposite of prediction). No difference between J and MU.
- Hungarian: no significant sentence-type effects detected on either measure. (Null result — could reflect ceiling effects or insufficient power.)
- Adults: near-ceiling in both languages.
Theoretical Significance #
Results challenge both Mitrović & Sauerland's universal decomposition and alternative accounts.
Semantic Connection #
The M&S decomposition maps directly onto Montague/Conjunction.lean:
- J = genConj (Partee & Rooth's generalized conjunction / set intersection)
- MU = inclFunc (INCL schema / subset relation)
- ☉ = typeRaise (individual → singleton set / generalized quantifier)
Cross-linguistic conjunction strategy.
@cite{mitrovic-sauerland-2014} decompose DP conjunction into three semantic pieces: J (set intersection), MU (subset), ☉ (type-shifter). Languages vary in which pieces are overtly realized.
- jOnly : ConjunctionStrategy
Only J particle overt (e.g., English "and", Hungarian "és", Georgian "da")
- muOnly : ConjunctionStrategy
Only MU particles overt (e.g., Japanese "mo...mo", Hungarian "is...is", Georgian "-c...-c")
- jMu : ConjunctionStrategy
Both J and MU overt (e.g., Hungarian "is és...is", Georgian "-c da...-c")
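The three strategies can be sketched as a small inductive type. This is a minimal sketch: only the constructor names jOnly, muOnly, and jMu come from the listing above; the type's exact declaration in the source file may differ.

```lean
/-- Which pieces of the M&S decomposition a language pronounces
    in a given DP conjunction. -/
inductive ConjunctionStrategy where
  /-- Only the J particle is overt (English "and", Hungarian "és", Georgian "da"). -/
  | jOnly
  /-- Only MU particles are overt (Japanese "mo...mo", Hungarian "is...is",
      Georgian "-c...-c"). -/
  | muOnly
  /-- Both J and MU are overt (Hungarian "is és...is", Georgian "-c da...-c"). -/
  | jMu
  deriving DecidableEq, Repr
```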
Number of overt functional morphemes per strategy.
Under @cite{mitrovic-sauerland-2016}, the underlying structure always has 3 semantic pieces (J + MU₁ + MU₂). What varies is how many are pronounced.
Under @cite{mitrovic-sauerland-2016}, there are always 3 semantic pieces. The transparency ratio measures how many are overtly realized.
@cite{mitrovic-sauerland-2016} + Transparency Principle predicts: more overt morphemes → easier to acquire (closer to 1-to-1 form-meaning mapping).
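The counting and the prediction can be sketched as follows, assuming the ConjunctionStrategy type described above; the names overtMorphemeCount and predictedEasier are illustrative stand-ins, not necessarily those in the source file.

```lean
/-- Overt functional morphemes per strategy: the underlying structure
    always has 3 pieces (J + MU₁ + MU₂); what varies is pronunciation. -/
def overtMorphemeCount : ConjunctionStrategy → Nat
  | .jOnly  => 1  -- just "and" / "és" / "da"
  | .muOnly => 2  -- a MU particle on each conjunct
  | .jMu    => 3  -- J plus both MUs

/-- Transparency Principle prediction: more overt morphemes means a
    closer-to-1-to-1 form-meaning mapping, hence easier acquisition. -/
def predictedEasier (s t : ConjunctionStrategy) : Prop :=
  overtMorphemeCount s > overtMorphemeCount t

/-- The prediction Bill et al. test: jMu should be easier than jOnly.
    The Georgian replay data went the opposite way. -/
example : predictedEasier .jMu .jOnly := by decide
```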
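The particle entries below instantiate a record whose fields can be read off the displayed equations. A sketch, with the structure name Particle assumed:

```lean
structure Particle where
  language      : String
  form          : String
  gloss         : String
  /-- "J" or "MU" in the M&S decomposition. -/
  role          : String
  /-- True for clitics like Georgian -c; false for free morphemes like Hungarian is. -/
  boundMorpheme : Bool
  deriving Repr
```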
Georgian J particle
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.georgian_da = { language := "Georgian", form := "da", gloss := "and", role := "J", boundMorpheme := false }
Georgian MU particle (clitic)
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.georgian_c = { language := "Georgian", form := "-c", gloss := "MU/also", role := "MU", boundMorpheme := true }
Hungarian J particle
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.hungarian_es = { language := "Hungarian", form := "és", gloss := "and", role := "J", boundMorpheme := false }
Hungarian MU particle
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.hungarian_is = { language := "Hungarian", form := "is", gloss := "MU/also", role := "MU", boundMorpheme := false }
Both Georgian and Hungarian allow all three strategies. This is typologically rare — most languages have only one or two.
Key morphological difference: Georgian MU (-c) is a bound clitic, Hungarian MU (is) is a free morpheme. This may be relevant to the cross-linguistic difference in results (@cite{clark-2017}: free morphemes may be acquired more readily than bound).
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.georgianAdults = { language := "Georgian", group := Phenomena.Coordination.Studies.BillEtAl2025.Group.adult, n := 41, ageRange := none }
Equations
- Phenomena.Coordination.Studies.BillEtAl2025.hungarianAdults = { language := "Hungarian", group := Phenomena.Coordination.Studies.BillEtAl2025.Group.adult, n := 30, ageRange := none }
Age-accuracy correlation in Georgian children: medium positive. r(525) = 0.31, p < 0.001 (footnote 8).
Age-sentencePlayedN correlation in Georgian children: small negative. r(497) = -0.18, p < 0.001 (footnote 9). Older children needed fewer replays.
A single cell in the Group × SentenceType design.
- language : String
- group : Group
- sentenceType : ConjunctionStrategy
- accuracyPct : ℕ
Accuracy (percentage 0-100)
- nParticipants : ℕ
Number of participants
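A sketch of the cell record, assuming the Group and ConjunctionStrategy types defined earlier; field names follow the listing above.

```lean
/-- A single cell in the Group × SentenceType design. -/
structure DataCell where
  language      : String
  group         : Group
  sentenceType  : ConjunctionStrategy
  /-- Accuracy as a whole-number percentage (0-100). -/
  accuracyPct   : Nat
  nParticipants : Nat
  deriving Repr
```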
Georgian accuracy data (approximate from Figure 4). Adults near ceiling across all conditions. Children lower but no significant sentence-type effect on accuracy.
Result of a Likelihood Ratio Test comparing nested models.
We encode statistical test results as data, not as theorems about the underlying population. A non-significant result means the test did not detect an effect — not that no effect exists.
- effect : String
- df : ℕ
- chiSquared : Float
- pValue : Float
- significant : Bool
Whether p < .05 (conventional threshold)
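A sketch of this record, plus a sanity check that the stored flag agrees with the conventional threshold. The consistent helper is illustrative, not from the source.

```lean
/-- A Likelihood Ratio Test result, stored as data rather than as a
    theorem: `significant := false` records only that no effect was
    detected, not that none exists. -/
structure LRTResult where
  effect      : String
  df          : Nat
  chiSquared  : Float
  pValue      : Float
  /-- Whether p < .05 (conventional threshold). -/
  significant : Bool

/-- Illustrative sanity check: the stored flag should agree with
    the conventional .05 threshold. -/
def LRTResult.consistent (r : LRTResult) : Bool :=
  r.significant == decide (r.pValue < 0.05)
```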
Table 1: LRT results for Georgian accuracy.
Only group is significant — sentence-type effect NOT detected. NOTE: This is a null result. The act-out task allowed unlimited replays, which may have washed out accuracy differences (see Section 3.1.2).
Table 2: LRT results for Georgian sentence-played-n.
All effects significant — this is where the key finding emerges.
Pairwise comparison for sentence-played-n (Table 3). Tukey-adjusted p-values. Values on log scale, encoded as thousandths (e.g., -176 = -0.176) so that comparisons are decidable.
- group : Group
- contrast : String
- estimate_thou : ℤ
Estimate on log scale, in thousandths (-176 = -0.176)
- se_thou : ℕ
Standard error in thousandths
- df : ℕ
- tRatio_thou : ℤ
t-ratio in thousandths
- pValue_tenThou : ℕ
p-value in ten-thousandths (1 = 0.0001, 670 = 0.067)
- significant : Bool
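The fixed-point encoding keeps comparisons decidable without Float arithmetic in proofs. A sketch of the record, with the firstEasier helper as an illustrative addition:

```lean
/-- Pairwise comparison on the log scale, in integer thousandths and
    ten-thousandths so that comparisons stay decidable. -/
structure PairwiseComparison where
  group          : Group
  contrast       : String
  /-- Estimate on the log scale, in thousandths (-176 = -0.176). -/
  estimate_thou  : ℤ
  se_thou        : ℕ
  df             : ℕ
  tRatio_thou    : ℤ
  /-- p-value in ten-thousandths (1 = 0.0001, 670 = 0.067). -/
  pValue_tenThou : ℕ
  significant    : Bool

/-- Illustrative helper: a negative estimate means the first member of
    the contrast needed fewer replays, i.e. the second was harder. -/
def PairwiseComparison.firstEasier (c : PairwiseComparison) : Prop :=
  c.estimate_thou < 0
```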
Georgian children: J vs J-MU (p < .0001). Negative = J-MU harder.
Georgian children: J vs MU (p = .067, marginal).
Georgian children: J-MU vs MU (p < .01). Positive = J-MU harder.
Adults show no pairwise differences (all p > .6).
Table 4: LRT results for Hungarian accuracy.
No significant effects detected. NOTE: Null result — the Hungarian children, though younger, performed more like older children than the Georgian group did (see fn. 4).
Table 5: LRT results for Hungarian sentence-played-n.
Only group significant — sentence-type effect NOT detected. NOTE: Null result for sentence-type. Could reflect: (a) no actual difference, (b) insufficient power (n=25 children), (c) Hungarian MU (free morpheme "is") being easier than Georgian MU (bound clitic "-c"), washing out complexity effects.
Georgian children replayed J-MU sentences significantly more than J sentences.
This is the OPPOSITE of what @cite{mitrovic-sauerland-2016} + Transparency Principle predicts. The prediction was that J-MU (most transparent) should be EASIEST.
Negative estimate means J < J-MU in replay count (J-MU harder).
Georgian children replayed J-MU sentences significantly more than MU sentences.
Positive estimate means J-MU > MU in replay count (J-MU harder).
No significant difference between J and MU for Georgian children.
NOTE: This is a null result (p = .067, marginal). We record the non-significance but do NOT assert that J and MU are equally difficult.
The Transparency Principle: Learning is easier for overt and unambiguous (1-to-1) form-meaning mappings than for covert and/or conflated (many-to-1) mappings.
@cite{mitrovic-sauerland-2016} + Transparency Principle predicts J-MU is more transparent than both J-only and MU-only.
The Georgian sentence-played-n data contradicts this prediction: J-MU was HARDER (more replays), not easier. The significant pairwise comparisons go in the wrong direction.
Link to Phenomena/Gradability/Imprecision/FormMeaning.lean #
The Transparency Principle is the acquisition-side counterpart of the No Needless Manner Violations principle formalized in FormMeaning.lean.
Both principles relate form complexity to meaning:
- NNMV: More complex form → more precise meaning
- Transparency: More overt form-meaning mapping → easier acquisition
The andBoth datum in FormMeaning.lean is particularly relevant: "Ann and Bert" (J-only) vs. "both Ann and Bert" (≈ J+MU). "Both" adds precision (removes the homogeneity gap) — it is arguably an overt realization of MU/distributivity, paralleling the J-MU strategy.
Bill et al.'s finding complicates this picture: in Georgian, adding overt MU+J (maximum transparency) made comprehension HARDER, suggesting that morphological complexity can outweigh transparency benefits.
Link to Phenomena/AdditiveParticles/Data.lean #
Japanese "mo" (listed as an additive particle in AdditiveParticles/Data.lean) is the canonical MU particle in Mitrović & Sauerland's framework. In conjunction, "mo...mo" = MU-only strategy:
Taroo-mo Hanako-mo neta
Taro-MU Hanako-MU slept
"Both Taro and Hanako slept"
Similarly, Hungarian "is" and Georgian "-c" serve as both additive particles and conjunction MU particles — unifying two phenomena under a single morpheme.
Semantic Decomposition (@cite{mitrovic-sauerland-2016}) #
The M&S decomposition maps onto three operations already formalized:
| M&S piece | Semantic operation | Montague/Conjunction.lean |
|---|---|---|
| J | Set intersection | genConj at ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ |
| MU | Subset (INCL) | inclFunc / inclProperty |
| ☉ | {x} formation | typeRaise (e → ⟨⟨e,t⟩,t⟩) |
The full derivation of "Mary and Susan sleep":
- ☉(Mary) = λP.P(Mary) — typeRaise
- MU(☉(Mary), sleep) = {Mary} ⊆ ⟦sleep⟧ — inclFunc
- Similarly for Susan
- J combines the two MU-results via conjunction — genConj at type t
The result: {Mary} ⊆ ⟦sleep⟧ ∧ {Susan} ⊆ ⟦sleep⟧, which is equivalent to sleep(Mary) ∧ sleep(Susan).
Type-raising an entity and checking subset inclusion of its singleton is equivalent to applying the predicate directly.
This is the core of the M&S decomposition: the roundtrip through ☉ + MU + J recovers ordinary conjunction semantics.
Full M&S derivation: "DP₁ and DP₂ VP" via ☉ + MU + J yields the same result as Partee & Rooth's coordEntities.
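The equivalence stated above can be checked in a few lines. This is a minimal sketch assuming Mathlib's Set API, with sdot and mu as local stand-ins for typeRaise (☉) and inclFunc (MU):

```lean
import Mathlib.Data.Set.Basic

variable {E : Type*}

/-- ☉: type-raise an individual to its singleton set. -/
def sdot (x : E) : Set E := {x}

/-- MU: inclusion of a (singleton) set in a predicate's extension. -/
def mu (s P : Set E) : Prop := s ⊆ P

/-- The M&S roundtrip for "Mary and Susan sleep": conjoining (J) the two
    MU-results is equivalent to plain predication of each conjunct. -/
example (m s : E) (P : Set E) :
    (mu (sdot m) P ∧ mu (sdot s) P) ↔ (m ∈ P ∧ s ∈ P) := by
  simp [mu, sdot, Set.singleton_subset_iff]
```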