Documentation

Linglib.Phenomena.Coordination.Studies.BillEtAl2025

@cite{bill-etal-2025} — DP Conjunction Complexity #

"Is DP conjunction always complex? The view from child Georgian and Hungarian" Semantics & Pragmatics 18, Article 5, 1-20.

Main Question #

@cite{mitrovic-sauerland-2014} claim that DP conjunction universally decomposes into J (set intersection) + MU (subset) + ☉ (type-shifter). Combined with the Transparency Principle — children prefer 1-to-1 form-meaning mappings — this predicts that J-MU expressions (where both J and MU are overt) should be easier for children to comprehend than J-only or MU-only expressions.

Experiment #

Act-out task: children and adults hear conjunctive sentences and manipulate objects to match. Two dependent variables (DVs): accuracy and sentence-played-n (replay count).

Key Findings #

  • Georgian children replayed J-MU sentences significantly more than J sentences (p < .0001) and more than MU sentences (p < .01), the opposite of the transparency prediction.
  • No sentence-type effect on accuracy was detected in either language.
  • For Hungarian replay counts, only the group effect was significant; no sentence-type effect was detected.
  • Georgian adults showed no pairwise differences (all p > .6).

Theoretical Significance #

Results challenge both Mitrović & Sauerland's universal decomposition and alternative accounts.

Semantic Connection #

The M&S decomposition maps directly onto Montague/Conjunction.lean:

Cross-linguistic conjunction strategy.

@cite{mitrovic-sauerland-2014} decompose DP conjunction into three semantic pieces: J (set intersection), MU (subset), ☉ (type-shifter). Languages vary in which pieces are overtly realized.

  • jOnly : ConjunctionStrategy

    Only J particle overt (e.g., English "and", Hungarian "és", Georgian "da")

  • muOnly : ConjunctionStrategy

    Only MU particles overt (e.g., Japanese "mo...mo", Hungarian "is...is", Georgian "-c...-c")

  • jMu : ConjunctionStrategy

    Both J and MU overt (e.g., Hungarian "is és...is", Georgian "-c da...-c")
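
A minimal sketch of how such a strategy type can be declared in Lean (the actual Linglib declaration may differ in namespacing and deriving clauses):

```lean
/-- Which of the M&S pieces a language realizes overtly in DP conjunction. -/
inductive ConjunctionStrategy where
  | jOnly   -- only J overt: English "and", Hungarian "és", Georgian "da"
  | muOnly  -- only MU overt: Japanese "mo...mo", Hungarian "is...is", Georgian "-c...-c"
  | jMu     -- both J and MU overt: Hungarian "is és...is", Georgian "-c da...-c"
  deriving DecidableEq, Repr
```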

Under @cite{mitrovic-sauerland-2016}, there are always 3 semantic pieces. The transparency ratio measures how many are overtly realized.
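
One way to make the ratio computable, sketched here under the assumption that ☉ is covert in all three strategies, reusing the integer-thousandths convention used elsewhere in this file (`overtPieces` and `transparencyRatio_thou` are illustrative names, not necessarily Linglib's):

```lean
/-- Pieces realized overtly, out of the three M&S pieces (J, MU, ☉). -/
def overtPieces : ConjunctionStrategy → Nat
  | .jOnly  => 1
  | .muOnly => 1
  | .jMu    => 2

/-- Transparency ratio in thousandths (integer-encoded, so comparisons stay decidable). -/
def transparencyRatio_thou (s : ConjunctionStrategy) : Nat :=
  overtPieces s * 1000 / 3

example : transparencyRatio_thou .jMu > transparencyRatio_thou .jOnly := by decide
```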

A conjunction particle in a specific language.

Georgian J particle

Georgian MU particle (clitic)

Hungarian J particle

Hungarian MU particle

Both Georgian and Hungarian allow all three strategies. This is typologically rare — most languages have only one or two.

Key morphological difference: Georgian MU (-c) is a bound clitic, Hungarian MU (is) is a free morpheme. This may be relevant to the cross-linguistic difference in results (@cite{clark-2017}: free morphemes may be acquired more readily than bound ones).
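
The clitic/free-morpheme contrast can be recorded directly on the particle data. A sketch with illustrative field names:

```lean
/-- A conjunction particle, tagged with its morphological status (sketch). -/
structure Particle where
  language : String
  form     : String
  isBound  : Bool  -- true for bound clitics, false for free morphemes

def georgianMu  : Particle := { language := "Georgian",  form := "-c", isBound := true }
def hungarianMu : Particle := { language := "Hungarian", form := "is", isBound := false }
```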

Age range for a participant group, in months.

  • minMonths :
  • maxMonths :
  • meanMonths :
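
The field types are elided in the rendered documentation above; assuming natural-number month counts, the record would look like:

```lean
/-- Age range for a participant group, in months (field types assumed to be Nat). -/
structure AgeRange where
  minMonths  : Nat
  maxMonths  : Nat
  meanMonths : Nat
```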
Participant group with demographics.

Age-accuracy correlation in Georgian children: medium positive. r(525) = 0.31, p < 0.001 (footnote 8).

Age-sentencePlayedN correlation in Georgian children: small negative. r(497) = -0.18, p < 0.001 (footnote 9). Older children needed fewer replays.

A single cell in the Group × SentenceType design.

Georgian accuracy data (approximated from Figure 4). Adults near ceiling across all conditions. Children lower, but no significant sentence-type effect on accuracy.

Result of a Likelihood Ratio Test comparing nested models.

We encode statistical test results as data, not as theorems about the underlying population. A non-significant result means the test did not detect an effect — not that no effect exists.
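
A sketch of such a data-only encoding, with illustrative field names and the integer p-value convention used elsewhere in this file:

```lean
/-- A likelihood-ratio-test result recorded as data, not as a population claim.
    `significant := false` records only that no effect was detected. -/
structure LRTResult where
  predictor      : String
  df             : Nat
  pValue_tenThou : Nat   -- p-value in ten-thousandths (e.g., 670 = 0.067)
  significant    : Bool
```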

Table 1: LRT results for Georgian accuracy.

Only group is significant — sentence-type effect NOT detected. NOTE: This is a null result. The act-out task allowed unlimited replays, which may have washed out accuracy differences (see Section 3.1.2).

Table 2: LRT results for Georgian sentence-played-n.

All effects significant — this is where the key finding emerges.

Pairwise comparison for sentence-played-n (Table 3). Tukey-adjusted p-values. Values on the log scale are encoded as integer thousandths (e.g., -176 = -0.176) so that comparisons are decidable.

  • group : Group
  • contrast : String
  • estimate_thou :

    Estimate on log scale, in thousandths (-176 = -0.176)

  • se_thou :

    Standard error in thousandths

  • df :
  • tRatio_thou :

    t-ratio in thousandths

  • pValue_tenThou :

    p-value in ten-thousandths (1 = 0.0001, 670 = 0.067)

  • significant : Bool
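
The point of the integer encoding is that sign and magnitude checks reduce to decidable `Int` comparisons. Using the docstring's example value (treated here as purely illustrative):

```lean
-- Illustrative estimate on the log scale, in thousandths (-176 = -0.176).
def exampleEstimate_thou : Int := -176

-- "The estimate is negative" is now a decidable proposition.
example : exampleEstimate_thou < 0 := by decide
```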
Georgian children: J vs J-MU (p < .0001). Negative = J-MU harder.

Georgian children: J vs MU (p = .067, marginal).

Georgian children: J-MU vs MU (p < .01). Positive = J-MU harder.

Adults show no pairwise differences (all p > .6).

Table 4: LRT results for Hungarian accuracy.

No significant effects detected. NOTE: Null result — the Hungarian children performed like older children relative to the Georgian group, despite being younger (see fn. 4).

Table 5: LRT results for Hungarian sentence-played-n.

Only group significant — sentence-type effect NOT detected. NOTE: Null result for sentence-type. Could reflect: (a) no actual difference, (b) insufficient power (n = 25 children), (c) Hungarian MU (the free morpheme "is") being easier than Georgian MU (the bound clitic "-c"), washing out complexity effects.

Georgian children replayed J-MU sentences significantly more than J sentences.

This is the OPPOSITE of what @cite{mitrovic-sauerland-2016} + the Transparency Principle predict. The prediction was that J-MU (most transparent) should be EASIEST.

Negative estimate means J < J-MU in replay count (J-MU harder).

Georgian children replayed J-MU sentences significantly more than MU sentences.

Positive estimate means J-MU > MU in replay count (J-MU harder).

No significant difference between J and MU for Georgian children.

NOTE: This is a null result (p = .067, marginal). We record the non-significance but do NOT assert that J and MU are equally difficult.

The Transparency Principle: Learning is easier for overt and unambiguous (1-to-1) form-meaning mappings than for covert and/or conflated (many-to-1) mappings.

The Georgian sentence-played-n data contradicts this prediction: J-MU was HARDER (more replays), not easier. The significant pairwise comparisons go in the wrong direction.

The Transparency Principle is the acquisition-side counterpart of the No Needless Manner Violations principle formalized in FormMeaning.lean.

Both principles relate form complexity to meaning:

The andBoth datum in FormMeaning.lean is particularly relevant: "Ann and Bert" (J-only) vs "both Ann and Bert" (≈ J+MU). "Both" adds precision (removes the homogeneity gap) — it's arguably an overt realization of MU/distributivity, paralleling the J-MU strategy.

Bill et al.'s finding complicates this picture: in Georgian, adding overt MU+J (maximum transparency) made comprehension HARDER, suggesting that morphological complexity can outweigh transparency benefits.

Japanese "mo" (listed as an additive particle in AdditiveParticles/Data.lean) is the canonical MU particle in Mitrović & Sauerland's framework. In conjunction, "mo...mo" = MU-only strategy:

Taroo-mo Hanako-mo neta
Taro-MU  Hanako-MU  slept
"Both Taro and Hanako slept"

Similarly, Hungarian "is" and Georgian "-c" serve as both additive particles and conjunction MU particles — unifying two phenomena under a single morpheme.

Semantic Decomposition (@cite{mitrovic-sauerland-2016}) #

The M&S decomposition maps onto three operations already formalized:

M&S piece | Semantic operation | Montague/Conjunction.lean
J         | Set intersection   | genConj at ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩
MU        | Subset (INCL)      | inclFunc / inclProperty
☉         | {x} formation      | typeRaise (e → ⟨⟨e,t⟩,t⟩)

The full derivation of "Mary and Susan sleep":

1. ☉(Mary) = λP.P(Mary) — typeRaise
2. MU(☉(Mary), sleep) = {Mary} ⊆ ⟦sleep⟧ — inclFunc
3. Similarly for Susan
4. J combines the two MU-results via conjunction — genConj at type t

The result: {Mary} ⊆ ⟦sleep⟧ ∧ {Susan} ⊆ ⟦sleep⟧ = sleep(Mary) ∧ sleep(Susan)

Type-raising an entity and checking subset inclusion of its singleton is equivalent to applying the predicate directly.
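
This equivalence can be stated as a small self-contained lemma, modeling sets as predicates (a sketch; Linglib's actual `typeRaise` and `inclFunc` definitions may differ):

```lean
variable {E : Type}

/-- ☉ as singleton formation: {x}. -/
def unitSet (x : E) : E → Prop := fun y => y = x

/-- MU as subset inclusion: A ⊆ B. -/
def incl (A B : E → Prop) : Prop := ∀ y, A y → B y

/-- The roundtrip: {x} ⊆ ⟦P⟧ ↔ P x. -/
theorem roundtrip (x : E) (P : E → Prop) : incl (unitSet x) P ↔ P x := by
  unfold incl unitSet
  constructor
  · intro h; exact h x rfl
  · intro hx y hyx; rw [hyx]; exact hx

/-- Two MU-results conjoined by J recover ordinary conjunction. -/
example (m s : E) (P : E → Prop) :
    (incl (unitSet m) P ∧ incl (unitSet s) P) ↔ (P m ∧ P s) := by
  rw [roundtrip, roundtrip]
```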

This is the core of the M&S decomposition: the roundtrip through ☉ + MU + J recovers ordinary conjunction semantics.

Full M&S derivation: "DP₁ and DP₂ VP" via ☉ + MU + J yields the same result as Partee & Rooth's coordEntities.