CNPC conditions (Experiment 1) #
Bare wh + definite island-forming NP: worst CNPC condition. "I saw who Emma doubted the report that we had captured ___"
Equations
- Phenomena.FillerGap.Compare.cnpc_bare_def = { locality := 8, boundaries := 1, referentialLoad := 2, ease := 0 }
Instances For
Which-N + indefinite island-forming NP: best CNPC condition. "I saw which convict Emma doubted a report that we had captured ___"
Equations
- Phenomena.FillerGap.Compare.cnpc_which_indef = { locality := 8, boundaries := 1, referentialLoad := 1, ease := 2 }
Instances For
Non-island baseline (extraction from a plain that-clause, no island): "I saw who Emma doubted that ___"
Equations
- Phenomena.FillerGap.Compare.cnpc_baseline = { locality := 5, boundaries := 0, referentialLoad := 0, ease := 0 }
Instances For
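The rendered equations above pin down a small record of cost dimensions. A minimal self-contained sketch, assuming Nat-valued fields as shown (the actual definitions live in Phenomena.FillerGap.Compare and may differ in detail; the field glosses are my reading of the surrounding prose):

```lean
/-- Sketch of the profile type implied by the rendered equations above. -/
structure ProcessingProfile where
  locality        : Nat  -- filler-gap distance cost
  boundaries      : Nat  -- clause boundaries crossed
  referentialLoad : Nat  -- discourse referents held in memory
  ease            : Nat  -- retrieval facilitation from a rich filler
  deriving Repr, DecidableEq

def cnpc_bare_def : ProcessingProfile :=
  { locality := 8, boundaries := 1, referentialLoad := 2, ease := 0 }

def cnpc_which_indef : ProcessingProfile :=
  { locality := 8, boundaries := 1, referentialLoad := 1, ease := 2 }

def cnpc_baseline : ProcessingProfile :=
  { locality := 5, boundaries := 0, referentialLoad := 0, ease := 0 }
```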
Wh-island conditions (Experiment 2) #
Bare wh into wh-island: "Who did Albert learn whether they dismissed ___"
Equations
- Phenomena.FillerGap.Compare.whIsland_bare = { locality := 7, boundaries := 1, referentialLoad := 1, ease := 0 }
Instances For
Which-N into wh-island: "Which employee did Albert learn whether they dismissed ___"
Equations
- Phenomena.FillerGap.Compare.whIsland_which = { locality := 7, boundaries := 1, referentialLoad := 1, ease := 2 }
Instances For
Island conditions as a sum type, over which the typeclass instances below are defined.
- cnpcBareDef : IslandCondition
- cnpcWhichIndef : IslandCondition
- cnpcBaseline : IslandCondition
- whIslandBare : IslandCondition
- whIslandWhich : IslandCondition
Instances For
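The constructor list above corresponds to a plain enumeration. A minimal sketch; for an enum like this, the derived BEq instance compares constructor indices, which is exactly the ctorIdx equation shown in the rendered output:

```lean
/-- Sketch of the condition sum type, from the constructor list above. -/
inductive IslandCondition
  | cnpcBareDef
  | cnpcWhichIndef
  | cnpcBaseline
  | whIslandBare
  | whIslandWhich
  deriving Repr, BEq, DecidableEq
```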
Equations
- One or more equations did not get rendered due to their size.
Instances For
Equations
- Phenomena.FillerGap.Compare.instBEqIslandCondition.beq x✝ y✝ = (x✝.ctorIdx == y✝.ctorIdx)
Instances For
Equations
- One or more equations did not get rendered due to their size.
Complex fillers reduce processing difficulty in CNPC. This is the filler complexity paradox: more syntactic material in the filler makes the island easier to process, contrary to any account where cost increases monotonically with phrase size.
Pareto: cnpc_which_indef is easier than cnpc_bare_def because it has lower referentialLoad (1 < 2) and higher ease (2 > 0), with locality and boundaries equal.
Complex fillers reduce processing difficulty in wh-islands.
Pareto: whIsland_which is easier than whIsland_bare because it has higher ease (2 > 0), with all other dimensions equal.
Worst CNPC condition is harder than baseline.
Pareto: cnpc_bare_def is worse on locality (8 > 5), boundaries (1 > 0), and referentialLoad (2 > 0), with ease equal.
Worst CNPC condition (bare-def) is strictly harder than best (which-indef).
Pareto: bare-def has higher referentialLoad (2 > 1) and lower ease (0 < 2), with locality and boundaries equal.
Which-indef CNPC vs baseline is incomparable under Pareto.
Which-indef is worse on locality (8 > 5), boundaries (1 > 0), and referentialLoad (1 > 0), but better on ease (2 > 0). The trade-off between distance costs and retrieval facilitation is genuine — Pareto honestly reports it as incomparable rather than forcing a cardinal aggregate.
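The Pareto claims in this section reduce to a decidable dominance check. A minimal self-contained sketch (the relation name `dominates` and its exact form are assumptions; the convention, per the prose above, is that the three cost fields are better when lower and ease is better when higher):

```lean
structure ProcessingProfile where
  locality        : Nat
  boundaries      : Nat
  referentialLoad : Nat
  ease            : Nat

/-- `p` Pareto-dominates `q` (p is strictly easier): no worse on any
dimension and strictly better on at least one. -/
abbrev dominates (p q : ProcessingProfile) : Prop :=
  p.locality ≤ q.locality ∧ p.boundaries ≤ q.boundaries ∧
  p.referentialLoad ≤ q.referentialLoad ∧ q.ease ≤ p.ease ∧
  (p.locality < q.locality ∨ p.boundaries < q.boundaries ∨
   p.referentialLoad < q.referentialLoad ∨ q.ease < p.ease)

def bareDef    : ProcessingProfile := ⟨8, 1, 2, 0⟩
def whichIndef : ProcessingProfile := ⟨8, 1, 1, 2⟩
def baseline   : ProcessingProfile := ⟨5, 0, 0, 0⟩

example : dominates whichIndef bareDef := by decide  -- which-indef easier
example : dominates baseline bareDef := by decide    -- baseline easier than bare-def
example : ¬ dominates whichIndef baseline ∧
          ¬ dominates baseline whichIndef := by decide  -- genuine trade-off
```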
A nonstructural manipulation that changes island acceptability without altering the island configuration.
Each of the three accounts (competence, processing, discourse) makes a prediction about whether the manipulation should affect acceptability.
- description : String
- competencePredictsDifference : Bool
Does any competence theory predict an acceptability difference?
- processingPredictsDifference : Bool
Does the processing account predict a difference?
- discoursePredictsDifference : Bool
Does the discourse/backgroundedness account predict a difference?
- differenceObserved : Bool
Is a difference actually observed?
- significance : String
Statistical significance (p-value description)
Instances For
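The field list above describes a flat record of predictions and outcomes. A minimal sketch with the field names copied verbatim:

```lean
/-- Sketch of the manipulation record, from the field list above. -/
structure Manipulation where
  description                  : String
  competencePredictsDifference : Bool
  processingPredictsDifference : Bool
  discoursePredictsDifference  : Bool
  differenceObserved           : Bool
  significance                 : String
  deriving Repr
```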
Equations
- One or more equations did not get rendered due to their size.
Instances For
Filler complexity in CNPC (Experiment 1, §5). which-N vs bare wh — same island structure, different filler.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Filler complexity in wh-islands (Experiment 2, §6). which-N vs bare wh — same island structure.
Equations
- One or more equations did not get rendered due to their size.
Instances For
NP type in CNPC (Experiment 1, §5). Definite vs indefinite island-forming NP — same CNPC configuration.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Filler complexity in adjunct islands (Experiment 3, §7). Complex vs simple temporal adjunct — same wh-island structure.
Equations
- One or more equations did not get rendered due to their size.
Instances For
MoS island manipulations #
Prosodic focus on embedded object in MoS islands (Experiments 1, 2a, 3b). Focus changes information structure without changing syntax or processing load.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Say + manner adverb creates island (Experiment 3a). Adding an adverb doesn't change CP structure but adds manner weight.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Verb-frame frequency in MoS islands (all experiments). Frequency is the proposed mechanism.
Equations
- One or more equations did not get rendered due to their size.
Instances For
All manipulations: the four H&S manipulations + the three MoS manipulations of @cite{lu-degen-2025}.
Equations
- One or more equations did not get rendered due to their size.
Instances For
Processing correctly predicts the observed difference.
Equations
Instances For
Competence correctly predicts the observed (non-)difference.
Equations
Instances For
Discourse correctly predicts the observed (non-)difference.
Equations
Instances For
Processing scores 4/7: correct on all 4 H&S manipulations, but wrong on prosodic focus and say+adverb (predicts no effect; an effect is found) and on frequency (predicts an effect; none is found).
Competence scores 1/7 — only the frequency null result (where it correctly predicts no effect).
Discourse scores 3/7: correct on prosodic focus, say+adverb, and frequency null. Misses the 4 H&S effects which are processing, not discourse.
Processing and discourse are perfectly complementary: for every manipulation, exactly one of the two accounts is correct (XOR). This means they have full coverage (together 7/7) with zero overlap.
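The complementarity claim can be checked mechanically. A minimal sketch, with the seven rows transcribed as (processingPredicts, discoursePredicts, observed) triples from the values reported in this section (the marginal NP-definiteness effect counted as an observed difference):

```lean
def rows : List (Bool × Bool × Bool) :=
  [ (true,  false, true ),   -- filler complexity (CNPC)
    (true,  false, true ),   -- filler complexity (wh-island)
    (true,  false, true ),   -- NP definiteness (CNPC), marginal
    (true,  false, true ),   -- adjunct complexity (wh-island)
    (false, true,  true ),   -- prosodic focus (MoS)
    (false, true,  true ),   -- say + adverb
    (true,  false, false) ]  -- verb-frame frequency (n.s.)

/-- XOR complementarity: on every row, exactly one of processing and
discourse matches the observed outcome. -/
example : rows.all (fun (p, d, o) => (p == o) != (d == o)) = true := by decide

/-- Scores: processing 4/7, discourse 3/7. -/
example : (rows.filter (fun (p, _, o) => p == o)).length = 4 := by decide
example : (rows.filter (fun (_, d, o) => d == o)).length = 3 := by decide
```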
Ordering predictions that Pareto dominance can verify.
Note: which-indef CNPC vs baseline is incomparable (trade-off between
distance costs and retrieval facilitation), so it is not included here.
See which_indef_vs_baseline_incomparable.
Equations
- One or more equations did not get rendered due to their size.
Instances For
All Pareto-orderable predictions verified.
The binary strong/weak classification (constraintStrength in Islands.Data) is challenged by H&S's data.
The CNPC is classified as "strong", yet its acceptability varies by 25 points (60 → 85) under nonstructural manipulation. If "strong" means "consistently blocks the dependency", the CNPC is not consistently strong.
Similarly, wh-islands are classified as "weak" (ameliorated with D-linking), but H&S show that the amelioration tracks processing difficulty specifically, not D-linking per se — the same effect appears with nonreferential adjuncts (Experiment 3).
In both cases, acceptability shifts gradiently under nonstructural manipulation — the pattern is gradient, not categorical.
Deane's construction-based island analysis #
@cite{deane-1991} argues that island constraints are construction-specific GAP restrictions, not universal Subjacency. This means:
- The grammar overgenerates (licenses extractions freely)
- Construction-specific constraints (GAP restrictions) block some extractions
- Remaining gradient acceptability is explained by processing
This is exactly the division of labor the processing comparison reveals: grammar determines structural possibility, processing determines ease.
The F-G typology (Phenomena.FillerGap.Studies.Sag2010) classifies which
constructions are islands. The processing model explains within-island
gradient effects (filler complexity, NP type).
Sag's two island constructions are a proper subset of all F-G types. The non-island types (interrogative, relative, the-clause) freely permit extraction, consistent with the processing account's prediction that apparent island effects in these are gradient, not categorical.
The constructions Sag identifies as islands (topicalization, exclamatives) are not among those H&S test (CNPC, wh-islands, adjuncts). This is significant: H&S test processing-based islands, while Sag identifies grammar-based islands — they explain different cases.
Together they cover both:
- Grammar-based islands: topicalization [GAP ⟨⟩], exclamatives [GAP ⟨⟩]
- Processing-based "islands": CNPC, wh-islands, adjuncts (gradient effects)
Summary: Island Effects Three-Way Comparison #
Nonstructural manipulations of island acceptability #
| Manipulation | Competence | Processing | Discourse | Observed |
|---|---|---|---|---|
| Filler complexity (CNPC) | No effect | Effect ✓ | No effect | p<0.0001 |
| Filler complexity (wh-island) | No effect | Effect ✓ | No effect | p=0.001 |
| NP definiteness (CNPC) | No effect | Effect ✓ | No effect | Marginal |
| Adjunct complexity (wh-island) | No effect | Effect ✓ | No effect | p<0.01 |
| Prosodic focus (MoS) | No effect | No effect | Effect ✓ | p<0.001 |
| Say+adverb island | No effect | No effect | Effect ✓ | p<0.001 |
| Verb-frame frequency (MoS) | No effect ✓ | Effect | No effect ✓ | n.s. |
Score: Processing 4/7, Discourse 3/7, Competence 1/7. Together: 7/7.
Key findings #
Filler complexity paradox: more complex wh-phrases improve island acceptability. Predicted by processing, not by competence or discourse.
Prosodic amelioration: focus on embedded object ameliorates MoS islands. Predicted by discourse, not by competence or processing.
Say+adverb replication: adding manner adverbs to bridge verb say creates new islands. Predicted by discourse alone.
Perfect complementarity: processing (4/7) and discourse (3/7) cover disjoint manipulations. Together they explain all 7 observed patterns.
Theoretical upshot #
Island effects arise from (at least) three distinct mechanisms:
- Grammar: categorical blocking (topicalization, exclamatives)
- Processing: gradient difficulty from memory load (CNPC, wh-islands)
- Discourse: information-structural backgroundedness (MoS — @cite{lu-degen-2025})
Both domains use ProcessingModel.ProcessingProfile with Pareto dominance
for weight-free ordinal comparison.
Connection to the F-G typology #
Sag's F-G typology (Phenomena.FillerGap.Studies.Sag2010) identifies grammar-based islands (topicalization, exclamatives with [GAP ⟨⟩]). H&S cover processing-based islands (CNPC, wh-islands). @cite{lu-degen-2025} covers discourse-based islands (MoS). Together they provide a three-mechanism account.
Manner-of-Speaking Islands #
@cite{lu-degen-2025} introduce a discourse-based account of island effects that complements both competence and processing accounts. MoS islands arise from information-structural backgroundedness, not syntactic configuration or processing cost. This is a third mechanism alongside grammar-based and processing-based islands.
The three sources are now tracked by constraintSource in Islands.Data:
- .syntactic → competence grammar
- .processing → performance/memory
- .discourse → information structure
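A plausible shape for that tag, as a sketch (the actual inductive lives in Islands.Data; only the three constructor names are taken from the mapping above):

```lean
/-- Sketch of the `constraintSource` tag described above. -/
inductive ConstraintSource
  | syntactic   -- competence grammar
  | processing  -- performance / memory
  | discourse   -- information structure
  deriving Repr, BEq
```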
Together, these three mechanisms partition the space of island phenomena:
- Grammar-based: topicalization, exclamatives
- Processing-based: CNPC, wh-islands — gradient with filler complexity
- Discourse-based: MoS complements — gradient with prosodic focus
MoS islands are discourse-sourced, distinct from syntactic/processing islands.