Documentation

Linglib.Phenomena.TenseAspect.Studies.Koev2017

@cite{koev-2017} Empirical Data #

Theory-neutral data from @cite{koev-2017} (pp. 1–38). The Bulgarian evidential (the -l participle in evidential contexts) is felicitous when the speaker's evidence acquisition is spatiotemporally distant from the described event: either temporally non-overlapping (standard indirect evidence) or spatially distant (same time, different place). Direct witness (same time, same place) is infelicitous.

The paper also demonstrates that the evidential contribution is not at issue (projects past negation and modals) and that the speaker is fully committed to the proposition (non-modal analysis, contra @cite{izvorski-1997}).

Whether the described event and the learning event overlap in time.


      Whether the described event and the learning event share a location.


          An evidential felicity datum from @cite{koev-2017}. Each records the spatiotemporal configuration of the described event and the learning event, and whether the evidential is felicitous.

          • temporal : TemporalOverlap

            Temporal overlap between described and learning events

          • spatial : SpatialRelation

            Spatial relation between described and learning events

          • evidentialFelicitous : Bool

            Whether the Bulgarian evidential is felicitous in this configuration

          • exampleNum : String

            Example number in @cite{koev-2017}
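Since the rendered declarations are elided on this page, here is a minimal Lean sketch of the shape they plausibly take. The field names follow the field list above; the enum constructor names (`overlapping`, `samePlace`, etc.) are assumptions, not the library's actual definitions:

```lean
/-- Whether the described event and the learning event overlap in time. -/
inductive TemporalOverlap where
  | overlapping
  | nonOverlapping

/-- Whether the described event and the learning event share a location. -/
inductive SpatialRelation where
  | samePlace
  | differentPlace

/-- An evidential felicity datum from @cite{koev-2017}: the spatiotemporal
    configuration of the described and learning events, and whether the
    evidential is felicitous there. -/
structure EvidentialDatum where
  temporal : TemporalOverlap
  spatial : SpatialRelation
  evidentialFelicitous : Bool
  exampleNum : String
```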


              (3)/(25a): Standard indirect evidence — speaker was not present when the event occurred. Non-overlapping in time, same place. Felicitous.


                Direct witness — speaker perceived the event as it happened. Overlapping in time, same place. Infelicitous.


                  (25b): Smoke from chimney — speaker perceives evidence of the event from a different location, at the same time. Overlapping in time, different place. Felicitous — spatial distance suffices.


                    The evidential does not weaken commitment: "EV(p) and I know because I was there" is not contradictory (unlike a modal which would predict contradiction). @cite{koev-2017}, §3.


                      The evidential contribution projects past negation: "It is not the case that Ivan EV-came" presupposes indirect evidence while negating the proposition. @cite{koev-2017}, §5.


                        △ predicts felicity: the evidential is felicitous iff the described event and the learning event are spatiotemporally distant (temporally non-overlapping or spatially distant).
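The △ prediction can be sketched as a small decidable function; the constructor names here are assumptions carried over from the configuration labels above, not the library's rendered definition:

```lean
inductive TemporalOverlap | overlapping | nonOverlapping
inductive SpatialRelation | samePlace | differentPlace

/-- Hypothetical sketch: felicitous iff temporally non-overlapping
    OR spatially distant (the △ condition). -/
def predictsFelicity : TemporalOverlap → SpatialRelation → Bool
  | .nonOverlapping, _ => true
  | _, .differentPlace => true
  | _, _ => false
```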


                          All core data points.


                            △ correctly predicts felicity for all core data points.

                             Bridge theorems connecting @cite{koev-2017}'s spatiotemporal distance analysis to existing linglib infrastructure, organized around the paper's four properties (property 6), plus structural bridges.

                            Central Claim: Learning Events #

                            The paper's deepest contribution is ontological: evidentials introduce a learning event e_l — the event through which the speaker acquired the reported information. The formal representation (74b):

                             ∃e_l [learn_{cs(k)}(e_l, sp(k), p) ∧ τ(e_l) ≤ time(k) ∧ e △ e_l]

                            The learn predicate is subscripted with cs(k) (context set), not with p (scope proposition). This is the formal mechanism for not-at-issue status: the evidential restricts the context set directly (≈ presupposition), while the assertion commits the speaker to p via DECL (72).

                            A learning scenario: the evidential introduces a learning event e_l — the event through which the speaker acquired the reported information — paired with the described event e.

                             The paper's representation (74b): ∃e_l [learn_{cs(k)}(e_l, sp(k), p) ∧ τ(e_l) ≤ time(k) ∧ e △ e_l]

                            The cs(k) Subscript #

                            The learn predicate is subscripted with cs(k) (the context set at discourse move k), not with the scope proposition p. This is the formal mechanism for not-at-issue status: the evidential contribution restricts the context set directly (≈ presupposition in PrProp.presup), while the assertion commits the speaker to p via DECL (72), which maps to PrProp.assertion.

                            The mapping is:

                             • learn_{cs(k)}(e_l, sp(k), p) → PrProp.presup (restricts cs)
                             • DECL (72): dc^sp(c) ⊆ p → PrProp.assertion (commits to p)

                            This explains why the evidential projects past negation (property 6iv): PrProp.neg preserves presup while negating assertion.
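The projection behavior can be made concrete in a short sketch. The field names `presup`/`assertion` follow the mapping above, but the exact shape of linglib's `PrProp` is an assumption:

```lean
/-- Sketch (assumed shape of linglib's PrProp): a proposition split into a
    not-at-issue presupposition and an at-issue assertion over worlds W. -/
structure PrProp (W : Type) where
  presup : W → Prop      -- cs(k) restriction: the evidential's △ contribution
  assertion : W → Prop   -- DECL commitment to the scope proposition p

/-- Negation targets only the assertion, so presup (△) projects past ¬. -/
def PrProp.neg {W : Type} (φ : PrProp W) : PrProp W :=
  { presup := φ.presup, assertion := fun w => ¬ φ.assertion w }
```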

                            What's Captured #

                            What's Not Captured (Future Work) #

                            • The learn predicate itself: We don't model the knowledge-change semantics of learn(e_l, sp(k), p). This would require time-indexed epistemic states: K_sp(p, t) ∧ ¬K_sp(p, t') for t' < τ(e_l).
                            • Propositional content p: The structure pairs events but doesn't carry the proposition learned. Adding p : W → Bool would require a world type parameter constraining downstream usage.
                            • Speech time constraint: τ(e_l) ≤ time(k) ensures the learning event is past. This interacts with tense morphology (the L-participle is morphologically past) but is not modeled here.
                            • Evidence source typology: The paper distinguishes reportative, inferential, and assumptive evidence (§5) via different learn predicates. We collapse these into a single △ constraint.
                            • described : Semantics.Events.Ev Time

                              The described event (what happened: e.g., Ivan kissing Maria)

                            • learning : Semantics.Events.Ev Time

                              The learning event (how the speaker found out: e.g., hearing a report)


                              △ holds for this scenario (temporal component): the described and learning events have non-overlapping temporal traces.


                                △ holds for this scenario (full spatiotemporal version): temporal disjointness OR spatial distance.


                                  Computable temporal △ for ℤ events: ¬(τ(e) overlaps τ(e_l)). Since integer comparison is decidable, we can evaluate △ from the event structure directly.
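A sketch of such a computable check, assuming events carry an integer interval trace `[start, finish]`; the strict-inequality endpoint convention is an assumption, not necessarily the library's:

```lean
/-- Sketch: an event whose temporal trace is an integer interval [start, finish]. -/
structure Ev where
  start : Int
  finish : Int

/-- Computable temporal △: the traces do not overlap. Decidable because
    integer comparison is decidable. -/
def triangleTemporalB (e el : Ev) : Bool :=
  decide (e.finish < el.start) || decide (el.finish < e.start)
```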


                                     `triangleTemporalB` agrees with the propositional `isTemporallyDisjoint`: the Bool computation and the Prop predicate coincide for ℤ events.

                                    Construct a PrProp from a learning scenario, making the cs(k) → presup mapping constructive.

                                    The presupposition is derived from the event structure (△ holds or not), and the assertion is the scope proposition p (committed via DECL). This is the concrete realization of Koev's (74b):

                                    • presup := △(described, learning) — the evidential's cs(k) contribution
                                    • assertion := p — the scope proposition
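A sketch of this construction under an assumed `PrProp` shape; here the △ condition is passed in as a proposition rather than recomputed from the event structure:

```lean
structure PrProp (W : Type) where
  presup : W → Prop
  assertion : W → Prop

/-- Hypothetical sketch of the construction: presup is the △ condition,
    assertion is the scope proposition p (Koev's 74b plus DECL 72). -/
def toEvidentialProp {W : Type} (triangleHolds : Prop) (p : W → Prop) : PrProp W :=
  { presup := fun _ => triangleHolds,  -- the evidential's cs(k) contribution
    assertion := p }                   -- the speaker's commitment via DECL
```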

                                      Described event: interval [0, 5].


                                        Learning event (indirect): interval [10, 15] — strictly later.


                                          Learning event (direct witness): interval [2, 4] — overlaps described.


                                            Learning event (spatial distance): interval [0, 5] — same time, different place (smoke from chimney).


                                              Indirect evidence scenario: described event [0,5], learning event [10,15].


                                                Smoke-from-chimney scenario: described event [0,5], learning event [0,5] at a different location.


                                                  Indirect evidence: described and learning events are temporally disjoint — described event [0,5] finished before learning event [10,15] started. △ satisfied via temporal disjointness.

                                                  Direct witness: described event [0,5] and learning event [2,4] overlap. They are NOT temporally disjoint — △ fails (when also co-located).

                                                  Despite temporal overlap, any location function assigning different locations to the described and learning events yields △. This captures the smoke-from-chimney scenario (§4): spatial distance suffices.
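The three scenarios above can be checked against a sketched △ that disjoins temporal disjointness with spatial distance. The `sameLocation` flag and the interval-endpoint convention are illustrative assumptions:

```lean
structure Ev where
  start : Int
  finish : Int

/-- △ sketch: temporal disjointness OR spatial distance. -/
def triangle (e el : Ev) (sameLocation : Bool) : Bool :=
  decide (e.finish < el.start) || decide (el.finish < e.start) || !sameLocation

#eval triangle ⟨0, 5⟩ ⟨10, 15⟩ true   -- indirect evidence: true (5 < 10)
#eval triangle ⟨0, 5⟩ ⟨2, 4⟩   true   -- direct witness: false (overlap, same place)
#eval triangle ⟨0, 5⟩ ⟨0, 5⟩   false  -- smoke scenario: true (spatial distance suffices)
```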

                                                   The paper separates two constraints in (74b):

                                                   • e △ e_l : spatiotemporal distance (the evidential's contribution)
                                                   • τ(e) < τ(e_l) : temporal ordering (the past tense's contribution)

                                                  These are independent: △ can hold via spatial distance alone (smoke
                                                  scenario has △ without temporal ordering), and temporal ordering is
                                                  imposed by tense morphology, not the evidential. 
                                                  

                                                  Temporal ordering: the described event PRECEDES the learning event. This is the past tense's contribution, NOT the evidential's. Paper (74b): τ(e) < τ(e_l).

                                                  The smoke scenario has NO temporal ordering (events are simultaneous), yet △ holds via spatial distance. This demonstrates that △ and temporal ordering are independent constraints.

                                                  All four properties (property 6) follow from toEvidentialProp:

                                                   • (i) Spatiotemporal meaning: presup = △(described, learning), derived from event structure via `triangleTemporalB`
                                                   • (ii) Speaker commitment: assertion = p (non-modal, full commitment)
                                                   • (iii) Not at issue: △ is in presup (cs restriction), not assertion
                                                   • (iv) Projection: PrProp.neg preserves presup → △ projects past ¬
                                                  

                                                  Property (6i): the presupposition of the constructed PrProp IS the △ condition, derived from the event structure. When △ holds (indirect evidence), the presupposition is satisfied at every world.

                                                  When △ fails (direct witness), the presupposition fails — the evidential sentence is undefined (infelicitous).


                                                    Property (6ii): the assertion of a scenario's PrProp IS the scope proposition. The speaker commits to p, not to a modalized version. This holds by construction: DECL (72) maps to PrProp.assertion.

                                                    A modal evidential would assert □_e(p) — "p must be true given evidence e" — a DIFFERENT proposition from p.

                                                    This is a simplified stub; the full Kratzer-grounded version is `Izvorski1997.Bridge.izvorskiEv`, which uses `necessity f g p` as the assertion and `!(accessibleWorlds f w).isEmpty` as the presup.


                                                      The modal analysis CAN weaken the assertion: there exist instantiations where the modal's assertion diverges from the scope proposition, while Koev's assertion is always p by construction.

                                                      Property (6iv): the evidential presupposition projects past negation. Negating the evidential negates the assertion (p → ¬p) but preserves the presupposition (△). This follows from PrProp's general negation rule and captures the paper's formalization (78).

                                                      For the indirect evidence case, temporal disjointness + ordering gives isBefore: τ(e).finish ≤ τ(e_l).start.

                                                      Construct Cumming's EvidentialFrame from the learning scenario: T = τ(e).finish, A = τ(e_l).start. This bridges Koev's event-based analysis to Cumming's point-based (S, A, T) frame.
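A sketch of the bridge, assuming integer interval traces. The frame field names `T` and `A` follow the text above; everything else is illustrative:

```lean
structure Ev where
  start : Int
  finish : Int

/-- Sketch of a Cumming-style frame: T = described-event time,
    A = acquisition (learning) time. -/
structure EvidentialFrame where
  T : Int
  A : Int

/-- Hypothetical bridge: T := τ(e).finish, A := τ(e_l).start. -/
def toFrame (described learning : Ev) : EvidentialFrame :=
  { T := described.finish, A := learning.start }

/-- Downstream evidence: T ≤ A. -/
def downstream (f : EvidentialFrame) : Bool := decide (f.T ≤ f.A)

#eval downstream (toFrame ⟨0, 5⟩ ⟨10, 15⟩)  -- indirect scenario: true (5 ≤ 10)
```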


                                                        Cumming's downstream evidence (T ≤ A) holds for the indirect frame — the temporal special case of Koev's △.

                                                        The existing Bulgarian nfutL entry has EP = downstream (T ≤ A), which is the temporal special case of Koev's △: when spatial distance is not at play, △ reduces to temporal disjointness, and temporal disjointness + described-before-learning gives T ≤ A.