From Lemmas to Dependencies: What Signals Drive Light Verbs Classification?


Light verb constructions (LVCs) are a challenging class of verbal multiword expressions, especially in Turkish, where rich morphology and productive complex predicates create minimal contrasts between idiomatic predicate meanings and literal verb–argument uses. This paper asks what signals drive LVC classification by systematically restricting model inputs. Using UD-derived supervision, we compare lemma-driven baselines (lemma TF–IDF + Logistic Regression; BERTurk trained on lemma sequences), a grammar-only Logistic Regression over UD morphosyntax (UPOS/DEPREL/MORPH), and a full-input BERTurk baseline. We evaluate on a controlled diagnostic set with RANDOM negatives, lexical controls (NLVC), and LVC positives, reporting split-wise performance to expose decision-boundary behavior. Results show that coarse morphosyntax alone is insufficient for robust LVC detection under controlled contrasts, while lexical identity supports LVC judgments but is sensitive to calibration and normalization choices. Overall, our findings motivate targeted evaluation of Turkish MWEs and show that "lemma-only" is not a single, well-defined representation, but one that depends critically on how normalization is operationalized.


💡 Research Summary

The paper investigates which linguistic cues are essential for detecting Light Verb Constructions (LVCs) in Turkish, a language where rich morphology and productive complex predicates create minimal contrasts between idiomatic and literal verb‑argument uses. Using Universal Dependencies (UD) treebanks as a source of weak supervision, the authors derive binary sentence‑level labels indicating the presence or absence of an LVC. They then systematically restrict the information available to four model families: (1) a lemma‑only baseline that converts each sentence into a TF‑IDF vector of lemmas and trains a Logistic Regression classifier; (2) a BERTurk model fine‑tuned on sequences of lemmas, preserving contextual information but no surface forms; (3) a grammar‑only baseline that discards lexical identity and represents each sentence as a bag‑of‑features over UD UPOS tags, dependency relations (DEPREL), and morphological features (MORPH), again with Logistic Regression; and (4) a full‑input BERTurk model that receives the original tokenized text (including morphology and context).
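The lemma-only baseline described above can be sketched in a few lines of scikit-learn. The sentences, lemma sequences, and labels below are illustrative placeholders (not the paper's data), and we assume each sentence arrives as a space-joined sequence of lemmas:

```python
# Minimal sketch of the lemma-only TF-IDF + Logistic Regression baseline.
# Training examples here are toy stand-ins: 1 = sentence contains an LVC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

lemma_sentences = [
    "karar ver",   # "make a decision" (LVC)
    "para ver",    # "give money" (literal use of the same light verb)
    "kitap oku",   # "read a book" (no light verb)
    "söz et",      # "mention" (LVC)
]
labels = [1, 0, 0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(lemma_sentences, labels)
print(clf.predict(["yardım et"]))  # classify an unseen lemma sequence
```

The grammar-only baseline follows the same pipeline shape, except the input strings would be bags of UPOS/DEPREL/MORPH feature tokens rather than lemmas.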

To evaluate these models under controlled conditions, the authors construct a diagnostic set of 147 sentences, balanced across three conditions (49 each): (i) RANDOM negatives – sentences without any LVC and without the target light‑verb lemmas; (ii) NLVC lexical controls – sentences that reuse the same light‑verb lemmas as the positives but are constructed to convey a literal verb‑argument meaning; and (iii) LVC positives – sentences that contain an idiomatic light‑verb construction. All items were manually authored and validated by three annotators for naturalness, plausibility, and label agreement, ensuring minimal confounds.

Results reveal a clear hierarchy of signal usefulness. The grammar‑only Logistic Regression model performs well on RANDOM negatives (high accuracy) but fails to distinguish NLVC from LVC, achieving a low F1 (~0.42) on the crucial contrast. This demonstrates that coarse morphosyntactic cues (UPOS/DEPREL/MORPH) are insufficient for capturing the semantic shift that defines an LVC. The lemma‑only TF‑IDF baseline improves discrimination (F1 ≈ 0.71) but suffers from a high false‑negative rate on true LVCs, indicating that lexical identity alone can be overly rigid. The lemma‑sequence BERTurk model further benefits from contextual encoding, reaching an F1 of about 0.78; however, its performance is sensitive to how lemmas are normalized (e.g., stemming vs. canonical lemmatization), with variations of up to 5 percentage points. The full‑input BERTurk, which combines lexical, morphological, and contextual information, achieves the best overall performance (F1 ≈ 0.84, accuracy ≈ 89%).
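The split-wise reporting that exposes this hierarchy can be sketched as grouping predictions by diagnostic condition rather than computing one aggregate score. The gold labels and predictions below are toy placeholders, not the paper's outputs:

```python
# Sketch of split-wise evaluation over the three diagnostic conditions.
# Each item is a (condition, gold, predicted) triple -- illustrative only.
from collections import defaultdict

items = [
    ("RANDOM", 0, 0), ("RANDOM", 0, 0), ("RANDOM", 0, 1),
    ("NLVC",   0, 1), ("NLVC",   0, 0), ("NLVC",   0, 1),
    ("LVC",    1, 1), ("LVC",    1, 0), ("LVC",    1, 1),
]

by_cond = defaultdict(list)
for cond, gold, pred in items:
    by_cond[cond].append(gold == pred)

# Per-condition accuracy makes failures on the NLVC/LVC contrast visible
# even when RANDOM negatives are classified almost perfectly.
for cond, hits in by_cond.items():
    print(f"{cond}: accuracy = {sum(hits) / len(hits):.2f}")
```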

The authors interpret these findings as evidence that (1) morphosyntactic information alone cannot reliably identify LVCs in Turkish; (2) lexical cues are decisive but must be represented carefully, as “lemma‑only” is not a monolithic representation but a family of variants whose preprocessing choices materially affect model behavior; and (3) robust evaluation of MWEs requires controlled diagnostic suites rather than aggregate accuracy on heterogeneous corpora. They advocate for explicit reporting of lemma normalization pipelines, for the release of their diagnostic dataset and code, and for future work to explore hybrid models that can dynamically integrate lexical and syntactic signals. The study thus contributes both methodological insights into probing linguistic knowledge in neural models and practical resources for Turkish MWE research.
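The point that "lemma-only" is a family of variants can be illustrated with two crude normalizers that disagree on the same surface form. Both functions below are toy stand-ins, not real Turkish morphological analyzers or the paper's tooling:

```python
# Two toy "lemma-only" normalizers that disagree on the same token,
# illustrating why preprocessing choices change the feature space.

def crude_stem(token: str) -> str:
    """Strip a few common Turkish inflectional endings (deliberately naive)."""
    for suffix in ("lerini", "larını", "ler", "lar", "di", "dı"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def dict_lemma(token: str, lexicon: dict) -> str:
    """Look up a canonical lemma; fall back to the surface form."""
    return lexicon.get(token, token)

# Tiny hypothetical lexicon for the dictionary-based variant.
lexicon = {"etti": "et", "kararlar": "karar"}

tokens = ["kararlar", "etti"]
print([crude_stem(t) for t in tokens])           # ['karar', 'etti']
print([dict_lemma(t, lexicon) for t in tokens])  # ['karar', 'et']
```

The two pipelines agree on "kararlar" but diverge on "etti" (the stemmer leaves it intact, the lexicon maps it to the light verb "et"), so a TF-IDF model trained on one representation sees a different vocabulary than one trained on the other.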

