The effects of lexical coverage, discourse structure, and text variables on listening comprehension
Genre
Thesis/Dissertation
Date
2025-05
Department
Teaching & Learning
DOI
https://doi.org/10.34944/wrhy-qy93
Abstract
Lexical knowledge, an integral part of second language (L2) development, concerns the amount of vocabulary needed to understand a text. The consensus in the research literature is that the more lexical items learners know, the better they comprehend reading texts and listening passages. Researchers have attempted to establish lexical thresholds, or lexical coverage levels, for texts, passages, and genres that can inform vocabulary programs for L2 learners. Lexical coverage, the percentage of lexical items in a text that are known to the learner, has been researched in both the reading and listening fields. However, no researchers have evaluated different text types while also manipulating lexical coverage. The purpose of this study is to investigate the differential and interactive effects of lexical coverage, text type (monologues and dialogues), and individual and task variables. In this study, lexical coverage was manipulated by replacing lexical items with pseudowords that mimic English phonological, suprasegmental, and grammatical features. Lexical coverage levels of 98%, 95%, and 90% were investigated across three monologues and three dialogues. The L2 listening texts were presented to Japanese university participants in nine intact classes in a counterbalanced design to control for topic, lexical coverage, and text type. The results of the multiple-choice listening comprehension tests for each task were evaluated using the Many-Facets Rasch Model to estimate topic and item difficulty and instrument performance. The participants’ Rasch person ability estimates for their dictation scores and listening vocabulary levels tests were used as covariates in the main analysis, which investigated listening comprehension performance across text types and lexical coverage levels using generalized linear mixed-effects modelling.
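Lexical coverage, as used above, is the percentage of running words in a text that the listener knows. A minimal sketch of that calculation, with a hypothetical known-word list (the word list, passage, and naive tokenization here are illustrative assumptions, not the study's instruments):

```python
def lexical_coverage(text: str, known_words: set[str]) -> float:
    """Return the percentage of tokens in `text` found in `known_words`."""
    # Naive tokenization: split on whitespace, strip punctuation, lowercase.
    tokens = [t.strip(".,!?;:\"'").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t in known_words)
    return 100.0 * known / len(tokens)

# Hypothetical example: 9 of 10 running words are known, so coverage is 90%.
known = {"the", "cat", "sat", "on", "mat", "near", "a", "red"}
passage = "The cat sat on the mat near a red blivet."
print(lexical_coverage(passage, known))  # 90.0
```

Under this definition, the study's 98%, 95%, and 90% conditions correspond to 2, 5, and 10 unknown words per 100 running words.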
In this study, I examined how varying levels of lexical coverage influence listening comprehension in monologic and dialogic passages. Six authentic monologues and dialogues were adapted to fit the research context, and in the process, 2%, 5%, and 10% of the content words in each passage were replaced with pseudowords. This procedure resulted in 18 audio versions with lexical coverage levels of 98%, 95%, and 90%. After listening to a passage, participants completed 10 multiple-choice comprehension questions and rated their familiarity with the topic. Comprehension scores, topic familiarity ratings, and proficiency measures were incorporated into generalized linear mixed models to assess the effects of lexical coverage, text type, individual listening proficiency, and aural receptive lexical knowledge on listening comprehension.
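The coverage manipulation described above, replacing a fixed share of a passage's words with pseudowords to reach a target coverage level, can be sketched as follows. The pseudowords, the seeded random sampling, and the precomputed content-word positions are illustrative assumptions, not the study's actual replacement protocol:

```python
import random

def reduce_coverage(tokens: list[str], content_positions: list[int],
                    target_coverage: float, pseudowords: list[str],
                    seed: int = 0) -> list[str]:
    """Replace enough content-word tokens with pseudowords so that the
    remaining known tokens make up `target_coverage` percent of the passage."""
    rng = random.Random(seed)  # seeded for reproducibility (an assumption)
    n_replace = round(len(tokens) * (100.0 - target_coverage) / 100.0)
    chosen = rng.sample(content_positions, n_replace)  # without replacement
    out = tokens[:]
    for i, pos in enumerate(sorted(chosen)):
        out[pos] = pseudowords[i % len(pseudowords)]  # cycle through pseudowords
    return out

# Hypothetical 50-word passage: a 90% coverage target replaces 5 tokens.
tokens = [f"w{i}" for i in range(50)]
content_positions = list(range(50))
pseudowords = ["blick", "dax", "wug"]  # illustrative pseudowords only
adapted = reduce_coverage(tokens, content_positions, 90.0, pseudowords)
```

Sampling only from content-word positions reflects the constraint in the passage above that function words were left intact while content words carried the manipulation.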
The most parsimonious generalized linear mixed-effects model, Model 7, contained only significant factors and predictors. The results of this model indicated that the participants were significantly less likely to answer comprehension questions correctly when listening to passages at 90% lexical coverage compared to 95% or 98% coverage. However, there was no significant difference in performance between 95% and 98% coverage. Post-hoc comparisons showed that the impact of lexical coverage was not consistent across all tasks (Dialogue 1, Dialogue 2, Dialogue 3, Monologue 4, Monologue 5, and Monologue 6). Specifically, lexical coverage had no significant effect on comprehension for Dialogue 1, Dialogue 2, and Monologue 6, while it did significantly influence performance for Dialogue 3, Monologue 4, and Monologue 5. While the reasons for the lack of significant effects in some tasks remain unclear, an analysis of Dialogue 2, the task on which the participants had the lowest comprehension performance, suggested that factors such as the pseudoword replacement protocol, syntactic complexity, and topic might have contributed.
The findings also indicated that the text type of the tasks did not significantly affect the participants’ listening comprehension. Although previous research has suggested that dialogues might be easier for listeners due to features such as repetition, confirmation, and negotiation (e.g., Driscoll et al., 2003; Flowerdew & Tauroza, 1995; Fox Tree & Schrock, 1999; Jung, 2003), the literature on task difficulty differences between monologues and dialogues remains inconclusive (e.g., Bavelas et al., 2000; Branigan et al., 2011; Brindley & Slatyer, 2002). This study aligns with other EFL/ESL research in suggesting that task difficulty results from a combination of task, test-item, and individual factors, making it difficult to predict or control. A possible explanation for the lack of significance in this study is that the simplification process removed many authentic features of dialogues, such as pauses, repetitions, and discourse markers, which are often absent in monologues.
Additionally, the results showed that topic familiarity was not a significant predictor of comprehension test performance. This finding contrasts with previous studies (e.g., Giordano, 2021; Kostin, 2004; Nissan et al., 1996). One explanation is that the participants might have confused topic familiarity with task difficulty or comprehension, as they rated topics differently on the background questionnaire compared to after listening. Although neither familiarity rating significantly predicted comprehension scores, the participants appeared to interpret the rating scale inconsistently. This suggests that using a single-item, four-point Likert scale to measure familiarity might lack reliability and methodological soundness.
Finally, Model 7 demonstrated that the participants who performed better on the dictation test and the aural receptive vocabulary test were significantly more likely to achieve higher comprehension scores. These findings highlight the utility of the dictation test in distinguishing participants by proficiency when standardized tests are unavailable. Additionally, the Listening Vocabulary Levels Test, which measured aural receptive lexical knowledge, proved to be a valuable predictor given that it explained unique variance beyond the general listening proficiency captured by the dictation scores.
The value of experimental lexical coverage research is discussed in light of corpus and vocabulary research, along with suggestions for methodological improvements. Finally, practical suggestions are situated within Nation’s (2007) Four Strands framework so that the current study can be of value to ESL/EFL instructors who hope to provide listening materials at varied lexical coverage levels for their students.