Evaluating Language Models Using Linguistic Variations in Multilingual Learners’ Writing: A Teacher Study

International Society of the Learning Sciences (2025)

Authors: Kaycie Barron, Nora Tseng, Shamya Karumbaiah, Cynthia T. Baeza

Abstract: This paper investigates teachers' perceptions of linguistic variations in bi/multilingual learners' (MLs) writing to evaluate the (in)effectiveness of Multilingual Large Language Models (MLLMs), artificial intelligence (AI) models that generate text in multiple languages. Due to their inherent linguistic biases, these models often struggle to interpret MLs' linguistic variations. To address this gap, we elicit teacher feedback on prevalent linguistic variations in MLs' writing and assess how Meta Llama 3.1, a state-of-the-art MLLM, responds to these variations. Using translanguaging as a lens—the fluid use of multiple languages to convey meaning across social contexts—we propose a new approach to evaluating MLLMs in multilingual learning contexts. With the increasing prevalence of AI in K-12 classrooms, this paper advocates for the inclusion of bi/multilingual educators to better align the use of AI with progressive pedagogies such as translanguaging.