Losing our Tail -- Again: On (Un)Natural Selection And Multilingual Large Language Models
Multilingual Large Language Models (LLMs) have considerably changed how technologies can influence language. While previous technologies could mediate or assist humans, there is now a tendency to offload the task of writing itself to these technologies, enabling them to change our linguistic ecosystem more directly. While they provide us with quick access to information and impressively fluent output, beneath their apparent sophistication lies a subtle, more insidious threat: the gradual decline and loss of linguistic diversity. In this opinion piece, I explore how model collapse, with a particular focus on translation technology, can lead to the loss of linguistic forms, grammatical features, and cultural nuance. Model collapse refers to the eventual consequence of self-consuming training loops, where models reinforce their own biases and lose linguistic diversity. Drawing on recent work in Computer Vision, Natural Language Processing (NLP) and Machine Translation (MT), I argue that the tails of our linguistic distributions are vanishing, and with them, the narratives and identities they carry. This is a call to resist linguistic flattening and to reimagine NLP as a field that encourages, values and protects expressive multilingual lexical and linguistic diversity and creativity.
Eva Vanmassenhove
Linguistics
Eva Vanmassenhove. Losing our Tail -- Again: On (Un)Natural Selection And Multilingual Large Language Models [EB/OL]. (2025-07-09) [2025-07-16]. https://arxiv.org/abs/2507.03933