Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments
Small and mid-sized generative language models have gained increasing attention. Their size and availability make them amenable to being analyzed at a behavioral as well as a representational level, allowing investigations of how these levels interact. We evaluate 32 publicly available language models for their representational and behavioral alignment with human similarity judgments on a word triplet task. This provides a novel evaluation setting to probe semantic associations in language beyond common pairwise comparisons. We find that (1) even the representations of small language models can achieve human-level alignment, (2) instruction-tuned model variants can exhibit substantially increased agreement, (3) the pattern of alignment across layers is highly model dependent, and (4) alignment based on models' behavioral responses is highly dependent on model size, matching their representational alignment only for the largest evaluated models.
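The word triplet task evaluated here can be sketched as an odd-one-out judgment over embeddings: the model's "choice" is the term excluded from the most similar pair. Below is a minimal illustration with made-up 3-dimensional vectors and cosine similarity; the paper's actual models, embeddings, and similarity measures may differ.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def odd_one_out(emb, triplet):
    """Return the index (0-2) of the odd term: the one excluded
    from the most similar pair in the triplet."""
    a, b, c = (emb[t] for t in triplet)
    # key = index of the term NOT in the compared pair
    sims = {2: cosine(a, b), 1: cosine(a, c), 0: cosine(b, c)}
    return max(sims, key=sims.get)

# Hypothetical toy embeddings: "cat" and "rat" cluster as animals,
# "meow" lies closer to a sound-related direction.
emb = {
    "cat":  np.array([0.9, 0.1, 0.2]),
    "rat":  np.array([0.8, 0.2, 0.1]),
    "meow": np.array([0.1, 0.9, 0.8]),
}
choice = odd_one_out(emb, ("cat", "rat", "meow"))  # index of the odd term
```

Representational alignment can then be scored as the fraction of triplets where this embedding-based choice matches the human odd-one-out judgment.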
Lorenz Linhardt, Tom Neuhäuser, Lenka Tětková, Oliver Eberle
Subject: Computing Technology, Computer Science
Lorenz Linhardt, Tom Neuhäuser, Lenka Tětková, Oliver Eberle. Cat, Rat, Meow: On the Alignment of Language Model and Human Term-Similarity Judgments [EB/OL]. (2025-04-10) [2025-05-02]. https://arxiv.org/abs/2504.07965.