Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis with Large Language Models
Aspect-based sentiment analysis (ABSA), a sequence labeling task, has attracted increasing attention in multilingual contexts. While previous research has focused largely on fine-tuning or training models specifically for ABSA, we evaluate large language models (LLMs) under zero-shot conditions to explore their potential to tackle this challenge with minimal task-specific adaptation. We conduct a comprehensive empirical evaluation of a series of LLMs on multilingual ABSA tasks, investigating various prompting strategies, including vanilla zero-shot, chain-of-thought (CoT), self-improvement, self-debate, and self-consistency, across nine different models. Results indicate that while LLMs show promise in handling multilingual ABSA, they generally fall short of fine-tuned, task-specific models. Notably, simpler zero-shot prompts often outperform more complex strategies, especially in high-resource languages like English. These findings underscore the need for further refinement of LLM-based approaches to effectively address the ABSA task across diverse languages.
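To make the vanilla zero-shot setting concrete, the sketch below shows one way such a prompt could look: the model receives only a task instruction and the input sentence, with no examples or reasoning scaffold. This is an illustration, not the paper's prompt; the model name, prompt wording, and output format are assumptions, and an OpenAI-compatible chat API is assumed.

```python
# Minimal sketch of vanilla zero-shot ABSA prompting (illustrative only;
# not the paper's actual prompt or model).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def zero_shot_absa(sentence: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for (aspect, polarity) pairs from a single sentence."""
    prompt = (
        "Extract all aspect terms and their sentiment polarity "
        "(positive, negative, or neutral) from the sentence below. "
        "Answer as a list of (aspect, polarity) pairs.\n\n"
        f"Sentence: {sentence}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding for evaluation
    )
    return response.choices[0].message.content

print(zero_shot_absa("The battery life is great, but the screen is dim."))
```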
Yanqing He, Yun Xue, Bolei Ma, Zheyu Zhang, Ningyuan Deng, Chengyan Wu
Computing Technology, Computer Technology; Linguistics
Yanqing He, Yun Xue, Bolei Ma, Zheyu Zhang, Ningyuan Deng, Chengyan Wu. Evaluating Zero-Shot Multilingual Aspect-Based Sentiment Analysis with Large Language Models [EB/OL]. (2025-06-09) [2025-07-21]. https://arxiv.org/abs/2412.12564.