Are Large Language Models Good In-context Learners for Financial Sentiment Analysis?
Recently, large language models (LLMs) with hundreds of billions of parameters have demonstrated emergent abilities, surpassing traditional methods in various domains even without fine-tuning on domain-specific data. However, when it comes to financial sentiment analysis (FSA), a fundamental task in financial AI, these models often encounter challenges such as complex financial terminology, subjective human emotions, and ambiguous expressions of inclination. In this paper, we aim to answer a fundamental question: are LLMs good in-context learners for FSA? Answering this question yields informative insights into whether LLMs can learn to address these challenges by generalizing from in-context demonstrations of financial document-sentiment pairs to the sentiment analysis of new documents, given that fine-tuning these models on finance-specific data is difficult, if not impossible. To the best of our knowledge, this is the first paper exploring in-context learning for FSA that covers most modern LLMs (including the recently released DeepSeek V3) and multiple in-context sample selection methods. Comprehensive experiments validate the in-context learning capability of LLMs for FSA.
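The abstract describes in-context learning as prompting an LLM with demonstrations of financial document-sentiment pairs followed by a new document to classify. A minimal sketch of how such a few-shot prompt could be assembled is shown below; the demonstration texts, labels, and function name are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of few-shot prompt construction for financial
# sentiment analysis (FSA). All example texts and labels are invented.

def build_icl_prompt(demos, query):
    """Format (document, sentiment) demonstration pairs plus a query
    document into a single few-shot prompt string for an LLM."""
    lines = ["Classify the sentiment of each financial text as "
             "positive, negative, or neutral.\n"]
    for text, label in demos:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    # The query ends with an open "Sentiment:" slot for the model to fill.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("Q3 revenue beat estimates and guidance was raised.", "positive"),
    ("The firm warned of shrinking margins amid rising costs.", "negative"),
]
prompt = build_icl_prompt(demos, "Shares were flat after the announcement.")
print(prompt)
```

In-context sample selection methods, as studied in the paper, would determine which document-sentiment pairs populate `demos` for each query.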
Xinyu Wei, Luojia Liu
Public finance; finance; information industry economics
Xinyu Wei, Luojia Liu. Are Large Language Models Good In-context Learners for Financial Sentiment Analysis? [EB/OL]. (2025-03-06) [2025-05-17]. https://arxiv.org/abs/2503.04873.