
Layered Insights: Generalizable Analysis of Authorial Style by Leveraging All Transformer Layers

Source: arXiv
Abstract

We propose a new approach to the authorship attribution task that leverages the different linguistic representations learned at different layers of pre-trained transformer-based models. We evaluate our approach on three datasets, comparing it to a state-of-the-art baseline in both in-domain and out-of-domain scenarios. We find that utilizing multiple transformer layers improves the robustness of authorship attribution models on out-of-domain data, yielding new state-of-the-art results. Our analysis provides further insight into how the model's different layers specialize in representing certain stylistic features, which benefits the model when tested out of domain.
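
The following is a minimal sketch (not the authors' released code) of the general idea the abstract describes: extracting a style representation from every layer of a pre-trained transformer rather than only the last one. The model name, mean pooling, and the suggestion of a downstream classifier are illustrative assumptions, not details taken from the paper.

# Sketch: one pooled embedding per transformer layer, assuming a HuggingFace-style encoder.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: any pre-trained encoder could be substituted

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def per_layer_embeddings(text: str) -> torch.Tensor:
    """Return one mean-pooled embedding per layer (embedding layer + each transformer block)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (1, seq_len, hidden) tensors, one per layer
    mask = inputs["attention_mask"].unsqueeze(-1).float()  # (1, seq_len, 1)
    pooled = [
        (h * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling over tokens
        for h in outputs.hidden_states
    ]
    return torch.cat(pooled, dim=0)  # shape: (num_layers + 1, hidden_size)

# Example: layer-wise representations of a candidate text, which an attribution
# classifier could consume jointly instead of relying on the final layer alone.
embs = per_layer_embeddings("The style of this sentence is what we care about.")
print(embs.shape)  # e.g. torch.Size([13, 768]) for a 12-layer base model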

Vishal Anand, Smaranda Muresan, Kathleen McKeown, Milad Alshomary, Nikhil Reddy Varimalla

Subject: Computing Technology; Computer Technology

Vishal Anand, Smaranda Muresan, Kathleen McKeown, Milad Alshomary, Nikhil Reddy Varimalla. Layered Insights: Generalizable Analysis of Authorial Style by Leveraging All Transformer Layers [EB/OL]. (2025-07-03) [2025-07-18]. https://arxiv.org/abs/2503.00958
