
Dependency Parsing with LSTMs: An Empirical Evaluation

Source: arXiv
Abstract

We propose a transition-based dependency parser using Recurrent Neural Networks with Long Short-Term Memory (LSTM) units. This extends the feedforward neural network parser of Chen and Manning (2014) and enables modelling of entire sequences of shift/reduce transition decisions. On the Google Web Treebank, our LSTM parser is competitive with the best feedforward parser on overall accuracy and notably achieves an improvement of more than 3% on long-range dependencies, which have proved difficult for previous transition-based parsers due to error propagation and limited context information. Our findings additionally suggest that dropout regularisation on the embedding layer is crucial for improving the LSTM's generalisation.
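
The core idea can be illustrated with a minimal sketch: an LSTM consumes one feature vector per parser state and predicts the next shift/reduce transition, so each decision conditions on the full transition history rather than a fixed feature window, with dropout applied to the embedding layer as the abstract recommends. This is not the authors' code; the class name `LSTMTransitionParser`, the three-feature state representation, the arc-standard action inventory, and all dimensions are illustrative assumptions, written here in PyTorch.

```python
import torch
import torch.nn as nn

class LSTMTransitionParser(nn.Module):
    """Sketch of an LSTM transition-based parser (illustrative, not the paper's code)."""

    def __init__(self, vocab_size, embed_dim=50, hidden_dim=100,
                 num_actions=3, embed_dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Dropout on the embedding layer, reported as crucial for generalisation.
        self.embed_dropout = nn.Dropout(embed_dropout)
        # One LSTM step per transition decision; the hidden state carries
        # the entire history of shift/reduce decisions.
        self.lstm = nn.LSTM(3 * embed_dim, hidden_dim, batch_first=True)
        # Assumed arc-standard actions: SHIFT, LEFT-ARC, RIGHT-ARC.
        self.out = nn.Linear(hidden_dim, num_actions)

    def forward(self, state_feats):
        # state_feats: (batch, num_transitions, 3) word ids, e.g. the top
        # two stack items and the buffer front at each parser state.
        emb = self.embed_dropout(self.embed(state_feats))  # (B, T, 3, E)
        emb = emb.flatten(2)                               # (B, T, 3*E)
        hidden, _ = self.lstm(emb)                         # (B, T, H)
        return self.out(hidden)                            # (B, T, A) action scores

# Toy usage: score actions for a batch of two 8-step transition sequences.
model = LSTMTransitionParser(vocab_size=1000)
feats = torch.randint(0, 1000, (2, 8, 3))
print(model(feats).shape)  # torch.Size([2, 8, 3])
```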

Kevin Duh, Adhiguna Kuncoro, Yuji Matsumoto, Yuichiro Sawai

Computing Technology, Computer Technology

Kevin Duh, Adhiguna Kuncoro, Yuji Matsumoto, Yuichiro Sawai. Dependency Parsing with LSTMs: An Empirical Evaluation [EB/OL]. (2016-04-21) [2025-06-03]. https://arxiv.org/abs/1604.06529.
