
The Parameterized Complexity of Learning Monadic Second-Order Logic

Source: arXiv

Abstract

Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning an MSO-definable concept from a training sequence of labeled examples is fixed-parameter tractable on graphs of bounded clique-width, and that it is hard for the parameterized complexity class para-NP on general graphs. It turns out that an important distinction to be made is between 1-dimensional and higher-dimensional concepts, where the instances of a k-dimensional concept are k-tuples of vertices of a graph. For the higher-dimensional case, we give a learning algorithm that is fixed-parameter tractable in the size of the graph, but not in the size of the training sequence, and we give a hardness result showing that this is optimal. By comparison, in the 1-dimensional case, we obtain an algorithm that is fixed-parameter tractable in both.
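To make the setup concrete, the following is a minimal illustrative sketch (not taken from the paper): a 2-dimensional concept over a small graph, where instances are pairs of vertices and the hypothetical target concept is the edge relation E(u, v), which is definable in MSO (indeed already in first-order logic). The names `target_concept`, `training`, and `consistent` are assumptions introduced only for this illustration.

```python
from itertools import product

# A small undirected graph on vertices 0..3, given as an edge set.
# This graph and all names below are illustrative assumptions, not the paper's notation.
edges = {(0, 1), (1, 2), (2, 3)}

def target_concept(u, v):
    """Hypothetical 2-dimensional target: the edge relation E(u, v)."""
    return (u, v) in edges or (v, u) in edges

# A training sequence of labeled examples: ((u, v), label) pairs,
# one label per 2-tuple of vertices.
vertices = range(4)
training = [((u, v), target_concept(u, v)) for u, v in product(vertices, repeat=2)]

def consistent(hypothesis, training):
    """Check whether a hypothesis agrees with every labeled example."""
    return all(hypothesis(*x) == label for x, label in training)
```

The learning problem studied in the paper asks for an MSO formula whose induced concept is consistent with (or has small error on) such a training sequence; the sketch above only fixes the objects involved (graph, k-tuples, labels), not the learning algorithm itself.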

Steffen van Bergerem, Nina Runde, Martin Grohe

Subject: Computing Technology, Computer Technology

Steffen van Bergerem, Nina Runde, Martin Grohe. The Parameterized Complexity of Learning Monadic Second-Order Logic [EB/OL]. (2023-09-19) [2025-08-02]. https://arxiv.org/abs/2309.10489.
