
Sparse Probabilistic Graph Circuits

Source: arXiv

Abstract

Deep generative models (DGMs) for graphs achieve impressively high expressive power thanks to very efficient and scalable neural networks. However, these networks contain non-linearities that prevent analytical computation of many standard probabilistic inference queries, i.e., these DGMs are considered \emph{intractable}. While recently proposed Probabilistic Graph Circuits (PGCs) address this issue by enabling \emph{tractable} probabilistic inference, they operate on dense graph representations with $\mathcal{O}(n^2)$ complexity for graphs with $n$ nodes and $m$ edges. To address this scalability issue, we introduce Sparse PGCs (SPGCs), a new class of tractable generative models that operate directly on sparse graph representations, reducing the complexity to $\mathcal{O}(n + m)$, which is particularly beneficial for $m \ll n^2$. In the context of de novo drug design, we empirically demonstrate that SPGCs retain exact inference capabilities, improve memory efficiency and inference speed, and match the performance of intractable DGMs in key metrics.
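The complexity claim above can be made concrete with a small sketch contrasting the storage cost of a dense adjacency-matrix representation ($\mathcal{O}(n^2)$, as in PGCs) with a sparse node-plus-edge-list representation ($\mathcal{O}(n + m)$, as in SPGCs). The function names and the example graph size below are illustrative assumptions, not taken from the paper:

```python
def dense_size(n: int) -> int:
    """Entries stored by a dense adjacency matrix: one per ordered node pair."""
    return n * n

def sparse_size(n: int, m: int) -> int:
    """Entries stored by a sparse representation: a node list plus an edge list."""
    return n + m

# A molecule-sized graph, roughly the de novo drug design regime assumed here:
# 40 atoms (nodes) and 44 bonds (edges), so m << n^2.
n, m = 40, 44
print(dense_size(n))      # 1600 entries
print(sparse_size(n, m))  # 84 entries
```

For such sparse molecular graphs the dense representation stores roughly twenty times more entries, which is the scalability gap the abstract refers to.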

Martin Rektoris, Milan Papež, Václav Šmídl, Tomáš Pevný

Subject: Computing Technology, Computer Technology

Martin Rektoris, Milan Papež, Václav Šmídl, Tomáš Pevný. Sparse Probabilistic Graph Circuits [EB/OL]. (2025-08-11) [2025-08-24]. https://arxiv.org/abs/2508.07763.