National Preprint Platform

Training Neural Networks for Modularity aids Interpretability

Source: arXiv
English Abstract

An approach to improve network interpretability is via clusterability, i.e., splitting a model into disjoint clusters that can be studied independently. We find pretrained models to be highly unclusterable and thus train models to be more modular using an "enmeshment loss" function that encourages the formation of non-interacting clusters. Using automated interpretability measures, we show that our method finds clusters that learn different, disjoint, and smaller circuits for CIFAR-10 labels. Our approach provides a promising direction for making neural networks easier to interpret.
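The abstract describes penalizing interactions between clusters so that each cluster can be studied independently, but this page does not give the exact form of the enmeshment loss. As a minimal sketch of one plausible form (the function name, cluster assignments, and quadratic penalty are assumptions, not the paper's definition), one could penalize the squared magnitude of weights connecting units in different clusters:

```python
import numpy as np

def enmeshment_loss(W, in_clusters, out_clusters):
    """Hypothetical sketch: penalize weights connecting units in different
    clusters, encouraging non-interacting (modular) clusters.

    W            -- weight matrix of shape (n_out, n_in)
    in_clusters  -- cluster label per input unit
    out_clusters -- cluster label per output unit
    """
    # cross[i, j] is True where output unit i and input unit j
    # belong to different clusters
    cross = (np.asarray(out_clusters)[:, None]
             != np.asarray(in_clusters)[None, :])
    # Sum of squared cross-cluster weights; zero iff the layer is
    # perfectly block-diagonal under the given clustering.
    return float(np.sum((W * cross) ** 2))

# Example: a 4x4 layer split into two clusters of two units each.
# Only W[0, 2] = 0.5 crosses cluster boundaries, so the loss is 0.25.
W = np.array([[1.0, 1.0, 0.5, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
clusters = [0, 0, 1, 1]
loss = enmeshment_loss(W, clusters, clusters)  # → 0.25
```

In training, such a term would be added to the task loss so that gradient descent drives cross-cluster weights toward zero while the within-cluster blocks remain free to fit the data.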

Satvik Golechha, Dylan Cope, Nandi Schoots

Computing Technology; Computer Technology

Satvik Golechha, Dylan Cope, Nandi Schoots. Training Neural Networks for Modularity aids Interpretability [EB/OL]. (2025-07-26) [2025-08-10]. https://arxiv.org/abs/2409.15747.
