Fully Interpretable Deep Learning Model of Transcriptional Control
Abstract
The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent work in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Because of the black-box nature of DNNs, such assumptions, while useful in practice, are unsatisfactory for scientific analysis. In this paper, we give an example of a DNN in which every layer is interpretable. Moreover, this DNN is biologically validated and predictive. We derive our DNN from a systems biology model that was not previously recognized as having a DNN structure. This DNN addresses a key unsolved biological problem: understanding the DNA regulatory code that controls how genes in multicellular organisms are turned on and off. Although we apply our DNN to data from the early embryo of the fruit fly Drosophila, this system serves as a testbed for the analysis of much larger data sets obtained by systems biology studies on a genomic scale.
Liu Yi, Barr Kenneth, Reinitz John
Department of Statistics, University of Chicago; Department of Human Genetics, University of Chicago; Departments of Statistics, Ecology and Evolution, Molecular Genetics & Cell Biology, Institute of Genomics and Systems Biology, University of Chicago
Subjects: Biological science research methods and techniques; Molecular biology; Current status and development of the biological sciences
Liu Yi, Barr Kenneth, Reinitz John. Fully Interpretable Deep Learning Model of Transcriptional Control [EB/OL]. (2025-03-28) [2025-04-27]. https://www.biorxiv.org/content/10.1101/655639.