Save GPU RAM Usage in Convolutional Layers to Load Huge Images
Image recognition models have evolved tremendously. Despite this progress on general images, histopathological images remain difficult targets. One reason is that histopathological images can be 100,000-200,000 px in height and width, which is often too large for a deep neural network to handle directly because GPU RAM is limited. Removing this obstacle would be a step forward for histopathological image analysis. In this study, we reduce the RAM consumption of a convolutional layer by allocating only the required data to the GPU, only when needed, and by splitting the computation channel by channel. This RAM Saving Convolutional layer (RSConv) can load larger images than a normal convolutional layer. The code is available at https://github.com/tand826/RAMSavingConv2d.
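The abstract does not show the mechanism in code, but the idea it describes (moving only the data needed for the current channel to the GPU and accumulating partial results off-device) can be sketched as follows. This is a minimal PyTorch sketch under those assumptions, not the RSConv implementation from the linked repository; the function name `ram_saving_conv2d` and its arguments are placeholders.

```python
import torch
import torch.nn.functional as F


def ram_saving_conv2d(x_cpu, weight, bias=None, stride=1, padding=0, device="cuda"):
    """Forward pass of a 2D convolution computed one input channel at a time.

    Only the current input channel and its matching weight slice are copied
    to the GPU; partial results are summed on the CPU, so peak GPU memory
    stays small even when the full image would not fit on the device.
    """
    c_in = x_cpu.shape[1]
    out_cpu = None
    for c in range(c_in):
        # Copy a single input channel and the matching weight slice to the GPU.
        x_c = x_cpu[:, c:c + 1].to(device)
        w_c = weight[:, c:c + 1].to(device)
        partial = F.conv2d(x_c, w_c, stride=stride, padding=padding).cpu()
        # Accumulate on the CPU and release the GPU copies immediately.
        out_cpu = partial if out_cpu is None else out_cpu + partial
        del x_c, w_c
        torch.cuda.empty_cache()
    if bias is not None:
        out_cpu = out_cpu + bias.view(1, -1, 1, 1).cpu()
    return out_cpu


# Hypothetical usage: the input image stays on the CPU and is processed
# channel by channel (image size chosen small here for illustration).
if __name__ == "__main__":
    conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
    x = torch.randn(1, 3, 4096, 4096)
    with torch.no_grad():
        y = ram_saving_conv2d(x, conv.weight, conv.bias, padding=1)
    print(y.shape)  # torch.Size([1, 16, 4096, 4096])
```

With this channel-wise split, the GPU only ever holds one input channel, one weight slice, and one partial output at a time, which is what allows inputs far larger than the device memory to be convolved; the trade-off is extra host-device transfers per channel.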
Ando Takumi
Subjects: biological science research methods and techniques; computing and computer technology
Ando Takumi. Save GPU RAM Usage in Convolutional Layers to Load Huge Images [EB/OL]. (2025-03-28) [2025-04-26]. https://www.biorxiv.org/content/10.1101/2023.09.19.558533