Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU
In today's era of rapid technological advancement, artificial intelligence (AI) applications require large-scale, high-performance, and data-intensive computations, leading to significant energy demands. Addressing this challenge necessitates a combined approach involving both hardware and software innovations. Hardware manufacturers are developing new, efficient, and specialized solutions, with the RISC-V architecture emerging as a prominent player due to its open, extensible, and energy-efficient instruction set architecture (ISA). Simultaneously, software developers are creating new algorithms and frameworks, yet their energy efficiency often remains unclear. In this study, we conduct a comprehensive benchmark analysis of machine learning (ML) applications on the 64-core SOPHON SG2042 RISC-V architecture. We specifically analyze the energy consumption of deep learning inference models across three leading AI frameworks: PyTorch, ONNX Runtime, and TensorFlow. Our findings show that frameworks using the XNNPACK back-end, such as ONNX Runtime and TensorFlow, consume less energy compared to PyTorch, which is compiled with the native OpenBLAS back-end.
Giulio Malenza, Francesco Targa, Adriano Marques Garcia, Marco Aldinucci, Robert Birke
Computing Technology, Computer Technology
Giulio Malenza, Francesco Targa, Adriano Marques Garcia, Marco Aldinucci, Robert Birke. Exploring energy consumption of AI frameworks on a 64-core RV64 Server CPU [EB/OL]. (2025-04-03) [2025-04-30]. https://arxiv.org/abs/2504.03774.