
BitHEP -- The Limits of Low-Precision ML in HEP

Source: arXiv

Abstract

The increasing complexity of modern neural network architectures demands fast and memory-efficient implementations to mitigate computational bottlenecks. In this work, we evaluate the recently proposed BitNet architecture in HEP applications, assessing its performance in classification, regression, and generative modeling tasks. Specifically, we investigate its suitability for quark-gluon discrimination, SMEFT parameter estimation, and detector simulation, comparing its efficiency and accuracy to state-of-the-art methods. Our results show that while BitNet consistently performs competitively in classification tasks, its performance in regression and generation varies with the size and type of the network, highlighting key limitations and potential areas for improvement.
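The BitNet architecture evaluated here replaces full-precision weight matrices with ternary {-1, 0, +1} weights plus a per-tensor scale (the "b1.58" scheme). As a minimal illustrative sketch only, not the authors' implementation, the core absmean quantization and a hypothetical `bitlinear_forward` layer might look like:

```python
import numpy as np

def quantize_ternary(w, eps=1e-5):
    """Quantize a weight tensor to {-1, 0, +1} times a per-tensor scale,
    following the absmean scheme described for BitNet b1.58.
    Sketch under assumptions; names are hypothetical."""
    scale = max(np.mean(np.abs(w)), eps)          # per-tensor absmean scale
    w_q = np.clip(np.round(w / scale), -1, 1)     # ternarize
    return w_q, scale

def bitlinear_forward(x, w, b=None):
    """Forward pass of a hypothetical BitLinear layer: the weight
    matrix is ternarized before the matrix multiply, so the multiply
    reduces to additions/subtractions of scaled inputs."""
    w_q, scale = quantize_ternary(w)
    y = x @ (w_q.T * scale)
    return y + b if b is not None else y
```

In training, such layers are typically paired with a straight-through estimator so gradients flow through the full-precision shadow weights; the sketch above shows only the inference-time arithmetic.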

Claudius Krause, Daohan Wang, Ramon Winterhalder

Subjects: computing and computer technology; theory of the natural sciences; research methods of the natural sciences

Claudius Krause, Daohan Wang, Ramon Winterhalder. BitHEP -- The Limits of Low-Precision ML in HEP [EB/OL]. (2025-04-04) [2025-05-01]. https://arxiv.org/abs/2504.03387.
