
Learning curves theory for hierarchically compositional data with power-law distributed features

Source: arXiv

Abstract

Recent theories suggest that Neural Scaling Laws arise whenever the task is linearly decomposed into power-law distributed units. Alternatively, scaling laws also emerge when data exhibit a hierarchically compositional structure, as is thought to occur in language and images. To unify these views, we consider classification and next-token prediction tasks based on probabilistic context-free grammars -- probabilistic models that generate data via a hierarchy of production rules. For classification, we show that having power-law distributed production rules results in a power-law learning curve with an exponent depending on the rules' distribution and a large multiplicative constant that depends on the hierarchical structure. By contrast, for next-token prediction, the distribution of production rules controls the local details of the learning curve, but not the exponent describing the large-scale behaviour.
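
To make the generative setup concrete, here is a minimal Python sketch of the kind of data model the abstract describes: a probabilistic context-free grammar of fixed depth in which every nonterminal expands through one of a small set of production rules drawn with power-law (Zipf-like) probabilities. All parameter values, names, and the random construction of the rules are illustrative assumptions, not the authors' exact grammar.

```python
# Illustrative sketch (not the authors' exact model): sample strings from a
# fixed-depth probabilistic context-free grammar whose production rules are
# chosen with power-law (Zipf-like) probabilities. All parameters are assumed.
import random

V = 8        # number of symbols per level (assumed)
S = 2        # branching factor: lower-level symbols per rule (assumed)
M = 8        # production rules per nonterminal (assumed)
DEPTH = 4    # depth of the hierarchy (assumed)
ALPHA = 1.5  # power-law exponent of the rule distribution (assumed)

rng = random.Random(0)

# Power-law weights over the M rules: p(k) proportional to k ** (-ALPHA).
weights = [k ** -ALPHA for k in range(1, M + 1)]

# Random production rules: each (level, symbol, rule index) maps to a
# tuple of S symbols of the next level down.
rules = {
    (level, sym, k): tuple(rng.randrange(V) for _ in range(S))
    for level in range(DEPTH)
    for sym in range(V)
    for k in range(M)
}

def expand(symbol: int, level: int) -> list[int]:
    """Recursively expand `symbol` into the S ** (DEPTH - level) leaves below it."""
    if level == DEPTH:
        return [symbol]
    # Pick a production rule with power-law probability.
    k = rng.choices(range(M), weights=weights, k=1)[0]
    out = []
    for child in rules[(level, symbol, k)]:
        out.extend(expand(child, level + 1))
    return out

# One sentence of S ** DEPTH tokens generated from root symbol 0.
print(expand(0, 0))
```

In such a model, the abstract's classification result says the learning curve decays as a power law whose exponent depends on the rule distribution (governed here by ALPHA), while the hierarchical structure enters through a large multiplicative constant; for next-token prediction, the rule distribution shapes only the local details of the curve, not its large-scale exponent.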

Francesco Cagnetta, Hyunmo Kang, Matthieu Wyart

Subject: Linguistics

Francesco Cagnetta, Hyunmo Kang, Matthieu Wyart. Learning curves theory for hierarchically compositional data with power-law distributed features [EB/OL]. (2025-05-11) [2025-06-18]. https://arxiv.org/abs/2505.07067.
