Bonsai: Interpretable Tree-Adaptive Grounded Reasoning
To develop general-purpose collaborative agents, humans need reliable AI systems that can (1) adapt to new domains and (2) transparently reason with uncertainty to allow for verification and correction. Black-box models demonstrate powerful data processing abilities but do not satisfy these criteria due to their opaqueness, domain specificity, and lack of uncertainty awareness. We introduce Bonsai, a compositional and probabilistic reasoning system that generates adaptable inference trees by retrieving relevant grounding evidence and using it to compute likelihoods of sub-claims derived from broader natural language inferences. Bonsai's reasoning power is tunable at test time via evidence scaling, and it demonstrates reliable handling of varied domains, including transcripts, photographs, videos, audio, and databases. Question-answering and human alignment experiments demonstrate that Bonsai matches the performance of domain-specific black-box methods while generating interpretable, grounded, and uncertainty-aware reasoning traces.
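The abstract describes decomposing a broad natural language inference into sub-claims, scoring each against retrieved evidence, and composing the resulting likelihoods over a tree. A minimal sketch of that idea is below; it is not the paper's method. The node structure, the word-overlap evidence scorer, and the product (conjunction) combination rule are all hypothetical simplifications standing in for the learned components the paper would use.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One claim in the inference tree; leaves are evidence-scored sub-claims."""
    claim: str
    likelihood: float = 1.0
    children: List["Node"] = field(default_factory=list)

def score_against_evidence(claim: str, evidence: List[str]) -> float:
    """Toy stand-in for an entailment model: fraction of evidence snippets
    sharing at least one word with the claim."""
    words = set(claim.lower().split())
    if not evidence:
        return 0.5  # uninformative prior when no evidence is retrieved
    hits = sum(bool(words & set(e.lower().split())) for e in evidence)
    return hits / len(evidence)

def evaluate(node: Node, evidence: List[str]) -> float:
    """Leaves get evidence-based likelihoods; internal nodes combine their
    children as a conjunction (product), one simple compositional choice."""
    if not node.children:
        node.likelihood = score_against_evidence(node.claim, evidence)
    else:
        p = 1.0
        for child in node.children:
            p *= evaluate(child, evidence)
        node.likelihood = p
    return node.likelihood

# Hypothetical example: a video claim split into two sub-claims.
tree = Node("the video shows a dog catching a frisbee", children=[
    Node("a dog appears in the video"),
    Node("a frisbee is caught"),
])
evidence = ["frame 12: a dog runs across a lawn",
            "frame 30: the dog catches a frisbee mid-air"]
print(evaluate(tree, evidence))
```

Because every node stores its own likelihood, the tree doubles as an interpretable reasoning trace: a reader can inspect which sub-claim was weakly supported and by which evidence, which is the verification-and-correction property the abstract emphasizes.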
Kate Sanders, Benjamin Van Durme
Computing Technology, Computer Technology
Kate Sanders, Benjamin Van Durme. Bonsai: Interpretable Tree-Adaptive Grounded Reasoning [EB/OL]. (2025-04-04) [2025-05-01]. https://arxiv.org/abs/2504.03640.