
Self-reflecting Large Language Models: A Hegelian Dialectical Approach

Source: arXiv
Abstract

Investigating NLP through a philosophical lens has recently caught researchers' eyes, as it bridges computational methods with classical schools of philosophy. This paper introduces a philosophical framework inspired by the Hegelian Dialectic to enable LLMs' self-reflection, utilizing a self-dialectical approach to emulate internal critiques and synthesize new scientific ideas (spanning domains such as mathematics, physics, and more). Additionally, we explore the effect of generation temperature in LLMs by introducing a dynamic annealing approach, which encourages creativity in the early stages and gradually focuses on refinement and nuance, as well as a constant-temperature strategy. Furthermore, we implement a Multi-Agent Majority Voting (MAMV) strategy to assess the validity and novelty of the generated ideas, which proves useful in the absence of domain experts. We also evaluate the effectiveness of our method in generating novel scientific ideas and improving LLMs' reasoning capabilities. Our experiments demonstrate promising results in ideation, along with significant improvements in mathematical and symbolic reasoning.
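To make the described workflow concrete, the following is a minimal Python sketch of the three ingredients the abstract names: a thesis-antithesis-synthesis self-reflection loop, a temperature schedule annealed from exploratory to focused, and a multi-agent majority vote over validity and novelty. The `generate` stub, the prompt wording, the linear annealing schedule, and the vote threshold are illustrative assumptions, not the authors' implementation.

from collections import Counter

def generate(prompt: str, temperature: float) -> str:
    # Hypothetical placeholder for an LLM call; swap in any chat/completion API.
    # Returns a canned string here so the sketch runs end to end.
    return f"[response at T={temperature:.2f}] {prompt[:40]}..."

def anneal(t_start: float, t_end: float, step: int, n_steps: int) -> float:
    # Linearly decay temperature: creative early rounds, refinement later.
    return t_start + (t_end - t_start) * step / max(n_steps - 1, 1)

def dialectical_refine(topic: str, n_rounds: int = 3,
                       t_start: float = 1.2, t_end: float = 0.3) -> str:
    # Thesis -> antithesis (self-critique) -> synthesis, repeated n_rounds times.
    thesis = generate(f"Propose a novel idea about: {topic}", t_start)
    for i in range(n_rounds):
        temp = anneal(t_start, t_end, i, n_rounds)
        antithesis = generate(
            f"Critique this idea and point out flaws:\n{thesis}", temp)
        thesis = generate(
            f"Synthesize an improved idea that resolves the critique.\n"
            f"Idea: {thesis}\nCritique: {antithesis}", temp)
    return thesis

def majority_vote(idea: str, n_agents: int = 5) -> bool:
    # Ask several independent judge calls whether the idea is valid and novel,
    # then accept it if most answers are affirmative.
    votes = []
    for _ in range(n_agents):
        verdict = generate(
            f"Answer yes or no: is this idea valid and novel?\n{idea}", 0.7)
        votes.append("yes" in verdict.lower())
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    idea = dialectical_refine("prime gaps and symbolic reasoning")
    print("Accepted by majority vote:", majority_vote(idea))
    print(idea)

In a real setup, `generate` would wrap an actual model call; the paper additionally compares this annealed schedule against a constant-temperature strategy, which here would amount to fixing `t_start == t_end`.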

Sara Abdali, Can Goksen, Michael Solodko, Julie E. Maybee, Kazuhito Koishida, Saeed Amizadeh

Subjects: Science, Scientific Research; Mathematics; Physics; Information Dissemination, Knowledge Dissemination; Cultural Theory

Sara Abdali, Can Goksen, Michael Solodko, Julie E. Maybee, Kazuhito Koishida, Saeed Amizadeh. Self-reflecting Large Language Models: A Hegelian Dialectical Approach [EB/OL]. (2025-06-23) [2025-07-16]. https://arxiv.org/abs/2501.14917.
