
Advances in LLMs with Focus on Reasoning, Adaptability, Efficiency and Ethics

Source: arXiv
Abstract

This survey paper outlines key developments in the field of Large Language Models (LLMs), including enhanced reasoning skills, adaptability to diverse tasks, improved computational efficiency, and the ability to make ethical decisions. The techniques that have been most effective in bridging the gap between human and machine communication include Chain-of-Thought prompting, Instruction Tuning, and Reinforcement Learning from Human Feedback. Advances in multimodal learning and few-shot or zero-shot techniques have further enabled LLMs to handle complex tasks with minimal input, while scaling and optimization techniques let them do more with less compute. This survey also offers a broader perspective on recent advancements in LLMs, going beyond isolated aspects such as model architecture or ethical concerns. It categorizes emerging methods that enhance LLM reasoning, efficiency, and ethical alignment, and it identifies underexplored areas such as interpretability, cross-modal integration, and sustainability. Despite recent progress, challenges such as high computational costs, biases, and ethical risks persist. Addressing these requires bias mitigation, transparent decision-making, and clear ethical guidelines. Future research will focus on enhancing models' ability to handle multiple input modalities, thereby making them more intelligent, safe, and reliable.
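To make the prompting techniques named above concrete, the sketch below contrasts zero-shot, few-shot, and Chain-of-Thought prompt construction for the same question. It is an illustrative example, not code from the paper: the question text, example demonstrations, and function names are assumptions, and the actual model call is omitted.

```python
# Illustrative sketch (not from the paper): how zero-shot, few-shot, and
# Chain-of-Thought prompts differ for the same arithmetic question.
# Only the prompt construction is shown; the LLM call itself is omitted.

QUESTION = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

def zero_shot(question: str) -> str:
    # No examples: the model must answer directly from the instruction.
    return f"Answer the question.\nQ: {question}\nA:"

def few_shot(question: str) -> str:
    # A couple of solved examples condition the model on the expected format.
    examples = (
        "Q: Apples cost $1 each. How much do 4 apples cost?\nA: $4\n"
        "Q: A bus ticket is $5. How much do 3 tickets cost?\nA: $15\n"
    )
    return f"{examples}Q: {question}\nA:"

def chain_of_thought(question: str) -> str:
    # The demonstration spells out intermediate steps, nudging the model
    # to reason step by step before stating the final answer.
    demo = (
        "Q: Apples cost $1 each. How much do 4 apples cost?\n"
        "A: Each apple costs $1, so 4 apples cost 4 * $1 = $4. The answer is $4.\n"
    )
    return f"{demo}Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    for name, builder in [("zero-shot", zero_shot),
                          ("few-shot", few_shot),
                          ("chain-of-thought", chain_of_thought)]:
        print(f"--- {name} ---\n{builder(QUESTION)}\n")
```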

Asifullah Khan, Muhammad Zaeem Khan, Saleha Jamshed, Sadia Ahmad, Aleesha Zainab, Kaynat Khatib, Faria Bibi, Abdul Rehman

Computing Technology; Computer Technology

Asifullah Khan, Muhammad Zaeem Khan, Saleha Jamshed, Sadia Ahmad, Aleesha Zainab, Kaynat Khatib, Faria Bibi, Abdul Rehman. Advances in LLMs with Focus on Reasoning, Adaptability, Efficiency and Ethics[EB/OL]. (2025-06-14)[2025-07-16]. https://arxiv.org/abs/2506.12365.
