Bootstrapping Human-Like Planning via LLMs
Robot end users increasingly require accessible means of specifying tasks for robots to perform. Two common end-user programming paradigms are drag-and-drop interfaces and natural language programming. Although natural language interfaces harness an intuitive form of human communication, drag-and-drop interfaces enable users to meticulously and precisely dictate the key actions of the robot's task. In this paper, we investigate the degree to which these two approaches can be combined. Specifically, we construct a large language model (LLM)-based pipeline that accepts natural language as input and produces human-like action sequences as output, specified at the level of granularity that a human would use. We then compare these generated action sequences against a dataset of hand-specified action sequences. Our results reveal that although larger models tend to outperform smaller ones in the production of human-like action sequences, smaller models nonetheless achieve satisfactory performance.
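
The abstract does not detail the pipeline itself; as a rough sketch of the general idea, the Python fragment below shows one way an LLM could be prompted to decompose a natural-language task into a fixed vocabulary of drag-and-drop-style action primitives. The action vocabulary, prompt wording, and the call_llm hook are illustrative assumptions, not details taken from the paper.

    # A minimal sketch (not the authors' pipeline): prompt an LLM to turn a
    # natural-language task into a sequence of action primitives of the kind
    # a drag-and-drop interface would expose. All names here are hypothetical.
    from typing import Callable, List

    # Hypothetical action vocabulary a drag-and-drop tool might offer.
    ACTION_VOCAB = ["navigate_to", "pick_up", "place", "open", "close", "speak"]

    PROMPT_TEMPLATE = (
        "Decompose the task below into a numbered sequence of robot actions, "
        "one per line, using only these primitives: {vocab}.\n"
        "Task: {task}\nActions:"
    )

    def plan(task: str, call_llm: Callable[[str], str]) -> List[str]:
        """Prompt the LLM, then parse its reply into a list of actions."""
        prompt = PROMPT_TEMPLATE.format(vocab=", ".join(ACTION_VOCAB), task=task)
        reply = call_llm(prompt)
        actions = []
        for line in reply.splitlines():
            line = line.strip().lstrip("0123456789. ")  # drop "1. "-style numbering
            if line and any(line.startswith(a) for a in ACTION_VOCAB):
                actions.append(line)
        return actions

    if __name__ == "__main__":
        # Stubbed model reply for demonstration; a real LLM client would go here.
        fake_llm = lambda _: "1. navigate_to(kitchen)\n2. pick_up(mug)\n3. place(mug, table)"
        print(plan("Bring the mug from the kitchen to the table", fake_llm))

Keeping the model behind a plain callable makes it straightforward to swap the stubbed reply for a real LLM client of any size, which mirrors the paper's comparison of larger and smaller models on the same task.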
David Porfirio, Vincent Hsiao, Morgan Fine-Morris, Leslie Smith, Laura M. Hiatt
Subject areas: computing technology; computer technology; automation technology; automation equipment
David Porfirio, Vincent Hsiao, Morgan Fine-Morris, Leslie Smith, Laura M. Hiatt. Bootstrapping Human-Like Planning via LLMs [EB/OL]. (2025-06-27) [2025-07-17]. https://arxiv.org/abs/2506.22604.