From Language to Action: Can LLM-Based Agents Be Used for Embodied Robot Cognition?

Shinas Shaji*,1,3, Fabian Huppertz*,1, Alex Mitrevski2, and Sebastian Houben1,3
1Institute of AI and Autonomous Systems (A2S), Hochschule Bonn-Rhein-Sieg, Germany
2Division for Systems and Control, Chalmers University of Technology, Sweden
3Fraunhofer Institute for Intelligent Analysis and Information Systems, Germany
*Corresponding author
[Figure: Architecture diagram]

Abstract

In order to act flexibly in an everyday environment, a robotic agent needs a variety of cognitive capabilities that enable it to reason about plans and perform execution recovery. Large language models (LLMs) have been shown to exhibit emergent cognitive abilities, such as reasoning and language understanding; however, controlling embodied robotic agents requires reliably bridging high-level language to low-level functionalities for perception and control. In this paper, we investigate the extent to which an LLM can serve as a core component for planning and execution reasoning in a cognitive robot architecture. For this purpose, we propose a cognitive architecture in which an agentic LLM serves as the core component for planning and reasoning, while working and episodic memory components support learning from experience and adaptation. An instance of the architecture is then used to control a mobile manipulator in a simulated household environment, where environment interaction is performed through a set of high-level tools for perception, reasoning, navigation, grasping, and placement, all of which are made available to the LLM-based agent. We evaluate our proposed system on two household tasks (object placement and object swapping) that test the agent's reasoning, planning, and memory utilisation. The results demonstrate that the LLM-driven agent can complete structured tasks and exhibits emergent adaptation and memory-guided planning, but they also reveal significant limitations, such as hallucinations about task success and poor instruction following, for instance refusing to acknowledge and complete sequential tasks. These findings highlight both the potential and the challenges of employing LLMs as embodied cognitive controllers for autonomous robots.

BibTeX

@inproceedings{shaji_huppertz_mitrevski_houben2026,
    author    = {Shaji, Shinas and Huppertz, Fabian and Mitrevski, Alex and Houben, Sebastian},
    title     = {{From Language to Action: Can LLM-Based Agents Be Used for Embodied Robot Cognition?}},
    booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
    year      = {2026}
}