NL2CA: Auto-formalizing Cognitive Decision-Making from Natural Language Using an Unsupervised CriticNL2LTL Framework

Reading time: 5 minutes

📝 Original Info

  • Title: NL2CA: Auto-formalizing Cognitive Decision-Making from Natural Language Using an Unsupervised CriticNL2LTL Framework
  • ArXiv ID: 2512.18189
  • Date: 2025-12-20
  • Authors: Zihao Deng, Yijia Li, Renrui Zhang, Peijun Ye

📝 Abstract

Cognitive computing models offer a formal and interpretable way to characterize human deliberation and decision-making, yet their development remains labor-intensive. In this paper, we propose NL2CA, a novel method for auto-formalizing cognitive decision-making rules from natural language descriptions of human experience. Unlike most related work, which relies on either purely manual or human-guided interactive modeling, our method is fully automated without any human intervention. The approach first translates text into Linear Temporal Logic (LTL) using a fine-tuned large language model (LLM), then refines the logic via an unsupervised Critic Tree, and finally transforms the output into executable production rules compatible with symbolic cognitive frameworks. Based on the resulting rules, a cognitive agent is further constructed and optimized through cognitive reinforcement learning on real-world behavioral data. Our method is validated in two domains: (1) NL-to-LTL translation, where our CriticNL2LTL module achieves consistent performance across both expert and large-scale benchmarks without human-in-the-loop feedback, and (2) cognitive driving simulation, where agents automatically constructed from human interviews successfully learned the diverse decision patterns of about 70 trials across different critical scenarios. Experimental results demonstrate that NL2CA enables scalable, interpretable, and human-aligned cognitive modeling from unstructured textual data, offering a novel paradigm for automatically designing symbolic cognitive agents.
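To make concrete what "translating text into LTL" produces, here is a minimal, hypothetical sketch of evaluating a small LTL fragment over a finite trace. The formula encoding (nested tuples) and the example propositions are illustrative assumptions, not the paper's representation:

```python
# Hypothetical sketch: evaluating a small LTL fragment over a finite trace.
# The tuple-based formula encoding and proposition names are illustrative,
# not taken from the paper.

def holds(formula, trace, i=0):
    """Check an LTL formula at position i of a finite trace.

    trace: list of sets of atomic propositions, e.g. [{"obstacle"}, {"brake"}].
    formula: nested tuples, e.g. ("G", ("->", "obstacle", ("X", "brake"))).
    """
    if isinstance(formula, str):                      # atomic proposition
        return formula in trace[i]
    op = formula[0]
    if op == "!":
        return not holds(formula[1], trace, i)
    if op == "&":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "->":
        return (not holds(formula[1], trace, i)) or holds(formula[2], trace, i)
    if op == "X":                                     # next (false at trace end)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "G":                                     # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "F":                                     # finally / eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# "Whenever an obstacle appears, brake at the next step."
spec = ("G", ("->", "obstacle", ("X", "brake")))
trace = [{"obstacle"}, {"brake"}, set()]
print(holds(spec, trace))  # True on this trace
```

A sentence like "whenever an obstacle appears, brake immediately afterwards" thus becomes a checkable temporal formula rather than free text, which is what makes the subsequent unsupervised refinement and rule extraction possible.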

💡 Deep Analysis

Figure 1

📄 Full Content

NL2CA: Auto-formalizing Cognitive Decision-Making from Natural Language Using an Unsupervised CriticNL2LTL Framework

Zihao Deng1,2, Yijia Li1,2, Renrui Zhang1,2, Peijun Ye1*
1Institute of Automation, Chinese Academy of Sciences
2School of Artificial Intelligence, University of Chinese Academy of Sciences
{dengzihao2025, liyijia2023, zhangrenrui2024, peijun.ye}@ia.ac.cn
*Corresponding author. Copyright © 2026, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Cognitive architectures such as ACT-R (Anderson and Lebiere 2014) and Soar (Laird 2022) are rule-based models that can simulate various cognitive tasks such as memory, problem solving, and learning. These architectures are widely used in human behavior modeling thanks to their faithful simulation of human cognitive processes and their high interpretability. However, they require extensive expert effort to instantiate and depend almost exclusively on human experts' prior knowledge to design.

Large language models (LLMs) such as GPT-4 (Achiam et al. 2023) and DeepSeek (Liu et al. 2024), on the other hand, have shown promising capabilities in imitating human reasoning and behavior after post-training on corpora with human feedback (Kumar et al. 2025). Even though LLMs are still criticized for hallucination (Rawte et al. 2023) and diminishing returns from scaling (Villalobos et al. 2024), their knowledge of human cognitive processes can still be used to automate the instantiation of decision-making models within cognitive architectures (Kirk, Wray, and Laird 2023; Niu et al. 2024; Wray, Kirk, and Laird 2025).

The core components of a cognitive architecture are the declarative memory and the procedural memory, the latter represented by a set of production rules, each with a precondition and an effect. Cognitive agents operate in perceive–plan–act cycles, dynamically matching environmental features against these rules to determine their actions. Some studies already generate production rules automatically using LLMs (Zhu and Simmons 2024; Kirk et al. 2024), but almost all previous work generates rules by directly querying the LLM about how the agent should act in an unknown situation.
Such approaches construct cognitive models without human prior knowledge or data, which can lead to relatively poor alignment with actual human behavior. In this work, we propose a novel approach for cognitive agent construction: NL2CA (Natural Language to Cognitive Agent), which leverages LLMs to formalize production rules from natural language descriptions of human experiences. As shown in Figure 1, the LLM is used to interpret textual descriptions of human deliberations and convert them into structured representations. Thus, the LLM serves not merely as a rule generator, but also as a knowledge extractor. The generated production rules are grounded both in the human experience document and in the LLM, so they are expected to be more human-aligned.

Realizing this vision requires the LLM to have both the capability of providing correct knowledge and the capability of formalizing production rules from human experience documents. Since the former is already well demonstrated in previous work, our work focuses mainly on improving the LLM's capability to formalize production rules from human experience documents.
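The abstract states that the refined LTL output is finally transformed into executable production rules. One plausible shape for that last step, for the common "response" pattern G(p -> X q), is a direct syntactic mapping from trigger to action; the pattern matching and the rule format below are assumptions for illustration, not the paper's implementation:

```python
# Hypothetical sketch of the final NL2CA stage: turning a refined LTL formula
# of the response pattern G(p -> X q) into a precondition/effect rule.
# The tuple encoding and rule format are assumptions, not the paper's code.

def ltl_to_rule(formula):
    """Map G(p -> X q) to a production-rule dict {precondition, effect}."""
    if (isinstance(formula, tuple) and formula[0] == "G"
            and isinstance(formula[1], tuple) and formula[1][0] == "->"
            and isinstance(formula[1][2], tuple) and formula[1][2][0] == "X"):
        p = formula[1][1]            # triggering condition
        q = formula[1][2][1]         # action taken on the next step
        return {"precondition": p, "effect": q}
    raise ValueError("only the G(p -> X q) pattern is handled in this sketch")

rule = ltl_to_rule(("G", ("->", "obstacle_ahead", ("X", "brake"))))
print(rule)  # {'precondition': 'obstacle_ahead', 'effect': 'brake'}
```

The appeal of routing through LTL rather than emitting rules directly is that the intermediate formula can be checked and refined (here, by the unsupervised Critic Tree) before it is committed to executable form.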


Reference

This content is AI-processed based on open access ArXiv data.
