Principles2Plan: LLM-Guided System for Operationalising Ethical Principles into Plans
📝 Abstract
Ethical awareness is critical for robots operating in human environments, yet existing automated planning tools provide little support. Manually specifying ethical rules is labour-intensive and highly context-specific. We present Principles2Plan, an interactive research prototype demonstrating how a human and a Large Language Model (LLM) can collaborate to produce context-sensitive ethical rules and guide automated planning. A domain expert provides the planning domain, problem details, and relevant high-level principles such as beneficence and privacy. The system generates operationalisable ethical rules consistent with these principles, which the user can review, prioritise, and supply to a planner to produce ethically-informed plans. To our knowledge, no prior system supports users in generating principle-grounded rules for classical planning contexts. Principles2Plan showcases the potential of human-LLM collaboration for making ethical automated planning more practical and feasible.
📄 Content
Principles2Plan: LLM-Guided System for Operationalising Ethical Principles into Plans

Tammy Zhong, Yang Song, Maurice Pagnucco
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
{tammy.zhong,yang.song1,morri}@unsw.edu.au

Introduction

The deployment of robots around people raises the challenge of ensuring that their actions achieve goals while respecting ethical principles. High-level ethical principles, such as beneficence, depend heavily on context. For example, in an autonomous vehicle scenario, a passenger needing urgent medical attention may justify taking an unauthorised shortcut to reach the hospital quickly, whereas for a leisure trip, following standard traffic rules may be preferable to avoid unnecessary risk. In both cases, the principle applies, yet the resulting actions differ.
This illustrates a key challenge: interpreting abstract ethical principles in real-world scenarios is nuanced, context-dependent, and often controversial, making fully automated ethical planning difficult. We aim to develop an interactive software platform, based on existing work, that encourages human-machine collaboration to interpret these principles in a given classical planning problem and generate plans that not only achieve goals but also consider the ethics of the plan that achieves them.

Copyright © 2026, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Computational Machine Ethics (CME) approaches are often divided into top-down, bottom-up, and hybrid approaches. Top-down methods (Vanderelst and Winfield 2018; Pagnucco et al. 2021; Grandi et al. 2023) specify rules or guidelines in advance, ensuring transparency but lacking adaptability. Bottom-up approaches (Jiang et al. 2025; Li, Cai, and Xiao 2025) rely on data to infer ethical behaviour, trading off interpretability for flexibility. Hybrid approaches (Allen, Smit, and Wallach 2005; Ramanayake and Nallur 2024) attempt to combine these strengths, but typically still require extensive manual effort to encode ethical rules or examples. Advances in large language models (LLMs) offer a practical means to reduce the manual effort of encoding such rules or examples, which we consider in a planning context.

Recent work has explored incorporating LLMs into automated planning in various ways (Pallagani et al. 2024). Beyond attempts to use LLMs to generate plans directly, they have been applied to facilitate planning processes, including model construction (Oswald et al. 2024), human–LLM collaboration (Wu, Ai, and Hsu 2023), and translation of natural language into structured languages (Ahn et al. 2022; Liu et al. 2023; Favier et al. 2025; Zhong et al. 2025).
Few contemporary studies leverage LLMs to support automated planning with explicit specifications (Favier et al. 2025; Zhong et al. 2025). Favier et al. (2025) use LLMs to decompose and encode general natural language constraints in PDDL3, while Zhong, Song, and Pagnucco (2026) translate high-level ethical principles into context-specific rules represented as action costs in PDDL. Although the latter targets ethics, an underexplored area in automated planning, it lacks a user-facing interface, which Favier et al. (2025) provide. We present Principles2Plan, a prototype that enables users to generate ethical plans. While prior work lies at the intersection of users, LLMs, and automated planning, no existing system supports collaborative human–LLM refinement and operationalisation of ethical principles. Principles2Plan addresses this gap by integrating an interactive interface with the pipeline introduced by Zhong, Song, and Pagnucco (2026).

Principles2Plan is a prototype that leverages LLMs and human oversight to incorporate ethical considerations into automated planning. Building on the human-in-the-loop pipeline introduced by Zhong, Song, and Pagnucco (2026), the system takes user input,
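To make the rule-as-action-cost encoding concrete, the fragment below is a minimal illustrative sketch of how a generated ethical rule (e.g. a privacy rule such as "avoid entering restricted rooms") could be reflected as an elevated action cost in a PDDL domain. The domain name, predicates, and cost values are invented for illustration and are not taken from the paper; the sketch assumes a planner supporting PDDL 3.1 `:action-costs`.

```pddl
;; Hypothetical sketch, not the paper's actual output.
;; An LLM-generated privacy rule is operationalised as a higher
;; action cost rather than a hard prohibition, so the planner can
;; still violate it when no ethical alternative exists.
(define (domain delivery-robot)
  (:requirements :strips :typing :negative-preconditions :action-costs)
  (:types room)
  (:predicates (at ?r - room)
               (connected ?from ?to - room)
               (restricted ?r - room))   ; rooms the privacy rule protects
  (:functions (total-cost))
  ;; Ordinary movement: unit cost.
  (:action move
    :parameters (?from ?to - room)
    :precondition (and (at ?from) (connected ?from ?to)
                       (not (restricted ?to)))
    :effect (and (not (at ?from)) (at ?to)
                 (increase (total-cost) 1)))
  ;; Ethically dispreferred movement: same physics, higher cost.
  (:action move-into-restricted
    :parameters (?from ?to - room)
    :precondition (and (at ?from) (connected ?from ?to)
                       (restricted ?to))
    :effect (and (not (at ?from)) (at ?to)
                 (increase (total-cost) 10))))
```

A companion problem file would declare `(:metric minimize (total-cost))`, so a cost-optimal planner trades off goal achievement against rule violations; under this reading, the priorities a user assigns in the interface would determine the relative cost magnitudes.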