astra-langchain4j: Experiences Combining LLMs and Agent Programming
Given the emergence of Generative AI over the last two years and the increasing focus on Agentic AI as a form of Multi-Agent System, it is important to explore both how such technologies can affect the use of traditional agent toolkits and how the wealth of experience encapsulated in those toolkits can influence the design of new agentic platforms. This paper presents an overview of our experience developing a prototype large language model (LLM) integration for the ASTRA programming language. It gives a brief overview of the toolkit, followed by three example implementations, and concludes with a discussion of the experience gained from those examples.
💡 Research Summary
The paper “astra‑langchain4j: Experiences Combining LLMs and Agent Programming” reports on the design, implementation, and early evaluation of a library that bridges the ASTRA agent programming language with large language models (LLMs) via the Java‑based LangChain4J framework. The authors begin by motivating the work: the rapid rise of generative AI (ChatGPT, Gemini, etc.) has revived interest in agentic AI, yet traditional BDI‑style agent toolkits such as ASTRA lack built‑in support for LLM calls, prompt management, and retrieval‑augmented generation (RAG). To address this gap, they built astra‑langchain4j, a set of four ASTRA modules (OpenAI, Gemini, Template, BeliefRAG) and two helper agent classes (OpenAIAgent, GeminiAgent). The modules encapsulate LLM initialization, chat interaction, and template handling while preserving ASTRA’s modular architecture (sensors, actions, predicates, events).
The Template module supplies three concrete template types: PromptTemplate for constructing parameterised prompts, ResponseTemplate for extracting variables from LLM replies, and RAGTemplate for combining static prompt text with dynamically generated knowledge from the agent’s belief base. The BeliefRAG module translates ASTRA predicates (e.g., food(string)) into natural‑language snippets that are injected into prompts, enabling the agent to supply contextual facts without hard‑coding them. All classes are distributed via Maven Central, so developers can add a single dependency to any ASTRA project.
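To make the belief-to-text idea concrete, here is a minimal Java sketch of how a predicate such as food("pizza") might be rendered into a natural-language snippet for prompt injection. The class and method names (BeliefToText, render, knowledgeBlock) are hypothetical illustrations, not the actual BeliefRAG API described in the paper.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: turning simple belief-base predicates into natural-language
// snippets that can be spliced into an LLM prompt (hypothetical helper,
// not the library's real BeliefRAG module).
class BeliefToText {
    // A belief like food("pizza") modelled as a functor plus arguments.
    record Belief(String functor, List<String> args) {}

    // Render one belief as a short English sentence.
    static String render(Belief b) {
        return "The agent believes " + b.functor() + "("
                + String.join(", ", b.args()) + ").";
    }

    // Join the rendered beliefs into a knowledge block for a RAG-style prompt.
    static String knowledgeBlock(List<Belief> beliefs) {
        return beliefs.stream()
                .map(BeliefToText::render)
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        List<Belief> beliefs = List.of(
                new Belief("food", List.of("pizza")),
                new Belief("food", List.of("sushi")));
        System.out.println(knowledgeBlock(beliefs));
    }
}
```

A converter of this shape works well for flat factual predicates, which matches the paper's observation that richer structures (ontologies, time series) need more sophisticated belief-to-text strategies.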
Three demonstrators illustrate the library’s capabilities. The first is a “Travel Planner” inspired by Microsoft’s AutoGen example. Four specialist agents (Planner, Local, Language, Summary) are instantiated as subclasses of a generic AssistantAgent. They communicate through a RoundRobinGroupChat and follow the FIPA‑Request interaction protocol, with a Main orchestrator agent assigning tasks, aggregating responses, and producing a final itinerary. The second example, the “Joker Agent,” shows how a simple variable placeholder (${animal}) can be bound at runtime to generate a joke prompt, send it to an LLM, and print the answer. The third, the “Happy Agent,” demonstrates a Yes/No question template, where the LLM’s answer is parsed into an “answer” variable for downstream logic.
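The Joker Agent's ${animal} placeholder boils down to variable substitution in a prompt template. The following Java sketch illustrates that mechanism with a hypothetical PromptBinder helper; it is not ASTRA's actual template implementation, and it deliberately fails fast on a missing binding, one of the bug classes the authors highlight.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: binding ${name} placeholders in a prompt template at runtime,
// as in the Joker Agent example (hypothetical helper class).
class PromptBinder {
    private static final Pattern VAR = Pattern.compile("\\$\\{(\\w+)}");

    // Replace every ${name} with its bound value; throw if a variable
    // has no binding rather than silently emitting a broken prompt.
    static String bind(String template, Map<String, String> bindings) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = bindings.get(m.group(1));
            if (value == null)
                throw new IllegalArgumentException("Unbound variable: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(bind("Tell me a joke about a ${animal}.",
                Map.of("animal", "penguin")));
    }
}
```

The Happy Agent's Yes/No template is the mirror image: instead of substituting variables into the prompt, it extracts an "answer" variable from the LLM's reply for downstream reasoning.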
From these experiments the authors draw several practical insights. Modularity proved essential: by leveraging ASTRA’s existing module system, the LLM integration remained clean, reusable, and extensible to other models (e.g., Claude, LLaMA) with minimal code changes. Template‑based prompt construction dramatically improved readability and reusability, but it also introduced a new class of bugs related to missing or mismatched bindings; the authors recommend adding schema‑validation or unit‑test utilities for templates. Belief‑RAG worked well for simple factual predicates but struggled with richer knowledge representations such as ontologies or time‑series data, suggesting future work on more sophisticated belief‑to‑text converters. Performance and cost were non‑trivial concerns: each LLM call incurs network latency and token‑based pricing, so caching repeated queries and judicious model selection are necessary for scalable deployments. Finally, while the combination of FIPA‑Request and RoundRobinGroupChat provided a straightforward coordination pattern, handling asynchronous replies and time‑outs added complexity that could be mitigated by adopting a more event‑driven architecture.
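The caching recommendation above can be sketched as a thin memoizing wrapper around the model call. This is an illustrative design under assumed names (CachingModel, chat); the stub model stands in for a real network-backed LLM client so the example runs offline.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: caching repeated LLM queries to avoid duplicate network latency
// and token charges (hypothetical wrapper, not part of astra-langchain4j).
class CachingModel {
    private final Function<String, String> model; // the (expensive) LLM call
    private final Map<String, String> cache = new HashMap<>();
    int calls = 0; // how often the underlying model was actually invoked

    CachingModel(Function<String, String> model) {
        this.model = model;
    }

    // Return the cached completion for a prompt, invoking the model
    // only on the first occurrence of that exact prompt.
    String chat(String prompt) {
        return cache.computeIfAbsent(prompt, p -> {
            calls++;
            return model.apply(p);
        });
    }

    public static void main(String[] args) {
        // Stub model so the sketch runs without an API key.
        CachingModel m = new CachingModel(p -> "echo: " + p);
        m.chat("hello");
        m.chat("hello"); // served from cache, no second model call
        System.out.println("underlying calls: " + m.calls);
    }
}
```

Exact-match caching only pays off for verbatim repeated prompts; combining it with the model-selection point in the paper (routing cheap queries to smaller models) is a natural next step.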
In conclusion, astra‑langchain4j serves as a practical bridge between classic BDI agents and modern generative AI, offering a reusable set of patterns—modular LLM actions, parameterised prompt templates, and belief‑driven RAG—that can guide future agent platforms seeking to integrate LLMs. The paper outlines a roadmap for extending the approach: integrating ontology‑based belief extraction, multi‑model routing, cost‑aware caching, and richer interaction protocols such as FIPA‑Contract‑Net. These directions aim to deepen the synergy between symbolic agent reasoning and the probabilistic, language‑centric capabilities of today’s LLMs.