A Multi-Agent Academic Writing Assistant Integrated into the Paper Editor
📝 Abstract
Large language models are increasingly embedded into academic writing workflows, yet existing assistants remain external to the editor, preventing deep interaction with document state, structure, and revision history. This separation makes it impossible to support agentic, context-aware operations directly within LaTeX editors such as Overleaf. We present PaperDebugger, an in-editor, multi-agent, plugin-based academic writing assistant that brings LLM-driven reasoning directly into the writing environment. Enabling such in-editor interaction is technically non-trivial: it requires reliable bidirectional synchronization with the editor, fine-grained version control and patching, secure state management, multi-agent scheduling, and extensible communication with external tools. PaperDebugger addresses these challenges through a Chrome-approved extension, a Kubernetes-native orchestration layer, and a Model Context Protocol (MCP) toolchain that integrates literature search, reference lookup, document scoring, and revision pipelines. Our demo showcases a fully integrated workflow, including localized edits, structured reviews, parallel agent execution, and diff-based updates, encapsulated within a minimal-intrusion user interface (UI). Early aggregated analytics demonstrate active user engagement and validate the practicality of an editor-native, agentic writing assistant. More details about this demo and a video can be found at https://github.com/PaperDebugger/PaperDebugger.

CCS Concepts: • Human-centered computing → Interactive systems and tools; • Computing methodologies → Natural language processing; • Information systems → Information retrieval.
📄 Content
PaperDebugger: A Plugin-Based Multi-Agent System for In-Editor Academic Writing, Review, and Editing

Junyi Hou (e0945797@u.nus.edu), National University of Singapore, Singapore, Singapore
Andre Lin Huikai (andre_lin@u.nus.edu), National University of Singapore, Singapore, Singapore
Nuo Chen (nuochen@u.nus.edu), National University of Singapore, Singapore, Singapore
Yiwei Gong (imwithye@gmail.com), Independent, Singapore, Singapore
Bingsheng He (dcsheb@nus.edu.sg), National University of Singapore, Singapore, Singapore
Keywords: In-editor writing assistance; LLM agents; Multi-agent orchestration; Overleaf integration; Academic writing tools

This work is licensed under a Creative Commons Attribution 4.0 International License. WWW Companion ’26, April 13–17, 2026, Dubai, United Arab Emirates. © 2026 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-2308-7/2026/04. https://doi.org/10.1145/3774905.3793122

ACM Reference Format: Junyi Hou, Andre Lin Huikai, Nuo Chen, Yiwei Gong, and Bingsheng He. 2026. PaperDebugger: A Plugin-Based Multi-Agent System for In-Editor Academic Writing, Review, and Editing. In Companion Proceedings of the ACM Web Conference 2026 (WWW Companion ’26), April 13–17, 2026, Dubai, United Arab Emirates. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3774905.3793122

1 Introduction

Large language models (LLMs) are increasingly used to assist academic writing workflows, from brainstorming and outlining to line-level editing and reviewer-response drafting. Recent systems for human–AI co-writing and assisted feedback demonstrate that LLM-based suggestions can improve fluency and reduce mechanical writing effort at scale [3, 4]. Research on writing-support tools further shows that structured interventions can meaningfully improve writing efficiency and user experience [2, 5, 6].

Despite these developments, the majority of existing tools still operate outside the primary editing environment, requiring copy-and-paste workflows and fragmenting interaction history [3, 7]. This external workflow introduces context switching, breaks writing flow, and makes revision history difficult to preserve.
Additionally, external tools provide limited revision provenance: feedback, applied changes, and reasoning disappear once the interaction window closes. Tools such as Writefull provide in-editor language suggestions but remain largely surface-level, offering limited transparency into applied changes [8].

To address these challenges, we present PaperDebugger, an in-editor LLM agent system that integrates directly into Overleaf, a widely used academic writing editor. Instead of treating the writing process and model interaction as separate workflows, the system enables critique, refinement, and revision to take place in context, inline, and tied to document structure. The system provides persistent interaction history, patch-based edits, and structure-aware feedback while preserving the continuity of writing. Technically, it is implemented as a Chrome extension that communicates with a Kubernetes-based backend using gRPC. The Model Context Protocol (MCP) acts as a lightweight extensibility layer, enabling modular functionality and future agent capabilities without altering the core architecture. PaperDebugger is fully implemented with over 24,000 lines of
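To illustrate the patch-based editing model the paper describes, the sketch below shows one way localized, diff-based edits could be applied to an in-editor document buffer. This is a minimal illustration under stated assumptions, not PaperDebugger's actual API: the `Patch` shape, `applyPatches` function, and the offset-based representation are all hypothetical names introduced here for exposition.

```typescript
// Illustrative sketch of patch-based, localized edits (hypothetical API;
// not the real PaperDebugger implementation).

interface Patch {
  start: number; // character offset where the replacement begins
  end: number;   // character offset where the replacement ends (exclusive)
  text: string;  // replacement text proposed by an agent
}

// Apply non-overlapping patches to a document string. Patches are applied
// from the end of the document backward so earlier offsets remain valid
// after each replacement.
function applyPatches(doc: string, patches: Patch[]): string {
  const ordered = [...patches].sort((a, b) => b.start - a.start);
  let result = doc;
  for (const p of ordered) {
    result = result.slice(0, p.start) + p.text + result.slice(p.end);
  }
  return result;
}

// Example: a localized edit that fixes a typo while leaving the rest of
// the document untouched, mirroring the diff-based update workflow.
const source = "We present PaperDebuger, an in-editor assistant.";
const fixed = applyPatches(source, [
  { start: 11, end: 23, text: "PaperDebugger" },
]);
```

Applying patches back-to-front is a common trick for offset-based edit lists: it avoids recomputing offsets after each replacement, which matters when an agent proposes several localized edits against one snapshot of the document.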