Multi-Agent Coordinated Rename Refactoring
ABHIRAM BELLUR, University of Colorado, USA
MOHAMMED RAIHAN ULLAH, University of Colorado, USA
FRAOL BATOLE, Tulane University, USA
MOHIT KANSARA, University of Texas at Dallas, USA
MASAHARU MORIMOTO, NEC Corporation, Japan
KAI ISHIKAWA, NEC Corporation, Japan
HAIFENG CHEN, NEC Laboratories America, USA
YAROSLAV ZHAROV, JetBrains Research, Germany
TIMOFEY BRYKSIN, JetBrains Research, Cyprus
TIEN N. NGUYEN, University of Texas at Dallas, USA
HRIDESH RAJAN, Tulane University, USA
DANNY DIG, University of Colorado, USA
The primary value of AI agents in software development lies in their ability to extend the developer’s capacity
for reasoning and action, not to supplant human involvement. To showcase how to use agents working in
tandem with developers, we designed a novel approach for carrying out coordinated renaming. Coordinated
renaming, where a single rename refactoring triggers refactorings in multiple, related identifiers, is a frequent
yet challenging task. Developers must manually propagate these rename refactorings across numerous files
and contexts, a process that is both tedious and highly error-prone. State-of-the-art heuristic-based approaches
produce an overwhelming number of false positives, while vanilla Large Language Models (LLMs) provide
incomplete suggestions due to their limited context and inability to interact with refactoring tools. This leaves
developers with incomplete refactorings or burdens them with filtering too many false positives. Coordinated
renaming is exactly the kind of repetitive task for which agents can significantly reduce the developers’
burden while keeping them in the driver’s seat.
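As a concrete illustration of the coordination problem, consider a seed rename whose base name also appears as a token inside related identifiers, possibly under a different casing convention. The sketch below (all identifier names are invented for illustration; this is not the paper’s algorithm) propagates a seed rename to such related identifiers while preserving each identifier’s casing:

```python
import re

def propagate(seed_old: str, seed_new: str, identifiers: list[str]) -> dict[str, str]:
    """Propose renames for identifiers that embed the seed name as a token,
    preserving each identifier's casing (snake_case vs SCREAMING_SNAKE)."""
    pattern = re.compile(re.escape(seed_old), re.IGNORECASE)
    proposals = {}
    for ident in identifiers:
        # Skip the seed itself and identifiers unrelated to the seed name.
        if ident.lower() == seed_old.lower() or not pattern.search(ident):
            continue
        replacement = seed_new.upper() if ident.isupper() else seed_new
        proposals[ident] = pattern.sub(replacement, ident)
    return proposals

# Hypothetical seed rename: the developer renames fetch_user -> load_user.
renames = propagate("fetch_user", "load_user",
                    ["fetch_user_cached", "FETCH_USER_TIMEOUT",
                     "test_fetch_user", "parse_json"])
```

Here a single seed rename yields three coordinated proposals (`load_user_cached`, `LOAD_USER_TIMEOUT`, `test_load_user`) while leaving the unrelated `parse_json` untouched; the challenge the paper targets is doing this project-wide without the false positives that naive token matching produces.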
We designed, implemented, and evaluated the first multi-agent framework that automates coordinated
renaming. It operates on a key insight: a developer’s initial refactoring is a clue to infer the scope of related
refactorings. Our Scope Inference Agent first transforms this clue into an explicit, natural-language Declared
Scope. The Planned Execution Agent then uses this as a strict plan to identify program elements that should
undergo refactoring and safely executes the changes by invoking the IDE’s own trusted refactoring APIs.
Finally, the Replication Agent uses it to guide the project-wide search. We first conducted a formative
study on the practice of coordinated renaming in 609K commits in 100 open-source projects and surveyed 205
developers, and then we implemented these ideas into CoRenameAgent. In our rigorous, multi-methodology
evaluation of CoRenameAgent, we use two benchmarks. First, on an established benchmark that
contains 1349 renames, CoRenameAgent achieves a 2.3× F1-score improvement over the state-of-the-art.
Second, on our new, uncontaminated benchmark of 1573 recent renames, it demonstrates a 3.1× F1-score
improvement. By having CoRenameAgent’s automatically generated pull requests accepted into active
open-source repositories, we provide compelling evidence of its practical utility and potential adoption.
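The three-agent workflow described above can be mocked up as follows. Every name and interface here is illustrative (this chunk does not specify CoRenameAgent’s internal APIs), and the LLM-driven judgment of each agent is replaced by simple token matching:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SeedRename:
    old: str  # identifier the developer renamed, e.g. "fetch_user"
    new: str  # its new name, e.g. "load_user"

def scope_inference_agent(seed: SeedRename) -> str:
    """Turn the developer's seed rename (the 'clue') into an explicit,
    natural-language Declared Scope (stub: a template, not an LLM call)."""
    return (f"Rename every identifier that embeds the token "
            f"'{seed.old}' so that it uses '{seed.new}' instead.")

def planned_execution_agent(scope: str, seed: SeedRename,
                            candidates: list[str],
                            ide_rename: Callable[[str, str], None]) -> dict[str, str]:
    """Follow the Declared Scope as a strict plan: select the covered
    elements and apply each rename via the IDE's trusted refactoring API."""
    applied = {}
    for old in candidates:
        if seed.old in old:           # stand-in for scope-guided judgment
            new = old.replace(seed.old, seed.new)
            ide_rename(old, new)      # delegate the actual edit to the IDE
            applied[old] = new
    return applied

def replication_agent(scope: str, seed: SeedRename,
                      project_identifiers: list[str]) -> list[str]:
    """Use the Declared Scope to drive a project-wide search for further
    identifiers that should undergo the same coordinated rename."""
    return [i for i in project_identifiers if seed.old in i]

# Illustrative run against a fake IDE API that just records calls.
log: list[tuple[str, str]] = []
seed = SeedRename("fetch_user", "load_user")
scope = scope_inference_agent(seed)
applied = planned_execution_agent(scope, seed,
                                  ["fetch_user", "fetch_user_cached"],
                                  lambda o, n: log.append((o, n)))
extra = replication_agent(scope, seed, ["test_fetch_user", "parse_json"])
```

The key design choice this sketch preserves is that the agents never perform raw text edits themselves: the Planned Execution Agent hands each rename to the IDE’s refactoring API, which guarantees semantics-preserving changes.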
Authors’ Contact Information: Abhiram Bellur, University of Colorado, Boulder, USA, Abhiram.Bellur@colorado.edu;
Mohammed Raihan Ullah, University of Colorado, Boulder, USA, raihan.ullah@colorado.edu; Fraol Batole, Tulane University,
Tulane, USA; Mohit Kansara, University of Texas at Dallas, USA, mohit.kansara@utdallas.edu; Masaharu Morimoto, NEC
Corporation, Japan, m-morimoto@nec.com; Kai Ishikawa, NEC Corporation, Japan, k-ishikawa@nec.com; Haifeng Chen,
NEC Laboratories America, USA, haifeng@nec-labs.com; Yaroslav Zharov, JetBrains Research, Germany, yaroslav.zharov@
jetbrains.com; Timofey Bryksin, JetBrains Research, Cyprus, timofey.bryksin@jetbrains.com; Tien N. Nguyen, University of
Texas at Dallas, USA, tien.n.nguyen@utdallas.edu; Hridesh Rajan, Tulane University, USA, hrajan@tulane.edu; Danny Dig,
University of Colorado, USA, danny.dig@colorado.edu.
arXiv:2601.00482v1 [cs.SE] 1 Jan 2026
1 Introduction
Recent advances in Large Language Models and autonomous systems have fueled growing interest
in using software agents to transform software engineering. These agents act as proactive, goal-
directed collaborators and have shown early success in tasks such as code synthesis [13], test
generation [3, 14], documentation [16, 25], and automated program repair [7, 33, 53]. While these
results highlight the potential of agentic systems to augment developer productivity, they also raise
critical questions about the evolving role of the human developer. As agents take on more complex
tasks, what remains uniquely human in software development? How do we balance automation
with oversight, delegation with control, and augmentation with accountability? Defining this
boundary is key to designing agentic systems that empower rather than