When Your Boss Is an AI Bot: Exploring Opportunities and Risks of Manager Clone Agents in the Future Workplace
As Generative AI (GenAI) becomes increasingly embedded in the workplace, managers are beginning to create Manager Clone Agents – AI-powered digital surrogates trained on their work communications and decision patterns to perform managerial tasks on their behalf. To investigate this emerging phenomenon, we conducted six design fiction workshops (n = 23) with managers and workers, in which participants co-created speculative scenarios and discussed how Manager Clone Agents might transform collaborative work. We identified four potential roles that participants envisioned for Manager Clone Agents: proxy presence, informational conveyor, productivity engine, and leadership amplifier, while highlighting concerns spanning individual, interpersonal, and organizational levels. We provide design recommendations envisioned by both parties for integrating Manager Clone Agents responsibly into the future workplace, emphasizing the need to prioritize workers’ perspectives and nurture interpersonal bonds while also anticipating alternative futures that may disrupt managerial hierarchies.
💡 Research Summary
This paper investigates the emerging phenomenon of Manager Clone Agents—AI‑powered digital surrogates that replicate a manager's appearance, communication style, decision‑making patterns, and even voice—to perform managerial tasks on the manager's behalf. Recognizing that such agents differ from traditional algorithmic management tools by combining symbolic authority with functional delegation, the authors conducted six design‑fiction workshops with 23 participants (a mix of managers and workers) to explore envisioned opportunities, risks, and design directions. Participants co‑created speculative future scenarios in which Manager Clone Agents were widely adopted, then reflected on the implications. The study yields three main findings.

First, participants identified four primary roles for these agents:
1. Proxy Presence – attending meetings, presentations, or site visits to maintain a manager's visible presence despite time or location constraints;
2. Informational Conveyor – automatically gathering, summarizing, and disseminating information across hierarchical layers, reducing information asymmetry;
3. Productivity Engine – handling routine approvals, report generation, scheduling, and other repetitive tasks, freeing managers for strategic work;
4. Leadership Amplifier – reproducing a manager's coaching tone, feedback style, and cultural messaging to ensure consistent leadership across the team.

Second, the workshops uncovered multi‑level concerns. At the individual level, managers feared loss of accountability and personal identity, while workers worried about being evaluated by an impersonal algorithm and about job security. At the interpersonal level, participants highlighted fragile trust, diminished authenticity, and the erosion of informal social interaction when an AI mediates communication. At the organizational level, efficiency gains were seen as potentially flattening hierarchies, weakening belonging, and reshaping power dynamics in ways that could marginalize human managers.

Third, participants proposed concrete design recommendations to mitigate these risks:
(a) Worker‑Centric Design – make the agent's capabilities and limits transparent, and provide channels for continuous employee feedback;
(b) Clear Boundary Setting – delineate which decisions the agent may make autonomously, and require human oversight for high‑stakes, emotionally charged situations (e.g., performance reviews, conflict resolution);
(c) Gradual Trust Building – start with low‑risk use cases such as meeting summarization or schedule coordination, evaluate outcomes, and expand scope only after demonstrable benefits and trust are established;
(d) Explainability and Accountability – embed mechanisms that let users see how the agent arrived at a recommendation, preserving a sense of control;
(e) Exploration of Alternative Futures – deliberately consider radical scenarios (e.g., fully AI‑driven organizations, de‑hierarchized structures) to avoid locking design into existing power hierarchies.

The authors position their contribution within HCI and CSCW literature, extending prior work on algorithmic management, collaborative robots, and AI clones by emphasizing the unique blend of symbolic authority and functional automation that Manager Clone Agents embody. By providing an empirically grounded vision of both the promise and the perils of such agents, the paper offers a roadmap for responsible design that safeguards human relational dynamics while leveraging generative AI's productivity potential.