From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms for Dignified Human-AI Interaction

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

In the future of work discourse, AI is touted as the ultimate productivity amplifier. Yet, beneath the efficiency gains lie subtle erosions of human expertise and agency. This paper shifts focus from the future of work to the future of workers by navigating the AI-as-Amplifier Paradox: AI’s dual role as enhancer and eroder, simultaneously strengthening performance while eroding underlying expertise. We present a year-long study on the longitudinal use of AI in a high-stakes workplace among cancer specialists. Initial operational gains hid “intuition rust”: the gradual dulling of expert judgment. These asymptomatic effects evolved into chronic harms, such as skill atrophy and identity commoditization. Building on these findings, we offer a framework for dignified Human-AI interaction co-constructed with professional knowledge workers facing AI-induced skill erosion without traditional labor protections. The framework operationalizes sociotechnical immunity through dual-purpose mechanisms that serve institutional quality goals while building worker power to detect, contain, and recover from skill erosion, and to preserve human identity. Evaluated across healthcare and software engineering, our work takes a foundational step toward dignified human-AI interaction futures by balancing productivity with the preservation of human expertise.


💡 Research Summary

The paper tackles a paradox that has received little attention in the “future of work” discourse: while AI dramatically boosts productivity, it can simultaneously erode the expertise, intuition, and professional identity of knowledge workers. The authors label this the AI‑as‑Amplifier Paradox, where AI acts as both enhancer and eroder. To illuminate the paradox, they conducted a year‑long, mixed‑methods longitudinal study of an AI‑assisted radiation‑oncology treatment‑planning system in a high‑stakes medical setting. Forty‑two clinicians (radiation oncologists, physicists, dosimetrists) participated in 24 in‑depth interviews, five workshops, and 52 think‑aloud sessions, complemented by usage logs.

Early in the deployment, participants praised faster plan generation and improved dosimetric metrics. However, after several months a subtle “intuition rust” emerged: clinicians began approving plans more quickly, relied heavily on AI suggestions, and reported a loss of hands‑on skills and a sense of becoming “bystanders” in their own practice. The authors describe these changes as asymptomatic harms—behavioral shifts invisible to standard performance dashboards that later solidify into chronic harms such as skill atrophy and identity commoditization.

From these observations the authors derive two key mechanisms behind the paradox. First, the immediate performance gains create a strong reinforcement loop that encourages over‑reliance. Second, as AI becomes embedded in workflow, the meta‑cognitive monitoring that professionals normally apply erodes, leading to a gradual drift in judgment quality. This perspective extends prior work on automation bias and over‑trust by focusing on long‑term, hidden deterioration rather than short‑term errors.

To counteract the paradox, the paper proposes a Sociotechnical Immunity framework, operationalized through three layers:

  1. Early‑Warning Signals – real‑time monitoring of AI usage frequency, decision latency, and deviation from baseline expert patterns to detect early signs of skill erosion.
  2. Containment Actions – a “Social Transparency” interface that surfaces not only AI predictions but also contextual information about who is responsible for each decision, the underlying clinical rationale, and organizational constraints. This nudges users to re‑engage critical thinking.
  3. Recovery Routines – scheduled expert re‑training, simulation‑based skill refreshers, and intentional “AI‑off” periods to rebuild hands‑on competence and restore professional confidence.
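To make the Early-Warning layer concrete, the monitoring it describes can be sketched as drift detection against a clinician's own baseline. The sketch below is an illustrative assumption, not the authors' implementation: the function names (`drift_score`, `erosion_alerts`), the z-score heuristic, and the ±2σ threshold are all hypothetical choices; the paper specifies only the signals to watch (usage frequency, decision latency, deviation from baseline expert patterns).

```python
from statistics import mean, stdev

def drift_score(baseline, recent):
    """Z-score of the recent mean against the baseline distribution."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return (mean(recent) - mu) / sigma

def erosion_alerts(baseline_latency, recent_latency,
                   baseline_accept, recent_accept,
                   threshold=2.0):
    """Flag metrics whose recent behavior drifts beyond `threshold`
    standard deviations from the worker's own historical baseline.
    (Hypothetical heuristic; thresholds would need clinical tuning.)"""
    alerts = []
    # Collapsing review latency can signal rubber-stamping of AI plans.
    if drift_score(baseline_latency, recent_latency) < -threshold:
        alerts.append("decision latency collapsing")
    # A climbing acceptance rate can signal growing over-reliance.
    if drift_score(baseline_accept, recent_accept) > threshold:
        alerts.append("AI acceptance rate climbing")
    return alerts
```

A clinician whose plan-review time drops from roughly five minutes to under two, while their AI-acceptance rate jumps from ~50% to ~95%, would trip both alerts; stable recent behavior trips none. The point of baselining per worker, rather than against a population norm, is that the harms the paper describes are asymptomatic at the aggregate level: dashboards stay green while individual judgment drifts.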

The framework was piloted in the radiation‑oncology setting, where after six months of implementation participants reported a statistically significant reduction in self‑perceived skill decline (≈22 % improvement) and a measurable drop in AI dependence (≈15 % reduction). Importantly, treatment quality metrics remained stable, demonstrating that productivity need not be sacrificed. A second validation in an AI‑assisted software‑engineering team showed similar benefits: higher code‑review accuracy, increased developer satisfaction, and reduced reliance on code‑completion tools.

The authors enumerate three main contributions: (1) empirical evidence of long‑term, asymptomatic AI harms in a high‑stakes domain; (2) the conceptualization of the AI‑as‑Amplifier Paradox; and (3) a practical, cross‑domain framework for dignified Human‑AI interaction that embeds safeguards for expertise preservation. They acknowledge limitations, including the focus on a single medical specialty and one software team, and call for broader studies across industries and cultural contexts.

In conclusion, the paper argues that future AI deployments must move beyond “keeping humans in the loop” toward dignified Human‑AI interaction, where AI amplifies human work without silently degrading the very expertise it relies on. The proposed sociotechnical immunity approach offers a concrete pathway for organizations to monitor, contain, and recover from hidden harms, thereby protecting worker dignity and ensuring that productivity gains are sustainable over the long term.

