Enhancing Citizen-Government Communication with AI: Evaluating the Impact of AI-Assisted Interactions on Communication Quality and Satisfaction
This study integrates critical AI scholarship with relational communication theories to explain how AI language modifications shape the quality of government-citizen communication. Distinguishing between informational-cognitive quality (clarity, ease of response) and expressive-constitutive quality (politeness, respectfulness, feeling heard, trust, urgency, empathy), we hypothesize that AI yields uncontested benefits for the former but contested effects for the latter, potentially enhancing relational markers while muting authentic emotional cues. Using a vignette-based survey with 220 citizens and 214 civil servants in China, we assess perceptions across five interaction contexts: service requests, policy inquiries, complaints, suggestions, and emergencies. Results from paired t-tests and mixed-effects regressions show that AI enhances both informational-cognitive and expressive-constitutive quality from the perspectives of citizens and civil servants, with significant improvements in clarity, politeness, satisfaction, trust, and empathy; however, effects on perceived urgency were inconsistent, and the transmission of negative emotional cues was attenuated. These findings suggest that concerns over algorithmic emotional flattening may be overstated or context-specific; they offer theoretical insights into AI-mediated public interactions and practical implications for fostering trust and efficiency in digital governance.
💡 Research Summary
The article investigates how artificial‑intelligence‑driven language modification influences the quality of written interactions between governments and citizens. Drawing on relational communication theory, the authors distinguish two dimensions of communication quality: informational‑cognitive quality (clarity, ease of response) and expressive‑constitutive quality (politeness, respect, feeling heard, trust, urgency, empathy). They hypothesize that AI will unequivocally improve the informational‑cognitive dimension, while its impact on the expressive‑constitutive dimension may be mixed—potentially boosting markers such as politeness and trust but possibly dampening authentic emotional cues like urgency or grievance.
To test these ideas, the researchers designed a vignette‑based survey in China covering five realistic government‑citizen scenarios: service requests, policy inquiries, complaints, suggestions, and emergencies. Each vignette was presented in two versions—one with AI‑generated language modifications (using large language models to enhance clarity, tone, and politeness) and one without. A total of 220 citizens and 214 civil servants evaluated each version on a seven‑point scale for clarity, politeness, trust, empathy, urgency, and overall satisfaction. Demographic variables were collected to control for potential confounds.
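The paper does not disclose which model or prompt produced the AI-modified vignette versions (a limitation the authors note below), but a minimal sketch of the manipulation might look like the following, assuming an OpenAI-style chat API. The model name, prompt wording, and `ai_modify` helper are illustrative assumptions, not the authors' actual procedure.

```python
# Illustrative sketch of producing an AI-modified vignette version.
# The paper does not specify the model, prompt, or provider, so every
# name below is an assumption for demonstration purposes only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REWRITE_PROMPT = (
    "Rewrite the following citizen message to a government office so that "
    "it is clearer, more polite, and easier to respond to, while keeping "
    "its factual content unchanged."
)

def ai_modify(message: str, model: str = "gpt-4o-mini") -> str:
    """Return an AI-modified version of a vignette message."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REWRITE_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

original = "My heating has been broken for a week and nobody answers!"
print(ai_modify(original))  # the polished version shown to respondents
```

Pairing each original message with its rewritten counterpart in this way yields the two vignette versions that respondents rated side by side.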
Statistical analysis employed paired t‑tests to compare AI‑assisted and non‑AI messages, followed by mixed‑effects regression models that accounted for individual‑level random effects and scenario‑level fixed effects. Results consistently showed that AI‑modified messages were rated higher in clarity and politeness. Citizens reported greater trust and perceived empathy toward government responses that had been AI‑enhanced, while civil servants found citizen messages that had undergone AI modification to be clearer, more respectful, and easier to address. Satisfaction scores rose on both sides of the exchange. However, the effect on perceived urgency was not statistically reliable; in some cases, AI‑modified texts appeared to blunt the sense of immediacy. Similarly, while overall empathy scores improved, the transmission of negative emotions (e.g., frustration in complaints) was attenuated, suggesting a smoothing of affective intensity.
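As a concrete illustration of this two-step analysis, the sketch below runs a paired t-test on AI versus non-AI ratings and then fits a mixed-effects model with respondent-level random intercepts and scenario fixed effects, using pandas, SciPy, and statsmodels. The column names (`respondent_id`, `scenario`, `ai_modified`, `rating`) are hypothetical, since the paper does not publish its analysis code.

```python
# Hypothetical reconstruction of the reported analysis: paired t-tests,
# then a mixed-effects regression with respondent random intercepts and
# scenario fixed effects. All column names are assumptions.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Long format: one row per respondent x scenario x version.
df = pd.read_csv("ratings.csv")  # respondent_id, scenario, ai_modified (0/1), rating

# 1) Paired t-test: each respondent rated both versions of each vignette.
wide = df.pivot_table(
    index=["respondent_id", "scenario"],
    columns="ai_modified",
    values="rating",
)
t_stat, p_value = stats.ttest_rel(wide[1], wide[0])
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# 2) Mixed-effects regression: random intercept per respondent,
#    fixed effects for the five interaction scenarios.
model = smf.mixedlm(
    "rating ~ ai_modified + C(scenario)",
    data=df,
    groups=df["respondent_id"],
).fit()
print(model.summary())
```

Fitting this model separately for each outcome (clarity, politeness, trust, empathy, urgency, satisfaction) would reproduce the pattern the authors report: a positive coefficient on the AI indicator for most outcomes, but an unreliable one for urgency.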
The authors interpret these findings as evidence that AI can simultaneously raise informational efficiency and certain relational cues without necessarily eroding all emotional content. They acknowledge that critical AI scholarship warns of algorithmic flattening of dissent and urgency, yet their empirical data indicate that such flattening is context‑specific rather than universal. The paper discusses theoretical implications for the study of mediated public communication, emphasizing the need to view AI not merely as an automation tool but as a third‑party interlocutor that reshapes meaning, tone, and perceived legitimacy.
Limitations include the exclusive focus on Chinese urban respondents, which may limit cross-cultural generalizability; the artificial nature of vignette scenarios, which may not capture the full dynamics of real-time AI-mediated exchanges; and the lack of transparency regarding the specific language-modification algorithms used, which hampers reproducibility. Proposed directions for future research include multi-national comparative studies, analysis of actual chatbot interaction logs, and deeper investigation into how AI algorithms modulate negative affect and urgency signals.
Practically, the study suggests that public administrations should adopt AI language‑enhancement tools with attention to both clarity and emotional nuance. Training programs for civil servants could incorporate AI‑assisted drafting to improve response efficiency while preserving a sense of empathy. Moreover, policymakers should ensure algorithmic transparency and incorporate mechanisms to retain authentic expressions of urgency or grievance, thereby balancing efficiency gains with democratic responsiveness.