"These cameras are just like the Eye of Sauron": A Sociotechnical Threat Model for AI-Driven Smart Home Devices as Perceived by UK-Based Domestic Workers

"These cameras are just like the Eye of Sauron": A Sociotechnical Threat Model for AI-Driven Smart Home Devices as Perceived by UK-Based Domestic Workers

The growing adoption of AI-driven smart home devices has introduced new privacy risks for domestic workers (DWs), who are frequently monitored in employers’ homes while also using smart devices in their own households. We conducted semi-structured interviews with 18 UK-based DWs and performed a human-centered threat modeling analysis of their experiences through the lens of Communication Privacy Management (CPM). Our findings extend existing threat models beyond abstract adversaries and single-household contexts by showing how AI analytics, residual data logs, and cross-household data flows shaped the privacy risks faced by participants. In employer-controlled homes, AI-enabled features and opaque, agency-mediated employment arrangements intensified surveillance and constrained participants’ ability to negotiate privacy boundaries. In their own homes, participants had greater control as device owners but still faced challenges, including gendered administrative roles, opaque AI functionalities, and uncertainty around data retention. We synthesize these insights into a sociotechnical threat model that identifies DW agencies as institutional adversaries and maps AI-driven privacy risks across interconnected households, and we outline social and practical implications for strengthening DW privacy and agency.


💡 Research Summary

This paper investigates privacy threats posed by AI‑driven smart home devices to domestic workers (DWs) in the United Kingdom, focusing on the dual contexts in which these workers operate: employer‑controlled homes and their own residences. The authors conducted semi‑structured interviews with 18 DWs who regularly interact with AI‑enabled cameras and voice assistants in both settings. Using Communication Privacy Management (CPM) theory as an analytical lens, they examined how DWs perceive ownership of personal information, set privacy rules, and experience “privacy turbulence” when those rules are violated.

In employer‑controlled homes, AI cameras perform advanced analytics such as motion detection, facial recognition, crying detection, and behavioral inference, while continuously storing video and metadata in cloud services. Device configuration, data‑retention policies, and surveillance norms are dictated by employers and, indirectly, by domestic‑worker agencies that mediate contracts and dispute resolution. The employer (and, by extension, the agency) thus effectively becomes the sole “owner” of the DW’s data, leaving the worker with little control and exposed to frequent boundary violations. The study also describes “orphaned” DW profiles: data that persists beyond the employment relationship and exposes workers to ongoing monitoring and potential misuse of their personal information.

In contrast, DWs have greater agency in their own homes: they can purchase, install, disable, or avoid devices. Nevertheless, the authors found that AI functionalities remain opaque (e.g., background voice processing, long‑term memory features) and that gendered expectations often place women in the role of household technology manager, limiting true autonomy. Moreover, coping strategies learned in employer homes—such as physically covering cameras or using code words to evade voice assistants—transfer to personal settings, illustrating a cross‑household migration of both risk perception and mitigation tactics.

A key contribution of the work is the reconceptualization of domestic‑worker agencies as institutional adversaries. Agencies shape surveillance practices through vague contract clauses, limited training, and a lack of transparent data‑handling procedures, thereby normalising or amplifying AI‑driven monitoring. The authors also map cross‑household data flows, showing how logs and analytic outputs generated in an employer’s home can be accessed, stored, or repurposed in a worker’s personal environment, creating a networked threat landscape that transcends a single dwelling.

Building on these findings, the paper proposes a sociotechnical threat model that integrates context (employer vs. personal home), concrete technical threats (AI analytics, residual logs, cross‑household data traces), protective strategies (device disabling, privacy‑by‑design settings, legal recourse), and reflections on power asymmetries. Practical recommendations target device manufacturers (transparent AI explanations, data minimisation), platform providers (user‑centric privacy controls), DW agencies (clear privacy clauses, training on device use, independent grievance mechanisms), and policymakers (legislation that recognises DWs as a vulnerable class and imposes duties on employers and agencies to limit invasive monitoring).

Overall, the study expands the literature on smart‑home privacy by foregrounding a marginalized user group, demonstrating how AI‑enhanced devices reshape surveillance dynamics, and highlighting the need for multi‑level interventions that address both technical design and structural power imbalances.
