Toward a Human-AI Task Tensor: A Taxonomy for Organizing Work in the Age of Generative AI
We introduce a framework for understanding the impact of generative AI on human work, which we call the human-AI task tensor. A tensor is a structured framework that organizes tasks along multiple interdependent dimensions. Our human-AI task tensor introduces a systematic approach to studying how humans and AI interact to perform tasks, and has eight dimensions: task definition, AI integration, interaction modality, audit requirement, output definition, decision-making authority, AI structure, and human persona. After describing the eight dimensions of the tensor, we provide illustrative frameworks (derived from projections of the tensor) and a human-AI task canvas that provide analytical tractability and practical insight for organizational decision-making. We demonstrate how the human-AI task tensor can be used to organize emerging and future research on generative AI. We propose that the human-AI task tensor offers a starting point for understanding how work will be performed with the emergence of generative AI.
💡 Research Summary
The paper introduces the “human‑AI task tensor,” a multidimensional taxonomy designed to structure and analyze how generative AI and humans collaborate on work tasks. Recognizing that generative AI differs from prior technologies in its unstructured inputs, multimodal outputs, and apparent anthropomorphic qualities, the authors argue that a systematic framework is needed to make sense of the rapidly expanding set of possible human‑AI interactions.
The tensor consists of eight interrelated dimensions, each capturing a distinct aspect of a task:
- Task Definition – the degree to which a task is well‑structured versus ill‑defined, drawing on classic problem‑structuring literature (Simon 1973; Rittel & Webber 1973). Well‑defined tasks have clear criteria and a bounded solution space, making them more amenable to automation.
- AI Integration – whether AI acts as a substitute for human effort or as a complement that augments human capabilities. The authors reference labor‑market studies that show both substitution risk and complementary productivity gains.
- Interaction Modality – the channel through which humans and AI exchange information, ranging from purely digital (text, image, audio) to physical/embodied interfaces (robotics, AR, haptic devices). This dimension anticipates future expansion of AI beyond the digital realm.
- Audit Requirement – the extent of process and output oversight needed by the receiving agent. The authors distinguish between “black‑box” interactions that require little scrutiny and those that demand detailed verification of both the algorithmic process and the final result, linking the concept to scalable oversight (Amodei et al., 2016).
- Output Definition – whether the task’s deliverable is well‑defined (e.g., a numeric report) or ill‑defined (e.g., a strategic mind map). Well‑defined outputs are easier to evaluate and audit, while ill‑defined outputs often need human interpretation.
- Decision‑Making Authority – which party (human or AI) holds the final decision rights. The authors map this onto a 20‑level “Task Augmentation/Automation Scale,” illustrating a continuum from full human control with AI assistance to full AI autonomy with minimal human input.
- AI Structure – the maturity of the AI system itself, ranging from “genesis” (experimental, custom architectures) to “utility” (standardized, modular, production‑grade components). This dimension informs governance choices about standardization versus bespoke solutions.
- Human Persona – characteristics of the human participant (ability, expertise, experience). The paper notes that AI benefits can be skewed toward either lower‑skill or higher‑skill individuals depending on context, echoing recent empirical findings.
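One way to make the eight dimensions concrete is to treat a task as a point in the tensor's coordinate space. The sketch below is purely illustrative: the paper describes the dimensions qualitatively, so the 0.0–1.0 scales, field names, thresholds, and the example task are our assumptions, not the authors' operationalization.

```python
from dataclasses import dataclass

# Illustrative only: numeric scales and field names are assumptions;
# the paper defines these dimensions qualitatively.
@dataclass
class TaskProfile:
    task_definition: float        # 0 = ill-defined, 1 = well-defined
    ai_integration: float         # 0 = pure substitute, 1 = pure complement
    interaction_modality: float   # 0 = purely digital, 1 = fully embodied
    audit_requirement: float      # 0 = black-box acceptable, 1 = full verification
    output_definition: float      # 0 = ill-defined, 1 = well-defined
    decision_authority: int       # 1-20 on the Augmentation/Automation Scale
    ai_structure: float           # 0 = genesis, 1 = utility
    human_persona: float          # 0 = novice, 1 = expert

def automation_candidate(t: TaskProfile) -> bool:
    """Crude heuristic: a well-defined task with a well-defined output,
    low audit burden, and high AI decision authority is a candidate for
    automation. The cutoff values are arbitrary illustrations."""
    return (t.task_definition > 0.7 and t.output_definition > 0.7
            and t.audit_requirement < 0.3 and t.decision_authority >= 15)

# A hypothetical routine back-office task, scored by hand:
invoice_coding = TaskProfile(0.9, 0.2, 0.1, 0.2, 0.95, 18, 0.8, 0.3)
print(automation_candidate(invoice_coding))  # True
```

A richer treatment would model interactions among dimensions rather than independent thresholds, which is exactly the formalization the paper lists as future work.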
To make the high‑dimensional tensor tractable, the authors propose several two‑dimensional projections that serve as practical tools:
- Human‑AI Task Canvas – a visual matrix that lets practitioners plot a specific task across the eight dimensions, facilitating strategic planning and risk assessment.
- AI Function Matrix – combines AI Integration (substitute vs complement) with Output Definition (well‑defined vs ill‑defined) to identify six functional roles AI can play: production, idea generation, assistance, editing, explanation, and open‑ended interaction.
- Task Augmentation/Automation Scale – the 20‑step hierarchy of decision‑making authority, clarifying how much autonomy is granted to AI at each stage.
- Task Audit Matrix – classifies tasks into four types (open exchange, verifiable application, process exploration, expert application) based on the need for process and output auditing.
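The Task Audit Matrix lends itself to a simple 2×2 lookup on two questions: does the process need auditing, and does the output need auditing? The pairing of the four quadrant labels to audit needs below is our reading of the names, not a mapping spelled out in this summary.

```python
def classify_task(process_audit: bool, output_audit: bool) -> str:
    """Sketch of the Task Audit Matrix as a 2x2 lookup.
    Assumed mapping: no auditing -> open exchange; output-only ->
    verifiable application; process-only -> process exploration;
    both -> expert application."""
    matrix = {
        (False, False): "open exchange",
        (False, True):  "verifiable application",
        (True,  False): "process exploration",
        (True,  True):  "expert application",
    }
    return matrix[(process_audit, output_audit)]

# A task whose result can be checked without inspecting how it was produced:
print(classify_task(False, True))  # verifiable application
```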
The paper demonstrates how the tensor can be used to organize existing research on generative AI, mapping studies onto specific dimensions to reveal gaps and overlaps. It also outlines a research agenda: (a) developing quantitative measures for each dimension, (b) empirically validating the tensor with case studies across industries, and (c) creating dynamic updating mechanisms to keep the taxonomy current as AI capabilities evolve.
Strengths of the work include its comprehensive scope, clear linkage to established theory, and the provision of concrete visual tools that bridge academic insight and managerial practice. Limitations are acknowledged: the dimensions are described qualitatively, lacking precise metrics; interactions among dimensions are not formally modeled; and the framework does not yet incorporate temporal dynamics (e.g., how a task may shift from “well‑defined” to “ill‑defined” as AI learns).
Overall, the human‑AI task tensor offers a valuable first step toward a shared language for discussing, researching, and governing the complex landscape of generative AI‑augmented work. By structuring tasks along eight well‑defined axes, it equips scholars, policymakers, and business leaders with a roadmap for assessing where AI can add value, where oversight is essential, and how human roles may evolve in the age of generative AI.