Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Modern Artificial Intelligence (AI) systems lack human-like consciousness or culpability, yet they exhibit fluid agency: behavior that is (i) stochastic (probabilistic and path-dependent), (ii) dynamic (co-evolving with user interaction), and (iii) adaptive (able to reorient across contexts). Fluid agency generates valuable outputs but collapses attribution, irreducibly entangling human and machine inputs. This fundamental unmappability fractures doctrines that assume traceable provenance – authorship, inventorship, and liability – yielding ownership gaps and moral “crumple zones.” This Article argues that only functional equivalence stabilizes doctrine. Where provenance is indeterminate, legal frameworks must treat human and AI contributions as equivalent for allocating rights and responsibility – not as a claim of moral or economic parity but as a pragmatic default. This principle stabilizes doctrine across domains, offering administrable rules: in copyright, vesting ownership in human orchestrators without parsing inseparable contributions; in patent, tying inventor-of-record status to human orchestration and reduction to practice, even when AI supplies the pivotal insight; and in tort, replacing intractable causation inquiries with enterprise-level and sector-specific strict or no-fault schemes. The contribution is both descriptive and normative: fluid agency explains why origin-based tests fail, while functional equivalence supplies an outcome-focused framework to allocate rights and responsibility when attribution collapses.


💡 Research Summary

The article argues that modern AI systems exhibit a form of “fluid agency” that is stochastic, dynamic, and adaptive, fundamentally blurring the line between human and machine contributions. This fluid agency makes it practically impossible to map specific elements of an output to either the human or the AI, a condition the authors term “unmappability.” Because traditional doctrines of copyright, patent, and tort law rely on the ability to trace a work or a harmful act back to a human author, inventor, or wrongdoer, they become destabilized when faced with AI systems that co‑determine goals, select sources, and restructure outputs without direct human oversight.

In copyright, the conventional contribution‑based analysis (which distinguishes joint authorship, derivative works, etc.) fails when an AI such as a Deep Research Agent autonomously decides which data to use, how to weight it, and how to organize the final report. The authors propose abandoning contribution‑based attribution in favor of a “human orchestration” rule: the human who initiates, directs, and ultimately curates the AI‑generated work is vested with ownership, while the AI’s internal processes are treated as a non‑separable tool. Transparency obligations on AI model and data usage are suggested to preserve public policy interests.

In patent law, the statutory requirements of conception and reduction‑to‑practice are challenged when AI supplies the core inventive insight while the human conducts experiments and files the application. The paper recommends that the human who orchestrates the AI and reduces the invention to practice be named the inventor, but that a mandatory disclosure of AI‑generated contributions be required in the patent file. This balances the need to preserve the human‑centric inventorship regime with the reality of AI‑driven innovation.

For tort liability, causation analysis becomes intractable because the AI’s autonomous decisions can be the proximate cause of harm, yet the human may have exercised only minimal supervision. The authors suggest replacing fault‑based causation with sector‑specific strict liability or no‑fault compensation schemes, thereby ensuring victims are promptly compensated while incentivizing firms to implement robust AI risk‑management practices.

The unifying solution is the principle of functional equivalence. Rather than asserting moral or ontological parity between humans and machines, functional equivalence treats human and AI contributions as interchangeable for the purpose of allocating rights and responsibilities when provenance cannot be reliably determined. This pragmatic default stabilizes legal outcomes across the three domains, reduces litigation, and accommodates rapid AI advancement. The authors acknowledge limits: the principle applies only when AI operates under human control; fully autonomous AI or AI that claims personhood would require distinct regulatory frameworks. They also call for international harmonization to avoid jurisdictional conflicts.

Overall, the paper provides a comprehensive diagnosis of how fluid agency undermines origin‑based legal doctrines and offers a concrete, cross‑disciplinary framework—functional equivalence—to fill the resulting doctrinal gaps in copyright, patent, and tort law.

