Axiomatic Choice
People care both about decision outcomes and about how decisions get made, whether they are making decisions or reflecting on them. But formalizing the full range of normative concerns that drive decisions is an open challenge. We introduce Axiomatic Choice as a framework for making and evaluating decisions based on formal normative statements about decisions. These statements, or axioms, capture a wide array of desiderata, such as ethical constraints, that go beyond the typical treatment in Social Choice. Using our model of axioms and decisions, we define key properties and introduce a taxonomy of axioms that may be of general interest. We then use these properties and our taxonomy to define the Decision-Evaluation Paradox, formalize the concepts of transparency and deception in explaining and justifying decisions, and reveal the limits of existing methods that use axioms to make decisions.
💡 Research Summary
The paper introduces “Axiomatic Choice,” a general framework that extends beyond the traditional set of axioms used in social choice theory to capture both the outcomes of decisions and the processes by which they are made. The authors formalize decisions as triples (profile x, rule f, outcome y), where X is a set of profiles, Y a set of outcomes, and F the set of deterministic functions from X to Y. The set D = X × F × Y contains every possible decision. An axiom A is a binary predicate on D that partitions decisions into those that satisfy the axiom (value 1) and those that violate it (value 0). Any axiom can be uniquely identified by the subset L ⊆ D of decisions that satisfy it.
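The model above can be instantiated in a few lines. This sketch is ours: the concrete profiles, outcomes, and the example axiom are illustrative placeholders; only the structure D = X × F × Y and the view of an axiom as a subset L ⊆ D come from the paper.

```python
from itertools import product

# Illustrative instantiation; the names x1, y1, etc. are placeholders.
X = ("x1", "x2")   # profiles
Y = ("y1", "y2")   # outcomes

# F: all deterministic rules X -> Y, encoded as tuples of outcomes,
# where f[i] is the outcome the rule assigns to profile X[i].
F = tuple(product(Y, repeat=len(X)))   # 4 rules when |X| = |Y| = 2

# D: every possible decision (profile, rule, outcome).
D = {(x, f, y) for x in X for f in F for y in Y}

# An axiom is a binary predicate on D; it is uniquely identified by the
# subset L of decisions that satisfy it.  Example axiom (ours): "the
# recorded outcome is y1".
def A(decision):
    x, f, y = decision
    return y == "y1"

L = {d for d in D if A(d)}
print(len(D), len(L))   # 16 decisions, 8 of which satisfy A
```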
A taxonomy of axioms is proposed based on how they partition D:
- Positively/Negatively trivial – satisfied (or violated) by all decisions.
- Structural – depends only on a subset of profiles X′⊆X.
- Procedural – depends only on a subset of rules F′⊆F.
- Consequentialist – depends only on a subset of outcomes Y′⊆Y.
- Black‑box – defined by a set of profile‑outcome pairs B⊆X×Y.
- Caudal – defined by a set of rule‑outcome pairs C⊆F×Y.
- Exigent – any axiom that does not fit the previous categories.
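Because an axiom is just a subset L ⊆ D, most of these categories can be checked mechanically: an axiom is structural, procedural, consequentialist, black-box, or caudal exactly when membership in L depends only on the corresponding projection of the decision. A rough sketch, reusing a tiny two-profile universe (the helper names and the first-match ordering are ours; the categories can overlap, e.g. every structural axiom is also black-box, so the classifier returns the first match):

```python
from itertools import product

# Tiny illustrative universe; names are placeholders.
X, Y = ("x1", "x2"), ("y1", "y2")
F = tuple(product(Y, repeat=len(X)))          # all deterministic rules X -> Y
D = {(x, f, y) for x in X for f in F for y in Y}

def depends_only_on(L, proj):
    """True iff membership in L is determined by proj(decision) alone."""
    inside = {proj(d) for d in L}
    return all((proj(d) in inside) == (d in L) for d in D)

def classify(L):
    """Return the first matching category in the paper's order; the
    categories can overlap, so the ordering matters."""
    if L == D:   return "positively trivial"
    if not L:    return "negatively trivial"
    if depends_only_on(L, lambda d: d[0]): return "structural"
    if depends_only_on(L, lambda d: d[1]): return "procedural"
    if depends_only_on(L, lambda d: d[2]): return "consequentialist"
    if depends_only_on(L, lambda d: (d[0], d[2])): return "black-box"
    if depends_only_on(L, lambda d: (d[1], d[2])): return "caudal"
    return "exigent"

print(classify({d for d in D if d[2] == "y1"}))            # consequentialist
print(classify({("x1", F[0], "y1"), ("x2", F[3], "y2")}))  # exigent
```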
The paper defines several key notions:
- Impasse – a profile for which no decision satisfies a given axiom.
- Forcing – an axiom that, for every profile, forces a single outcome; such axioms implicitly define a unique rule.
- Black‑box reduction – the map from an axiom’s decision set L to the corresponding set of profile‑outcome pairs B⊆X×Y.
- Procedural extension – the map from a black‑box axiom to the set of rules that always produce outcomes in B.
- Extensional equivalence – two axioms that reduce to the same black‑box axiom.
- Implied rule – a rule f such that the axiom is extensionally equivalent to the procedural axiom that admits only f.
The central theoretical contribution is the Decision‑Evaluation Paradox. If an axiom A is forcing, one can derive its implied rule f_A and use f_A to make decisions. However, the decisions generated by f_A need not satisfy the original axiom A. The paradox is illustrated with a minimal example in which a forcing exigent axiom A_L forces the outcomes x₁→y₁ and x₂→y₂. Its black‑box reduction yields B = {(x₁,y₁),(x₂,y₂)}, whose implied rule is f₂. While f₂ produces the required outcomes, the only decisions that satisfy A_L are (x₁,f₁,y₁) and (x₂,f₄,y₂), whereas f₂ generates (x₁,f₂,y₁) and (x₂,f₂,y₂); so using f₂ does not guarantee compliance with A_L. The paradox only arises for caudal and exigent axioms; for purely black‑box or procedural forcing axioms, the implied rule always satisfies the axiom (Theorem 2).
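The minimal example can be replayed directly. The rule encoding below is our guess at the paper's numbering (f₁ maps both profiles to y₁, f₂ is x₁→y₁ and x₂→y₂, and so on); the assertions check each step of the paradox.

```python
from itertools import product

# Reproduce the minimal example.  A rule is a tuple giving its outcome on
# (x1, x2), so f1=(y1,y1), f2=(y1,y2), f3=(y2,y1), f4=(y2,y2).
X, Y = ("x1", "x2"), ("y1", "y2")
F = tuple(product(Y, repeat=len(X)))
f1, f2, f3, f4 = F

# The forcing exigent axiom A_L: exactly these two decisions satisfy it.
L = {("x1", f1, "y1"), ("x2", f4, "y2")}

# Black-box reduction: keep only the (profile, outcome) pairs.
B = {(x, y) for (x, f, y) in L}
assert B == {("x1", "y1"), ("x2", "y2")}

# A_L is forcing: each profile admits exactly one outcome, so B defines a
# unique implied rule, which turns out to be f2.
implied = tuple(y for x in X for (xb, y) in sorted(B) if xb == x)
assert implied == f2

# The paradox: the decisions actually generated by f2 ...
generated = {(x, implied, implied[X.index(x)]) for x in X}
# ... produce the required outcomes, yet none of them satisfies A_L.
assert {(x, y) for (x, f, y) in generated} == B
assert generated.isdisjoint(L)
```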
The authors connect this paradox to classic Arrow‑type impossibility results. When multiple axioms (e.g., ex‑ante fairness and ex‑post efficiency) are combined, the resulting composite axiom may become caudal or exigent, re‑introducing the paradox and potentially leading to impasses where no rule can satisfy all constraints simultaneously.
Beyond the formal theory, the paper formalizes transparency as the existence of extensional equivalence between a disclosed rule and the governing axiom, and deception as a mismatch between the publicly stated rule and the axiom actually used to make decisions. These definitions clarify why many existing axiom‑based evaluation methods lack explanatory power: they often treat axioms solely as post‑hoc evaluation tools without guaranteeing that the decision‑making process respects them.
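The transparency definition can be sketched as a check as well. The reading below is ours and is not spelled out in the summary: we restrict the disclosed rule's procedural axiom to the decisions that rule actually generates, and call the rule transparent when that set reduces to the same black-box axiom as the governing axiom's decision set.

```python
from itertools import product

# Same tiny universe and rule encoding as before; names are placeholders.
X, Y = ("x1", "x2"), ("y1", "y2")
F = tuple(product(Y, repeat=len(X)))
f1, f2, f3, f4 = F

def blackbox(L):
    """Black-box reduction of a decision set to profile-outcome pairs."""
    return {(x, y) for (x, f, y) in L}

def is_transparent(disclosed_rule, L):
    """Our reading of transparency: the decisions the disclosed rule
    generates reduce to the same black-box axiom as the governing axiom."""
    generated = {(x, disclosed_rule, disclosed_rule[X.index(x)]) for x in X}
    return blackbox(generated) == blackbox(L)

# The exigent axiom from the paradox example.
L = {("x1", f1, "y1"), ("x2", f4, "y2")}
print(is_transparent(f2, L))   # True: f2 matches A_L's black-box behaviour
print(is_transparent(f3, L))   # False: disclosing f3 would be deceptive
```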
Finally, the work discusses implications for human‑in‑the‑loop systems such as Reinforcement Learning from Human Feedback (RLHF). Conventional feedback formats (e.g., ordinal rankings) cannot capture the rich normative concerns that users may hold. By encoding feedback as a set of axioms within the Axiomatic Choice framework, one can represent diverse values (fairness, privacy, simplicity, etc.) and aggregate conflicting human preferences in a principled way. This opens a path toward more robust value alignment, policy design, and collective decision‑making that respects both outcomes and the procedural virtues that stakeholders deem important.