Is Agreement on Ethical Content a Precondition for Moral Cooperation?
Abstract
The dominant view in ethics, including biomedical ethics, is built on a relationship between individuals, taken one at a time or as groups that share a common perspective, and principles or norms that guide behavior. Cases arise, however, where well-meaning individuals or groups do not share a common dominant norm or recognize one provided by third parties. It is still possible, I argue, to choose a morally preferable way forward under such circumstances. I define moral behavior as the joint actions that two or more agents take because neither sees any way of achieving a better future, given the circumstances and the intentions of the other agent. When the mutually most moral pair of actions is followed, the community becomes more morally fit in the sense that its members as a whole are better off than they would be following any other pair of interrelated actions.
The first section of this paper demonstrates how agents can agree on the best pair of actions without having to first agree on a common ordering of norms or ethical principles. A simple example of walking on a crowded street without being rude or dangerous is used to formalize two-agent moral interactions. Next, the formal structure of two-person, two-behavior engagements is mapped onto the logic of agent-based computer simulations. Finally, an example involving alternative systems of oral healthcare delivery is outlined and then analyzed using agent-based simulation to show that mutual moral advantage decision rules lead to greater fitness within the community than do selfish approaches or deception. In a concluding addendum, it is shown that selfishness and deception can be restored to their apparent dominance by changing a simple rule of participation without assuming any change in human nature.
Recent laboratory research has shown that the hormone estriol reduces multiple sclerosis symptoms. In this type of autoimmune disease the body’s defenses attack myelin sheaths on nerves, causing spotty fatigue, loss of sensation, imprecise muscular control, and even deafness and blindness. Estriol production is elevated during pregnancy and corresponds to anecdotal reports of reduced symptoms among MS sufferers. Currently, research on estriol is funded by foundations and not pharmaceutical companies. Its commercial availability is uncertain, even if the therapy should be proven safe and effective. The problem is that estriol is a naturally occurring hormone and thus not subject to patent protection. Drug companies will not be able to gain an advantage because they would not be able to limit the market. This problem exists generally for “orphan drugs.”
One kind of analysis of this issue can be put forward with a view toward saying what manufacturers or policy makers “should” do based on norms that they may or may not accept. Alternatively, the situation can be framed in terms of what individual pharmaceutical companies, the research community, and those responsible for policy “should” do based on where they stand and what they believe others involved are prepared to do, rather than on what the community of professional ethicists thinks “should” be done. This is a distinction of perspective, where the “should” term is either external to those who can take action or a property internal to the agents themselves.
In the rest of this paper I will distinguish these two types of analyses. I will use the term “ethical” when referring to the theoretical positions that third parties take regarding what agents should do. I will use the term “moral” to designate what first parties actually do that affects others when they are acting as moral agents in their own right. A “moral agent” is an individual or group that has the capacity, in its uncoerced actions, to affect the future of others. A “moral community” is two or more moral agents. A “moral engagement” is a situation where two or more moral agents recognize that their actions influence the future of the other agent.
Often moral agents share norms about the priority of content in a situation, norms that overlap completely or sufficiently so that a common way forward is agreed upon as natural. My concern will be with situations where one person or group favors a course of action that can be defended as ethically sound but is in conflict with the course of action another individual or group favors on an equally sound ethical foundation. A woman may seek a medical procedure, perhaps an abortion, but the physician objects to performing it. Both have their reasons. Fluoridated drinking water is an obvious means for reducing the misery of dental decay and increasing school attendance – especially among the poor – but it is also a violation of individuals’ right to choose not to be exposed to substances they regard as harmful. Limited public health dollars can be directed to one priority that is perfectly justified on ethical grounds or to another that is also justified. No one denies that dilemmas exist.
In some such cases, the professional ethics community has the standing to influence what others should do. An example would be the Helsinki Declaration on the use of human subjects in research. Where the third-party perspective is not determinative, however, the engaged moral agents must find a joint way forward on their own. Sometimes this takes the rational form of argument and compromise. Often there are efforts to reframe the issue by bringing in new considerations, such as appeals to lawsuits, publicity campaigns, side payments, and even violence. All such reframing has some potential for uncovering some semblance of normative common ground. But situations remain where both external and redefined internal values still leave a gap such that both parties cannot have it their way. I believe such situations are common and important, and that some resolutions are better than others when there is a logjam. The burden of this paper will be to show that conflicting moral agents can agree on which joint resolutions are preferable, even without agreeing on values or the preferred ordering of alternatives based on ethical norms. The task is to agree on a preference ordering across common actions rather than across common justifications that may not exist.
Morality as the mutual actions of several moral agents
I build the approach to moral communities on three assumptions.
First, I take it as given that everyone acts so as to bring about a world they would rather live in. Actions are rank ordered by likelihood of creating a better future. Individuals can be mistaken, they can fail to consider everything of importance, or they can come to regret and want to change their choices. But at the time the action is taken, they are declaring the best blend of rational, volitional, and conative factors they can muster. Such moral agency is an assumption on my part because there would be no way to prove it by either observation or definition. It is, however, a plausible assumption because alternative characterizations are usually more complex, admit exceptions, and still end by being assumptions. Further, this assumption is neither exactly normative nor descriptive. The “is” or “ought” debate triggered by G. E. Moore (1903/2004) might not need to be settled before we can proceed.
The second assumption is that our lives are filled with constant encounters with others who share the above property of acting so as to bring about their best future world. Although others rank order across goals that may differ from, and even be incompatible with, our own, their preference orderings may prima facie be as reasonable as ours. Even when we are suspicious of that part of the assumption, there are still grounds for recognizing others as potential moral agents entitled to the same moral standing that we accord ourselves. Society will rejoice over the net increase in better futures.
The third assumption is that other moral agents have the capacity through their actions to influence the attainment of our desired future worlds, just as we have the capacity by our actions to influence the attainment of the kinds of future worlds others seek. There is a reciprocity in moral actions. A moral agent is not the same as the object of charity. Although there is much to be said about altruism – understood here to mean acting for the sake of maximizing the benefit to others without regard to one’s own gain or loss – that attitude does not involve mutual moral agency. Reciprocal agency means that we acknowledge the potential influence of others on us. Neither is moral agency the same thing as respect for autonomy. Again, this is a valued moral principle, but we can respect others’ self-determination in ways that do not entail their having any potential influence on our futures.
Agents enact these assumptions in a succession of moral engagements: always acting to better their future as they see it, recognizing that others act the same way, and participating in a society that values moral agency. We do not each bring the same values and potentials to each engagement, and Mother Nature does not guarantee a level playing field. We do not always choose the best way forward. But engagements are the situations where we have the potential to make the world better. Morality is the business of finding and using those engagements between moral agents that maximize our common futures.
Health care is quintessentially a moral engagement. Both professionals and patients have the potential to fulfill their own aspirations at the same time they represent constraints on others. As a general rule, both agents enjoy richer futures as a result of joint action under such circumstances. When that potential is not realized there is a moral shortfall. Policy decisions, funding priorities, location of one’s practice, the protocol and routine of care, and common courtesies are all moral engagements.
We need a formal way of embedding agents in engagements. Perceptive readers will have recognized already that I am talking about game theory and Nash equilibrium (Binmore 2007; Gauthier 1986). Because technical jargon is seldom appreciated outside its discipline, I will speak in terms of “moral engagements” and “mutuality” instead. I mean exactly “games” by the former and “Nash equilibrium” by the latter.
A prototypical moral engagement
I will use a trivial example of a moral engagement to illustrate the rigor of the analysis that will follow. It is a common two-person encounter and one we manage with almost no thought. I wish to suggest in picking this example that many moral issues never rise to the level that requires rational analysis grounded in ethical theory.
Imagine pedestrian traffic in a large city. We move purposefully, but not in unison. Getting to our goal depends on private aims, characteristics of others using the sidewalk, and whether the way is wide, crowded, obstructed by barriers, and so forth.
As two people approach each other from different directions, in this example, it becomes clear to both that unless one or both step to the side, their future worlds will be compromised. Agent A sees that moving to the right will involve stepping into the street, while dodging to the left will bring him to a standstill or a collision with the person immediately behind B. Agent B only sees the edge of the sidewalk and the possible prospect of running headlong into A. B is a headstrong woman pushing a baby carriage. Both A and B realize immediately where they stand, and each knows that the other sees the situation and has his or her own part to play. A moves to the edge of the sidewalk and B passes straight on.
This is a moral engagement of the type discussed in detail by David Lewis (2002). Not following the rule could lead to disagreeable rudeness (low-grade immorality). If either pedestrian were preoccupied by a mobile phone or sociopathically inconsiderate, there would be more severe consequences. It is possible to give the agents additional motives or alternative value structures and it is possible to reconfigure the sidewalk. These may change the analysis, but the only requirement I wish to make is that the values of both agents and the situation on the ground are sufficient to carry through a full analysis in this and in all moral engagements.
Let’s make this case slightly more formal. The standard way of representing such engagements is the 2 x 2 matrix. One agent is Row and the other is Column. Each agent has two options for actions to bring about the best future world: his or her best action and the next best. The actions need have nothing at all in common beyond the fact that Row’s behavior affects Column and vice versa. There are four possible outcomes, all things considered.
There are sophisticated ways of putting values on the outcomes. All that is needed, however, is a simple rank ordering of preference for each agent, from [4] = the best possible outcome for me to [1] = my worst outcome. If we assume that running over helpless old gentlemen or ladies would be the worst kind of outcome, then running over them really hard would also be ranked the worst outcome; ranks register order, not magnitude. There are differences between the cases in ethical theory. In terms of moral action, the best alternative should always be chosen, even when it is only slightly the best. My analysis here does not extend beyond taking the most moral course of action.
Let’s see how this might work in the case of the pedestrians on a collision course. Agent A would prefer to keep moving forward and have B adjust: for A, that pair of actions is a [4] – the action that makes his future world most fit. What A most wants to avoid is neither agent giving way and the two running into each other: the worst outcome = [1]. Dodging the oncoming fellow pedestrian is inconvenient, but still acceptable [3]. The remaining possibility is for both to veer and run into each other in the process: [2]. We know how to apologize and minimize such awkward moments, and at least we both tried. These ranks appear in the 2 x 2 engagement matrix shown in Fig. 1 as the left-hand values in each of the four cells.
Agent B’s rank preferences over outcomes are on the right in each cell. Of course B would prefer that A gives way [4], but she has no compunction about forcing the issue [3]. Moving aside for A is an unwelcome possibility [2] and an unproductive dance with A still in her way would be a king-sized annoyance [1], let us assume for now.
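The same matrix can be written down as data. The sketch below is a minimal Python rendering of it; the action labels “keep_going” and “step_aside” and the variable name payoffs are illustrative assumptions on my part, while the rank pairs are simply the ones given above, with A’s rank listed first and B’s second, exactly as in Fig. 1.

```python
# A minimal sketch of the sidewalk engagement as a 2 x 2 matrix.
# Each cell holds (rank for A, rank for B), where 4 = that agent's best
# outcome and 1 = that agent's worst.
payoffs = {
    "keep_going": {                  # Agent A (Row) keeps moving forward
        "keep_going": (1, 3),        # both press on: collision
        "step_aside": (4, 2),        # B moves aside, A passes
    },
    "step_aside": {                  # Agent A steps to the edge
        "keep_going": (3, 4),        # A gives way, B passes straight on
        "step_aside": (2, 1),        # both veer and stumble into each other
    },
}
```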
The prudent pair of actions would be for Agent A to step aside while B passes. And we can see that without the formality of the analysis; it is instantly and intuitively obvious. But that is not the only option. What if both agents were intent on their own interests rather than on jointly solving the problem? In that case, both of them follow the course of action that includes their own [4] outcome. By tracking the two [4] self-interested actions in the table, it can be seen that they intersect in a collision. The upshot of double self-interest in this case is a collision [1 3] that is very bad for A and not so good for B, although she still gets some satisfaction out of publicly telling A what a lout he is for being so ethically insensitive.
Another possibility would be to embrace altruism. Here the given engagement is played by a different set of rules intended to produce a value of [4] for the other agent. Agent A deviates to give B a shot at what he believes is her best outcome, but at the same moment Agent B deviates in hopes of satisfying A. This results in the worst of all outcomes as they stumble into each other trying to get out of each other’s way [2 1]. (O. Henry’s short story “The Gift of the Magi” is a poignant realization of the paradoxical limitations of altruism.)
Deception is also a possibility. One or both agents might signal a false intent in hopes of changing the other’s behavior, perhaps by appearing not to notice the other or by quickening one’s pace. The result is a collision [1 3].
The purpose of laying out this example in formal detail is to emphasize that there are multiple ways of responding to moral engagements. Even when there is a common set of circumstances and two fixed rank preferences for the agents, using different decision rules could lead to different outcomes. That is the point of moral analysis: we are looking for the decision rule that maximizes common benefits, given the circumstances and the needs of the participants. It turns out that self-interest and deception were slightly better than altruism in this case. In different circumstances, the fitness resulting from various rules might be different.
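The contrast among decision rules can also be traced in code. The sketch below continues the hypothetical payoffs dictionary introduced earlier; the function names and the reading of each rule as “steer toward a particular [4] cell” are my own simplifications, not a definitive model.

```python
def best_for_self(agent, payoffs):
    """Self-interested rule: choose the action lying on one's own [4] cell."""
    idx = 0 if agent == "A" else 1
    for a_act, row in payoffs.items():
        for b_act, ranks in row.items():
            if ranks[idx] == 4:
                return a_act if agent == "A" else b_act

def best_for_other(agent, payoffs):
    """Altruistic rule: choose the action lying on the other agent's [4] cell."""
    idx = 1 if agent == "A" else 0
    for a_act, row in payoffs.items():
        for b_act, ranks in row.items():
            if ranks[idx] == 4:
                return a_act if agent == "A" else b_act

# Double self-interest: both agents steer toward their own best cell.
a_act, b_act = best_for_self("A", payoffs), best_for_self("B", payoffs)
print(payoffs[a_act][b_act])      # (1, 3): the collision

# Double altruism: both agents steer toward the other's best cell.
a_act, b_act = best_for_other("A", payoffs), best_for_other("B", payoffs)
print(payoffs[a_act][b_act])      # (2, 1): the awkward dance
```

Deception, which also ends in the [1 3] cell, differs only in the signal sent and not in the cell reached, so it is not modeled separately here.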
But none of these decision rules so far mentioned leads to the best possible outcome of Agent A giving way to Agent B [3 4]. The reason we have failed to find the best outcome is that none of the alternatives considered so far has granted moral agency to the other. Each agent has so far acted independently in deciding what is best, even though the outcome depends on what both do. What our walkers need to do is simultaneously consider what is best for them, given what they believe the other thinks is their best move. Call it mind reading or just human sensitivity, each agent needs to think something like this: “If I make this choice I will have to live with the results of the other agent choosing what leads to his or her best world.” When both agents think this way it is called mutuality (formally, Nash equilibrium), and the result is the best of all possible future worlds for both agents. That is the only way to get to the correct [3 4] outcome in the example.
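The mutuality rule itself can be stated as a short test over the same hypothetical payoffs dictionary: a pair of actions is mutual (a pure-strategy Nash equilibrium) when neither agent could reach a higher rank by changing only his or her own action while the other’s stands. The sketch below is one way of writing that test, again under the illustrative labels assumed earlier.

```python
def mutual_pairs(payoffs):
    """Return every pair of actions from which neither agent would unilaterally
    deviate, i.e., the pure-strategy Nash equilibria of the engagement."""
    a_actions = list(payoffs)
    b_actions = list(next(iter(payoffs.values())))
    equilibria = []
    for a_act in a_actions:
        for b_act in b_actions:
            a_rank, b_rank = payoffs[a_act][b_act]
            # Could A do better by switching, holding B's action fixed?
            a_improves = any(payoffs[alt][b_act][0] > a_rank for alt in a_actions)
            # Could B do better by switching, holding A's action fixed?
            b_improves = any(payoffs[a_act][alt][1] > b_rank for alt in b_actions)
            if not (a_improves or b_improves):
                equilibria.append(((a_act, b_act), (a_rank, b_rank)))
    return equilibria

print(mutual_pairs(payoffs))
# [(('step_aside', 'keep_going'), (3, 4))] -- A gives way, B passes straight on
```

In this engagement the test picks out exactly one cell, the [3 4] outcome, which is why mutuality, and only mutuality, delivers it.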