Some philosophers think that using animals in harmful ways in scientific research is seriously wrong. They arrive at that conclusion in different ways, working in different ethical frameworks, and some of the details of their positions are different. Here I want to discuss a short argument for this conclusion which would probably be accepted as sound, with perhaps some qualifications, by almost all philosophers who hold the view that using animals in scientific research is wrong. While I think this argument is powerful and is an important one to come to terms with, I also acknowledge that it raises some difficult questions to which I do not think I have any satisfactory answers at the moment. I shall discuss some of these issues after presenting the argument.

The argument could be summed up like this. We would not be prepared to conduct harmful scientific procedures on unconsenting humans with similar cognitive characteristics to the nonhuman animals typically used in research. So we should not think it acceptable to do the same thing to nonhumans either.

Here is an outline of the argument.

(1) Given two cases involving treating a being in a certain way under certain circumstances, if it is correct to give different moral judgements of the two cases, then there must be a morally relevant difference between the two cases – a morally relevant difference either between the circumstances in the two cases or between the natures of the two beings. (This is the formal principle of justice, “treat like cases alike”, which goes back at least to Aristotle, Nicomachean Ethics, V.3, 1131a10–b15.)

(2) It would be morally wrong to use unconsenting humans, with cognitive characteristics similar to those of the nonhuman animals typically used in research, in harmful ways for the purposes of scientific research, even if steps were taken to reduce the suffering they experienced, and even if there were significant prospects of benefits thereby being achieved.

(3) There are no morally relevant differences between such use of unconsenting, radically cognitively impaired humans and the harmful use of nonhuman animals in scientific research which actually occurs.

(4) Therefore, the harmful use of nonhuman animals in scientific research is morally wrong.
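Before elaborating on the premises, let me make the deductive structure explicit, since some of you may like to see it spelled out. The schematization below is my own convenience, not part of the argument as it is usually presented: I write W x for “the practice x is morally wrong”, D x y for “there is a morally relevant difference between cases x and y”, h for the hypothetical harmful research on unconsenting, radically cognitively impaired humans, and a for the harmful research on nonhuman animals that actually occurs. Read this way, premise (1), taken in contrapositive form, says that where there is no morally relevant difference the moral verdicts must agree, and the conclusion then follows by elementary logic, as this small sketch in the Lean proof language records:

example {Case : Type} (W : Case → Prop) (D : Case → Case → Prop) (h a : Case)
    (p1 : ∀ x y : Case, ¬ D x y → (W x ↔ W y))  -- premise (1), contrapositive form of formal justice
    (p2 : W h)                                   -- premise (2): the hypothetical human case is wrong
    (p3 : ¬ D h a) :                             -- premise (3): no morally relevant difference
    W a :=                                       -- conclusion (4): the actual animal research is wrong
  (p1 h a p3).mp p2

Nothing below depends on this formalization; it simply records that, once the three premises are granted, the conclusion is not optional.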

Let me discuss and elaborate on the three premises of the argument in turn.

The first premise of the argument, the formal principle of justice, dates back, as I said, at least to Aristotle. Thomas Nagel, in a short essay on the question of why we should be moral, discusses an example in which you visit a library where people leave their umbrellas on a stand near the entrance. You would find it convenient to have an umbrella, but you do not have one of your own, so you take someone else’s umbrella without their consent. Nagel considers what sort of argument might persuade you that you have a reason not to do this. As he observes, the most common argument used is “How would you like it if someone did that to you?” However, it is not clear at first sight how this argument is supposed to work. One might imagine someone replying: “If someone did it to me, I’d mind. But fortunately someone else isn’t doing it to me. I’m doing it to someone else, and I don’t mind that at all!” Nagel suggests that the answer to this line of reply is that if someone took your umbrella without your consent, there would be more to your reaction than just your finding it inconvenient and frustrating. You wouldn’t just mind it; you’d resent it. You’d think, not just that it was inconvenient for you, but that it could be condemned from an impartial perspective which you could reasonably expect other members of a civilized society to share. You’d think, in short, that the action could not be justified from the moral point of view.

But taking the moral point of view brings other commitments as well. Many philosophers have held that the formal principle of justice is a defining feature of the moral point of view. If you condemn someone else’s action from the moral point of view, then you are also committed to condemning similar actions by yourself – unless you can identify a morally relevant difference. The fact that your stealing an umbrella benefits you, whereas someone else stealing your umbrella does not, is not a morally relevant difference: pointing it out would not justify judging the two actions differently from an impartial perspective. So if we think that people are required, from the moral point of view, to observe certain restrictions on their behaviour towards us, then, according to the formal principle of justice, we must also think that we are required to observe similar restrictions on our behaviour towards them, provided the two cases are relevantly similar.

The formal principle of justice has other applications. Another way to state it is that it rules out arbitrary discrimination. In the nineteenth century it was thought permissible to enslave people who had dark-coloured skin, but not those who had light-coloured skin. The formal principle of justice entails that this differential pattern of judgement cannot be defended unless we can identify a morally relevant difference between the two cases. Skin colour by itself would probably not be a plausible candidate for a morally relevant difference. If you wrote an application for a research grant and found that yours was turned down but another application was accepted on the grounds that you had blue eyes while the other applicant did not, you’d probably regard this as unfair. If we assume that eye colour is not morally relevant, then eye colour is not a sufficient basis for discrimination, according to the formal principle of justice.

Most philosophers take some version of the formal principle of justice to be a basic feature of sound moral reasoning. And, as my examples suggest, most people in general seem to believe in something like it.

Let’s take a look at the second premise. In the documentary “Monkeys, Rats, and Me: Animal Testing”, a researcher called Tipu Aziz discusses some of the research on Parkinson’s disease he has been engaged in. In the course of this research, he induced parkinsonism-like syndromes in nonhuman primates by means of opiate drugs. He maintained that, by performing this procedure on approximately 100 primates, he had discovered that a certain area of the brain, not formerly associated with Parkinson’s disease, was overactive in the primates with parkinsonism-like syndromes. He found that by operating on this area of the brain he could significantly reduce the symptoms of Parkinson’s disease. He maintained that approximately 40,000 sufferers from Parkinson’s disease had experienced significant lifestyle benefits as a result of this discovery.

Let’s imagine that I am the director of an institution that looks after cognitively impaired human beings. These human beings have cognitive capacities similar to those of nonhuman primates: they have a complicated social structure and can perform some complicated cognitive tasks, but they do not use language and are not cognitively more advanced than a typical three-year-old human child. I have an opportunity to try to learn more about Parkinson’s disease by inducing parkinsonism-like syndromes, by means of opiate drugs, in, say, about 100 of these people. And of course there is always the chance that a discovery will be made that may bring significant lifestyle benefits to many thousands of Parkinson’s sufferers. Ought the research to be done?

I assume that if such research became publicly known, public opposition to the experiments would be fairly strong. Most people would feel it was an outrageous violation of human rights. It would be a fairly bold person who would publicly claim that such research was defensible. Still, as someone might point out, widespread public opinion is not necessarily correct. It is conceivable that someone might maintain that, despite the widespread intuition to the contrary, such research would be morally justified. I shall not look into this any further now, though we shall return to the matter later. I am assuming that there will be a widespread intuition that such research would not be justifiable, and I am appealing to this intuition in support of my second premise. Those who do not share the intuition may not be convinced, although I shall have something to say to such people later.

It may be objected that I have unfairly stacked the deck by constructing my case by analogy with research involving primates. It might be agreed that research involving primates raises special ethical issues. But mice are far more commonly used than primates, it may be said. And perhaps it might be claimed that we cannot form a sufficiently clear idea of a human who is cognitively on a par with a mouse to say much about a hypothetical case involving such a human: mice are just too different from us for the appropriate thought-experiment to be performed. I would suggest, on the contrary, that it is not too difficult to imagine a human who is cognitively on a par with a mouse. Mice experience pleasure and pain; things can go well or badly for them; they are aware of their environment and can perform some problem-solving tasks; in natural environments they are social animals. Some humans remain permanently at the cognitive level of a three-month-old infant, and I would dare to say that such humans are probably cognitively no better off than a mouse. So I would suggest that, while perhaps we might initially think otherwise, we can in fact imagine a case involving scientific research on a human who is similar in all the relevant respects to a mouse. And, in the case of procedures which are likely to harm the human, I would suggest that our intuitive reaction to such a case would still be seriously negative. As long as you share this intuition, I suggest, you should accept my second premise.

Let us now examine the third premise. Is it right to say that there are no morally relevant differences between research involving radically cognitively impaired humans and research involving nonhuman animals? It may have occurred to some of you that the radically cognitively impaired humans might well have had close relatives who were not radically cognitively impaired, and that these relatives might have been upset by the research being performed. I might make the point that there are many species of nonhuman animal which are highly social and appear to care for their close relatives, and that the distress such animals feel at being separated from their companions and kept captive under laboratory conditions ought to be factored into considerations about their welfare. Still, it might be said, nonhumans would not understand the situation as well as the cognitively normal human relatives would, and so the humans would suffer more, at least in this respect. I think it is reasonably clear, however, that this feature of the thought-experiment – the fact that the cognitively impaired humans may have had relatives who may have become upset – is not a very strong determinant of our reaction to it. I could have stipulated that the cognitively impaired humans had no living relatives. That would have made the case more contrived, but I don’t think it would have affected our reactions to it much.

Can we draw a morally relevant distinction based on the nature of the beings involved? The trouble is that I have stipulated that, so far as cognitive capacities go, the humans are similar in all relevant respects to the nonhumans. Some philosophers try to draw a moral distinction between humans and nonhumans based on the capacity to understand moral concepts, or the degree of self-awareness, or things like that. But while these may be genuine distinctions between typical humans and nonhumans, they aren’t a basis for distinguishing between the research subjects in the two cases we’re considering. The only difference seems to be species membership. But discriminating on the basis of species membership alone seems hard to justify. What is the moral significance of the fact that we belong to a particular biological species? Why is it any more morally significant than membership of a particular racial group?

There is an extensive philosophical literature on this issue. But I think it is at least reasonably clear that if anyone maintains that there is a morally relevant difference between the two cases, the burden is on that person to explain what the difference is and why it is morally relevant. Carl Cohen, for example, tries to explain why species membership is morally relevant. I shall not pursue the issue further, but I think it is reasonably clear that there is a genuine puzzle here.

So that completes our examination of the three premises. If they are accepted, then the conclusion, (4), seems to follow logically. Our only hope of escaping the argument seems to be to question one of the premises. In the course of our examination of the argument we found two possible ways of doing this. The first was to bravely bite the bullet and maintain that harmful scientific research on unconsenting human subjects is sometimes acceptable. The second was to identify a morally relevant difference between research on cognitively impaired humans and research on nonhumans, and to give a convincing account of why it is morally relevant. Many philosophers find the first option unacceptable and think that the second task cannot be achieved.

Suppose one of you were to bravely say: “All right, I agree that, in the case of humans who are similar in all the relevant respects to the nonhuman animals widely used in research, scientific research on these human subjects is okay. It just so happens that such research is not accepted by society.” I have yet to meet anyone who maintains this position sincerely and consistently. To someone who held such a position, it might be pointed out that, in view of the difficulties of extrapolating from data obtained from a different species, research on human subjects might sometimes have a better chance of yielding valuable results. Should we, then, try to persuade people to accept research on cognitively impaired humans? Many would find that idea quite repellent. As I say, I have yet to meet anyone who really is prepared to endorse such a stance wholeheartedly.

But the only other way to escape the argument’s conclusion seems to be to identify a morally relevant difference between research on radically cognitively impaired human subjects and research on nonhuman animals. There have been numerous attempts to identify such a difference, but none of them has succeeded in gaining widespread acceptance among philosophers. Hence those who would like to find a satisfactory defence of the permissibility of research involving nonhuman animals are left in a quandary.

So that is why some philosophers think that the harmful use of nonhuman animals in scientific research is seriously wrong.

Let me now explore some responses to the argument that I have encountered and some difficulties that are raised by it.

One response that was made last time I gave this talk was that it is quite a radical argument. After all, the view that being human makes a big difference appears to be very widely and strongly held in society. The use of nonhuman animals in order to benefit ourselves in ways that can harm them is very strongly entrenched in our way of life. Most people seem to find it hard to imagine ever giving up eating meat, for example, and the argument I have presented would appear to at least raise a serious question about the permissibility of slaughtering animals for their meat. If an argument leads us to views that conflict so strongly with what everyone seems to believe, shouldn’t we have some reservations about it?

My reply to this is that we should not assume that widely held social attitudes are immune to criticism. It was not too long ago that it was widely held in many societies that people who belonged to a certain racial group were not entitled to the same sort of moral consideration as those who belonged to the more privileged racial group, or that women did not have all of the same basic moral rights as men. Most of us have repudiated such views today. If we accept that there can be moral progress in this way, then we cannot rule out the possibility that we stand in need of further moral progress. We must entertain the possibility that there are grounds for criticizing attitudes which are widely held today.

Last time I gave this talk, someone expressed a worry about my use of radically cognitively impaired humans in the thought-experiment. She was worried that I might somehow be suggesting that humans with less cognitive ability are entitled to less moral consideration.

The answer is that I did not intend to suggest that. The reason I made use of radically cognitively impaired humans in the thought-experiment was that if I had used a thought-experiment involving cognitively normal humans, the likely response would have been that there are obvious differences between such humans and nonhuman animals, and I wanted to pre-empt this move. It is interesting that when we are considering questions of how to treat nonhuman animals, we are often happy enough to entertain the idea that differences in cognitive ability might make a difference to the degree of moral consideration to which a being is entitled, whereas when we are considering questions of how to treat cognitively impaired humans, we tend to find this suggestion offensive. This suggests that we are engaging in a kind of doublethink – that we do not really have a clear foundation, in which we seriously believe, for the difference between the way we treat humans and the way we treat nonhuman animals. Myself, I don’t set much store by the idea that cognitive ability is all that morally important. But I wanted to construct an argument that raised a difficulty for the defender of animal research even on the assumption that cognitive capacities were in some way morally important.