The History of Science and Contemporary Scientific Realism Conference, Indianapolis, 19-21 February 2016

The Book of Abstracts

Anjan Chakravartty

Anjan Chakravartty is a professor of Philosophy at the University of Notre Dame, where he is Director of the John J. Reilly Center for Science, Technology, and Values and a faculty member in the History and Philosophy of Science Graduate Program. He is Editor in Chief of Studies in History and Philosophy of Science and a former Director of the Institute for the History and Philosophy of Science and Technology at the University of Toronto, prior to which he was a Research Fellow at the University of Cambridge, where he previously completed a PhD in HPS.

A Case for Realism: Particles, Properties, and Symmetries

Fundamental physics comprises a domain in which claims commonly associated with scientific realism – that our best theories are approximately true, that their central terms refer to things in the world, etc. – are seriously challenged. One reason for this is that our grasp of the natures of the subject matters putatively described is, admittedly, tenuous. What is the precise ontology, for example, of subatomic particles? Saying too little here undermines realism by denuding it, but it is difficult to know how much one can say with the kind of confidence a realist might like to profess. Saying much at all quickly mires one in difficult metaphysical issues.

The recourse to metaphysics in hopes of “fleshing out” realist commitments is fraught not least because it seems difficult to choose, on the basis of scientific practice alone, which way one should go. It is challenge enough for realists to explain what is going on in cases where the theoretical descriptions of previously accepted, empirically successful science are now regarded as largely incorrect and perhaps even non-referring. But what about cases in which it was never entirely clear what a realist account of the relevant (empirically successful) descriptions should be?

I consider the case of symmetry principles in the context of the Standard Model of particle physics. The use of symmetries here was and continues to be astoundingly successful in predicting the results of particle detections as well as detailing the natures of their properties (like mass, charge, and spin), which are described mathematically as invariants of certain symmetry group transformations (irreducible representations of the groups). The question of whether symmetries themselves are best thought of merely as heuristic principles or something more – something appropriately conceived in terms of realism – was present here from the start.

Today some realists think that the right attitude toward symmetry principles is to view them as aspects of some kind of structural ontology. I describe this characterization of the subject matter as “top-down”, since it proceeds from a set of mathematical relations to the natures of properties. Others think that symmetries describe an ontology of dispositions. I describe this characterization as “bottom-up”, since it proceeds from the natures of properties in the world to a mathematical description of them. I consider the contrasting motivations for these starkly opposed conceptions of how to be a realist in this domain, and the question of whether one is preferable.

Helge Kragh

Helge Kragh recently retired from Aarhus University, Denmark, where he was professor of history of science and technology. He is now emeritus professor at the Niels Bohr Institute, University of Copenhagen. Most of his work has been in the history of the physical sciences since the mid-nineteenth century, including quantum physics, chemical matter theories, cosmology, and astrophysics. He has also contributed to the science-religion literature. His most recent book, to be published by Springer in March 2016, is an examination of the hypothesis of varying gravity and its geological consequences.

Some Cases from the History of Science Relating to the Scientific Realism Debate

Scientists usually believe that an object or a phenomenon is real if it is solidly confirmed by observation or experiment. If the phenomenon (or object) is successfully predicted by a theory (or a model), they tend to rate the theory as at least partially correct or fruitful. Of course, predictive success is not the only criterion in scientists’ evaluation of a theory. There are many cases in the history of science of wrong or now-discarded theories which have led to some predictive and/or explanatory successes. There are also many cases of non-existent entities for which there was substantial empirical evidence, but which had little or no connection to theory. There are even examples of entities which were once thought to be real, then probably unreal, and later real again.

In my presentation I plan to discuss a small number of cases from the history of the physical sciences (including chemistry and astronomy) that illuminate the realism issue. Some of the cases will focus on theory, whereas others will mostly deal with empirical evidence. Within the first category I may refer to the Stern-Gerlach space-quantization effect, the Dirac hypothesis of magnetic monopoles, the Helmholtz-Thomson contraction theory of the Sun, and 18th-century astronomical applications of the Newtonian particle theory of light. In the second category I will possibly refer to the less well-known cases of Venus’s moon and the existence of the H3 molecule. The latter involved theory to some extent, but was primarily discussed and eventually settled within an experimental context (namely, when the H3 spectrum was detected in 1979). Yet another interesting case to which I want to draw attention is the belief, in early nuclear physics, that electrons reside in the atomic nucleus. Between 1913 and 1930 all physicists agreed that the nucleus contains electrons, a wrong idea that did not seriously hinder progress in the field.

On a more general level it is far from obvious whether these and other cases from the history of science can lead to satisfactory answers relating to the realism debate. The case-study method is useful and even necessary, but it is not sufficient. Among the problematic features of the method is how to weigh one case relative to another. Another is the arbitrariness of the cases under consideration. Some cases may seem more important than others, but why? It seems difficult or impossible to find objective criteria for preferring some cases from the history of science (including recent science) over others.

Stathis Psillos

Stathis Psillos is Professor of Philosophy of Science and Metaphysics at the University of Athens, Greece, and a member of the Rotman Institute of Philosophy at the University of Western Ontario (where he held the Rotman Canada Research Chair in Philosophy of Science). He is the author or editor of seven books and of more than 120 papers and reviews in learned journals and edited collections, mainly on scientific realism, causation, explanation and the history of philosophy of science. He is a member of the Academy of Europe and a former president of the European Philosophy of Science Association (EPSA).

From the Evidence of History to the History of Evidence: Re-Thinking the Pessimistic Induction

The past record of scientific theories has been used to undermine the credentials of scientific realism as a philosophical view about science, that is, as a view that takes science to be (successfully) in the truth-business. The form that this evidence has taken has been inductive: seen as providing empirical evidence, the past record of science has been taken to ground inductively drawn pessimistic conclusions about current (and future) science. In recent literature, there have been various attempts to assess the traditional pessimistic induction (Dicken and Saatsi), to canvass new types of pessimistic inductions (Stanford), or to offer non-inductive but historically informed arguments (Lyons).

In this talk I will re-assess the debate about the historical evidence against scientific realism. I will argue that we should make a distinction between the evidence of history and the history of evidence and that in trying to assess the epistemic credentials of scientific theories (past or present) we should primarily be looking into the history of evidence that supports them and only secondarily at the evidence of history of science. In trying to motivate this claim, I will take issue with some key recent arguments (by Saatsi, Stanford and Lyons) about the conclusions that have to be drawn for current science from looking at the historical record.

The overall structure of my argument will be this. Without denying that there are important conceptual shifts in science, I will show that there is a retentionist pattern which is pervasive in theory-change and that this is so because retention expresses the history of evidence there is for a scientific theory. More specifically, I will argue that it is an invariant part of scientific practice that newer theories a) explain the empirical and theoretical laws of the theories they supersede; b) retain parts (hypotheses, explanatory assumptions, etc.) of their predecessors; c) accommodate within them the evidence there was for their predecessors and d) prospectively, solve (or delineate) the empirical problems that ultimately brought down their predecessors. I will then argue that this retentionist pattern (which will be illustrated by assorted examples) is grounded on two considerations: a) there is an explanatory link between empirical successes and getting the world right, in at least some respects; and b) the respects in which the world was got right by a past theory should constrain the development of newer theories by offering evidence for them. Both considerations, I will argue, point to the need to collect solid and rigorous evidence and to achieve diachronic support for theories. Given this pattern, though the history of evidence for a theory becomes indispensable for accepting a theory, the evidence of history is almost irrelevant since it should be (and has been) taken into account only if it raises specific doubts about the quality of evidence there is for the specific theories that become accepted. Moreover, given this pattern, the conceptual space of alternative theories is prospectively limited to those that fall under this pattern.

The evidence of history, I will finally argue, is relevant to a broader philosophical project of explaining or grounding the very idea that science as a cognitive enterprise is in the truth-business. It is no surprise then that this kind of evidence is typically ignored by scientists, but it plays a role in philosophical arguments about science. But this is precisely its limitation, viz., that it can raise only philosophical—that is abstract—doubts about science. To become concrete, this history-based doubt should become part of the history of evidence there is for scientific theories. That is, it should be taken to speak directly against the quality of evidence there has been for the specific theories that become accepted. Whether or not this can happen is an open issue. But for it to happen the evidence of history should become history of evidence.

Eric Scerri

Eric Scerri holds a PhD in History and Philosophy of Science from King’s College London. He has lived in the US for the past 20 years and currently teaches chemistry and history & philosophy of science at UCLA. He has published four books with Oxford University Press, including The Periodic Table: Its Story and Its Significance and A Tale of Seven Elements. OUP will soon be publishing his latest book on a new organic philosophy of science as well as two edited collections of papers.

John Nicholson’s atomic theory of 1911, how ‘wrong theories’ can lead to scientific progress and a new ‘organic’ philosophy of science

The paper will examine the work of John Nicholson, a direct precursor of Niels Bohr and the first person to postulate the quantization of angular momentum in the context of atomic theory.

Nicholson attempted to explain the formation of the elements and the values of their atomic weights as well as the spectrum of nebulae and the solar corona. In the case of these spectra his theory appeared to be remarkably successful even though, as it later turned out, it was based on incorrect physical principles.

I will examine some philosophical consequences of this and similar cases. I will ask several questions such as how it is that ‘wrong’ scientific theories can produce scientific progress. I will relate this case to the broader question of whether scientific progress proceeds via Kuhn-like discontinuities and will preview my new book on an ‘organic’ philosophy of science.

Betty Smocovitis

Vassiliki Betty Smocovitis is Professor of the History of Science with a joint appointment in the Department of Biology and the Department of History at the University of Florida. Her research interests include the history of evolutionary biology, genetics, botany and anthropology, especially during the period of the "evolutionary synthesis." She is the author of Unifying Biology: The Evolutionary Synthesis and Evolutionary Biology (Princeton University Press, 1996) and has been studying the history of twentieth-century biology through the life and work of G. Ledyard Stebbins.

Rethinking "model organisms" and "experimental systems": E. B. Babcock, the Genus Crepis, and Genetics at Berkeley (1915-1947)

This paper examines E. B. Babcock's research program centering on the plant genus Crepis at the University of California, Berkeley. Between 1915 and 1947, the genus Crepis was the focus of intense genetical study by Babcock and a team of researchers keen on finding the "plant equivalent of Drosophila." What had been selected as the "ideal type" for genetical study, however, rapidly shifted as the genetic, environmental and historical particularities of the genus began to pose problems for researchers. In the end, the notion of "model organism" and "experimental system" gave way to the evolutionary and phylogenetic history of a very distinct group of plants.

Jutta Schickore

Jutta Schickore is associate professor in the Department of History and Philosophy of Science and Medicine at Indiana University. Her research has focused on historical and philosophical debates about scientific methods; scientists’ conceptions of good research methods and of scientific integrity; historical and philosophical aspects of microscopy; and the relation between history and philosophy of science. Her publications include The Microscope and the Eye: A History of Reflections, 1740–1870 (2007) and Revisiting Discovery and Justification: The Context Distinction in Historical and Philosophical Perspective (2006, co-edited with F. Steinle).

“The province of the invisibly minute”: Discussions about unobservable things and processes in nineteenth-century biology and medicine

In this paper, I aim to do two things at once. I introduce some methodological discussions in nineteenth-century biology and medicine, and I draw on this material to consider more generally the problem of historical case studies and their role for philosophy of science.

One of the main motivations for pursuing historical case studies in philosophy of science – and indeed the main motivation of the organizers of this conference – is to draw on historical cases to evaluate the philosophical position of scientific realism. Is it possible to support the scientific realist position with evidence from the history of science? How do the best scientific realist positions fare vis-à-vis relevant historical episodes? The most recent contributors to this debate have probed the history of science to see if historical cases in which novel predictive success was achieved commit us to scientific realism (see Vickers 2013).

In my contribution, I would like to change the analytic focus from theories and predictions to methodological thought. I examine various discussions in the late nineteenth century about unobservables and methodological strategies for dealing with claims about entities and processes that are hidden from sight – pathogens, fundamental formative processes, components of the immune response, and the like. Some of these discussions are mentioned in the description of the project Scientific realism and the challenge from the history of science (namely the immune response and formative forces), but I approach them from a new angle. I examine how the nineteenth-century scientists themselves addressed the issue of unobservables and what strategies they proposed to assess and secure claims about such processes and entities.

In the last part of the paper, I take a step back and consider the status of historical studies like this. What is their role (if any) for philosophy of science? Do they have anything to contribute to philosophical debates about realism and anti-realism? I will argue that the history of methodological thought does contribute to the debate about realism and anti-realism, but quite differently from historical accounts of past predictive successes.

P. Kyle Stanford

P. Kyle Stanford is a Professor and Chair of the Department of Logic and Philosophy of Science at the University of California at Irvine. He is the author of Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives and a range of further articles concerning both the scientific realism debate and other foundational questions in the history and philosophy of science, particularly the history and philosophy of biology. He received a B.A. from Northwestern University in 1991 and a Ph.D. from UC San Diego in 1997.