Prediction in Social Science: The Case of Research on the Human Resource Management – Organisational Performance Link[1]

Steve Fleetwood & Anthony Hesketh

Abstract. Despite inroads made by critical realism against the ‘scientific method’ in social science, the latter remains strong in subjects like HRM. One argument for the alleged superiority of the scientific method (i.e. its scientificity) lies in the taken-for-granted belief that it alone harnesses the power of prediction. Many of those who employ the scientific method are, however, confused about the way they understand and practice prediction, making it harder to identify their understanding and practice and, therefore, harder to criticise. This paper takes empirical research on the alleged link between Human Resource Management practices and organisational performance as a case study. Unpacking the confusion surrounding the two basic notions of prediction in use reveals what is wrong with them, why the scientific method cannot actually harness the power of prediction and why, therefore, the scientific method fails to meet its own criteria for scientificity. Explanation is considered briefly, to prevent any confusion between it and prediction.

Introduction

Despite inroads made by critical realism against the use of what is often referred to as ‘the scientific method’ in social science, the latter remains remarkably strong in subjects like Human Resource Management (HRM), Economics and Psychology. One of the most powerful arguments for the alleged superiority of the scientific method over other methods (i.e. its scientificity) lies in the taken-for-granted belief that it alone can formulate empirically testable predictions. Many of those who employ the scientific method (along with many who simply opt for other methods) are, however, extremely confused about the way prediction is understood and practiced. This makes it harder for critics to grasp these understandings and practices and, therefore, harder to formulate a critique. As a result, critics are often charged with inventing straw men. This paper takes empirical research on the alleged link between HRM practices and organisational performance as a case study of the scientific method in action. It reveals (some of) the confusion with which prediction is understood and practiced in the field. The lessons drawn are, however, applicable to almost any branch of social science where aspects of the scientific method are used. There are also lessons for critical realists, such as the need to enquire into why social systems are actually open.

Part one of this four-part paper provides a thumbnail sketch of the HRM-P paradigm and clarifies some issues surrounding the notions of science (or scientism) and prediction. Part two considers the meta-theory underpinning research on the HRM-P link and uncovers exactly why the HRM-P system is an open system. Part three unpacks the confusion surrounding the two notions of prediction that lie buried in the literature. Each of these notions is then scrutinised to reveal what is wrong with them, why the scientific method cannot actually harness the power of prediction and why, therefore, the scientific method fails to meet its own criteria for scientificity. The concluding part briefly considers explanation to prevent any confusion between it and prediction.

1. Some preliminaries

The HRM-P paradigm

HR professionals are desperately trying to demonstrate that HRM adds, rather than saps, value, and have turned to research from consulting houses[2] and academics.[3] The over-riding message emanating from this voluminous research is that a measurable, empirical association exists between an organisation’s HRM practices and its performance - henceforth referred to as the HRM-P link. As we will see, prediction plays a key role here.

From science to scientism

Supporting and sustaining research on the HRM-P link is what is often referred to as a ‘scientific’ approach. Boudreau & Ramstad (1997: 343) refer to ‘scientific studies;’ Murphy & Zandvakili (2000: 93) suggest that ‘scientific measures be used to evaluate the effectiveness of HRM practices’ referring to ‘data collected by scientific methodology;’ Brown refers to the ‘science of human capital measurement’ (2004: 40); and Thomas and Burgman (2005: 1) suggest that human capital management is moving ‘from art to science.’

We are not aware of any researchers in the HRM-P paradigm who have reflected upon, or defined, the scientific method they exclusively use. This is likely to be because when most postgraduate researchers learn ‘methodology,’ they are usually presented with a set of statistical techniques (usually with little or no discussion of serious methodology and/or philosophy of science), as if these techniques just are the only ones available for anyone who wants to be ‘scientific.’ This taken-for-granted attitude is illustrated by Booth’s work on the economics of trade unions, where she refers to ‘the accepted methodology of economics’ (1995: 83) without feeling the need to actually state what it is.[4]

To give a flavour of what this accepted scientific method entails in the HRM-P paradigm, we sketch it as follows. Although variations exist in the phenomena that are measured, and in the metrics and measures used to quantify them, HRM practices and organisational performance are quantified and empirical data generated. Various statistical techniques (typically regression, analysis of variance, correlation, structural equation modelling and factor analysis) are then applied to these quantitative data to empirically test various predictions (or hypotheses) to the effect that certain bundles of HRM practices lead to increased organisational performance.[5]
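To make this procedure concrete, here is a minimal sketch of the kind of test typically run in this literature. It is our own illustration, not a reconstruction of any particular study: the data are simulated, and every variable name and coefficient is hypothetical.

```python
# A sketch of the typical HRM-P test: regress a quantified performance
# measure on quantified HRM practices and inspect the coefficients.
# All data are simulated; no claim is made about real organisations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200  # hypothetical sample of organisations

# A quantified 'bundle' of HRM practices (hypothetical metrics)
training = rng.normal(50, 10, n)       # e.g. training hours per employee
selectivity = rng.normal(0.3, 0.1, n)  # e.g. hires per applicant

# Simulated performance measure. Note: the regularity is built in,
# i.e. the data-generating process is closed by construction.
performance = 2.0 + 0.05 * training - 1.5 * selectivity + rng.normal(0, 1, n)

# Fit the regression and run the conventional significance tests
X = sm.add_constant(np.column_stack([training, selectivity]))
model = sm.OLS(performance, X).fit()
print(model.params)   # estimated 'effects' of the HRM practices
print(model.pvalues)  # the 'empirical test' of the predictions
```

The sketch recovers the coefficients only because the simulated system is closed by construction; whether the social systems studied in the HRM-P literature are closed in this way is precisely the question raised in part two.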

Critics like us, however, argue that ‘scientism’ (or derivatives such as ‘scientistic’) is a more appropriate description of the method used in research on the HRM-P link.[6] The Collins Dictionary of Sociology (1995) defines scientism as ‘any doctrine or approach held to involve oversimplified conceptions and unreal expectations of science, and to misapply ‘natural science’ methods to the social sciences.’ Hughes and Sharrock (1997: 208) define scientism as ‘those philosophies such as positivism, which seek to present themselves as having a close affiliation with the sciences and to speak in their name, and which then go on to fetishize the so called scientific standpoint.’ For us, then, a perspective is scientistic if it loosely refers to the employment of methods and techniques allegedly similar to (some aspects of) natural science, without actually specifying what these methods and techniques are and why they are appropriate to social science.[7]

Centrality of prediction

Central to scientism is prediction - although this is due less to careful reflection, and more to a kind of taken-for-granted belief that science is primarily about formulating and testing predictions and not necessarily about meeting objectives like realisticness. Friedman’s (1988) paper on prediction as the sole criterion for evaluating theories has been enormously influential in grounding this belief. We can think of four main reasons for this centrality.

1.  One of the most powerful arguments for the alleged superiority of the scientific method over other methods (i.e. its scientificity) lies in the taken-for-granted belief that it alone is able to formulate empirically testable predictions. Perspectives that do not, or cannot, do this, and aim instead for things like the recovery of actors’ meaning (interpretivists, hermeneuticists, ethnomethodologists), the deconstruction of phenomena as texts (postmodernists or poststructuralists), the analysis of discourse (critical discourse analysts) and/or explanation of the causal mechanisms that actors interact with (critical realists[8]), are presumed to be unscientific and, in this sense, inferior.

2.  Methods that generate theories whose predictions can be empirically tested raise the possibility that some of these predictions will be successfully tested, thereby providing a basis for policy prescription. If we can predict the future outcome of an action, we may be able to initiate that action or, indeed, prevent it, in order to bring about desired outcomes.

3.  In some natural sciences (typically those where the system under investigation is spontaneously closed or can be closed easily), successful predictions can be made. This success encourages the belief that, if social scientists continue to follow the example of these ‘mature’ sciences, one day social sciences like HRM too will be able to make successful predictions. In the meantime, we should continue our efforts to generate successful predictions.

4.  Drawing on the work of Tsang & Kwan (1999: 769), we might say that prediction is superior to its close relative, accommodation. A researcher who constructs theory to fit the data accommodates the data. Another researcher may use this theory to make and test a prediction. Accommodation can be fudged: that is, the researcher knows the result the theory should generate and fudges the theory to fit the data. In the case of prediction, however, the theory comes into existence before the data and cannot be fudged.

Having established that research on the HRM-P link is usefully described as scientistic; having a grasp of what it entails; and recognising why prediction appears central, we can shift our focus towards understanding the meta-theory that underpins all this.

2. Meta-theoretical underpinnings of research on the HRM-P link

We intend to be relatively brief here, because this section involves the (now) fairly ‘standard’ critique of closed systems, although we do want to provide reasons why the particular system we are interested in, the HRM-P system, is actually open.

Scientism’s generally accepted method appears to be some (unspecified) variant, or combination, of the covering law model, the deductive-nomological model, the inductive-statistical model, or the hypothetico-deductive method. Following Lawson (1997; 2004), critical realists have referred to this variant as the deductive method, or simply deductivism. From this perspective, to ‘explain’ something is to deduce (or, symmetrically, predict) a claim about that something from a set of initial conditions, assumptions, axioms and law(s) or some other regular pattern of events.
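Schematically, in a standard rendering of the covering-law form (our notation), the claim E is deduced from laws and antecedent conditions:

```latex
% Covering-law (deductive-nomological) schema: the statement E --
% whether read as explanation or as prediction -- is deduced from
% covering laws L_1,...,L_k and antecedent conditions C_1,...,C_m.
\frac{L_1, \dots, L_k \qquad C_1, \dots, C_m}{\therefore\; E}
```

Read forwards the schema predicts E; read backwards it purports to explain E, which is why deductivism treats explanation and prediction as symmetrical.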

Scientism presupposes (explicitly or implicitly) an ontology consisting of what can be observed, that is, of observed events. Because these objects are confined to experience, the ontology is empirical; and because these objects are thought to exist independently of one’s identification of them, it is realist. The ontology is, therefore, empirical realist. If particular knowledge is gained through observing events, then general, including scientific, knowledge is only available if these events manifest themselves in some kind of pattern: a flux of totally arbitrary events would not result in knowledge. Scientific knowledge is, therefore, entirely reliant upon the existence and ubiquity of event regularities or constant conjunctions of events – we use these phrases interchangeably.

Critical realists typically generalise and express regularities between events as ‘whenever event x then event y’ or ‘whenever events x1, …, xn then event y’. Regularities between variables are more often expressed as functional relations, y = f(x) or y = f(x1, …, xn), and this is the way they appear in, for example, regression models.
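To spell out the correspondence (a sketch of the general form, not a model taken from any particular study):

```latex
% An event regularity, 'whenever x_1,...,x_n then y', expressed as a
% functional relation and then as the linear regression in which it
% typically appears in the HRM-P literature:
y = f(x_1, \dots, x_n)
\quad\leadsto\quad
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + \varepsilon
```

The coefficients β1, …, βn quantify the presumed regularity, and the error term ε absorbs whatever the regularity fails to cover.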

Research on the HRM-P link is preoccupied with what is referred to variously as testing the prediction, testing the hypothesis, testing the theory, testing the model, testing the model’s predictions, finding the predictors of the dependent variable, and so on. The terminology varies and, it must be said, is highly ambiguous, but the practice is well known. In what follows we will refer (where possible) to testing hypotheses. A hypothesis is a very precise statement about what will regularly happen to the magnitude of one variable when the magnitude of another variable (or variables) takes some value or changes. The key points, however, are that predictions and hypotheses are (a) only intelligible if they are expressed in terms of regularities between events or variables; and (b) only possible if event regularities are ubiquitous. Predictions and hypotheses, then, are only intelligible and possible if event regularities exist, and event regularities occur in closed systems. Let us consider closed and open systems in more depth.
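For instance, an illustrative hypothesis of our own construction (the variable names are hypothetical), stated in the regression notation above:

```latex
% An illustrative HRM-P hypothesis: more training regularly raises
% performance, i.e. the training coefficient is positive.
H_1:\ \beta_{\mathrm{training}} > 0
\quad\text{in}\quad
\mathrm{performance} = \beta_0 + \beta_{\mathrm{training}}\,\mathrm{training} + \varepsilon
```

Note how the hypothesis is unintelligible except as a claim about a regularity: it asserts that, other things being equal, a given change in training is regularly conjoined with a given change in performance.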

Closed and open systems

Whilst there are several ways to define systems, critical realists define systems as closed when they are characterised by event regularities, and open when characterised by a lack of such regularity.[9] Events are constantly conjoined in the sense that for every event y, there exists a set of events x1, x2, …, xn, such that y and x1, x2, …, xn are regularly conjoined. A deterministically closed system can be expressed probabilistically and can, thereby, be transposed into a stochastically closed system. Here y and x1, x2, …, xn are regularly conjoined under some well-behaved probabilistic function. In effect, the claim ‘whenever event x then event y’ is transposed into the claim ‘whenever events x1, x2, …, xn on average, then event y on average’, or ‘whenever the average values of events measured by variables x1, x2, …, xn are what they are, then the average value of the event measured by variable y is what it is’. Stochastically closed systems are still closed systems.
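Schematically (our notation):

```latex
% Deterministic closure: a strict event regularity
\text{whenever } x_1, \dots, x_n \text{ then } y
\quad\Longleftrightarrow\quad
y = f(x_1, \dots, x_n)

% Stochastic closure: the same regularity holding only 'on average',
% under a well-behaved probabilistic function
\mathbb{E}[\,y \mid x_1, \dots, x_n\,] = f(x_1, \dots, x_n),
\qquad y = f(x_1, \dots, x_n) + \varepsilon, \quad \mathbb{E}[\varepsilon] = 0
```

The move from the first form to the second loosens the regularity from every case to the average case, but the average-case regularity must itself hold strictly; this is why stochastic closure is still closure.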

The important point to note here is that without event regularities, that is to say, in open systems, prediction based upon inductive generalisation is not possible. If it is not the case that event y is observed to regularly follow events x1, x2, …, xn, then we have no grounds for the inductively generated prediction that the next time events x1, x2, …, xn occur, event y will follow.

Does the HRM-P literature presuppose a closed system? In a word: yes. To suggest, as the literature overwhelmingly tries to, that some HRM practices are statistically associated with increased performance is to assume regularity and hence closure. If textual evidence is needed, the following influential commentators even use terminology that could be lifted straight from virtually any critical realist account of closed systems:

‘Ideally, you will develop a measurement system that lets you answer questions such as, how much will we have to change x in order to achieve our target in y? To illustrate, if you increase training by 20 percent, how much will that change employee performance and, ultimately, unit performance?’ (Becker et al., 2001: 110)
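Becker et al.’s question translates directly into reading a predicted change off a fitted coefficient. A minimal sketch, continuing the hypothetical regression above (the coefficient value is assumed, not taken from any study):

```python
# Becker et al.'s 'how much will y change if we change x?' question,
# answered from a fitted linear model. All numbers are hypothetical.
beta_training = 0.05          # assumed fitted coefficient on training
current_training = 50.0       # assumed current training hours per employee
delta_training = 0.20 * current_training  # 'increase training by 20 percent'

predicted_change = beta_training * delta_training
print(predicted_change)  # 0.5, on the performance measure's scale

# The calculation is only licensed if the estimated regularity continues
# to hold out of sample -- that is, if the system is, and stays, closed.
```

This is exactly the kind of calculation that presupposes closure: the fitted coefficient is treated as a stable regularity that will survive the very intervention it is used to plan.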

Whilst constant conjunctions of events and, therefore, closed systems are fundamental to deductivism, they are exceptionally rare phenomena. There appear to be very few spontaneously occurring systems in the natural world wherein constant conjunctions of events occur, and virtually none in the social world. This is not to deny the possibility that constant conjunctions may occur accidentally, or over some restricted spatio-temporal region, or be trivial. But virtually all of the constant conjunctions of interest to science occur only in artificially closed systems, typified by the bench experiments of some natural sciences. In those natural sciences where experiments can be carried out, the point of the experiment is to close the system by engineering a particular set of conditions that will isolate the one mechanism of interest. This mechanism is then allowed to operate unimpeded and the results, the constant conjunctions, recorded. In social science, however, constant conjunctions only occur where they are engineered in the form of theoretically closed systems.