Interactions May Be the Rule Rather Than the Exception,

But . . . :

A Note on Issues in Estimating Interactions

in Theoretical Model Tests

ABSTRACT

Authors have called for more frequent investigation of interactions in theoretical models involving survey data. However, there are competing proposals for interaction specification in structural equation models, and there are other interaction issues that have received little or no attention in theoretical model testing. For example, what types of evidence suggest that an interaction should be hypothesized? Is an interaction a construct, a mathematical form, or both? There are also theoretical and practical issues with the existing structural equation estimation proposals for interactions. For example, specifying the hypothesized interaction between X and Z simply as the product XZ is an insufficient disconfirmation test. This paper critically addresses these and other theory-testing matters in conceptualizing, estimating and interpreting interactions in survey data.

Authors have noted that in some disciplines interactions may be the rule rather than the exception in survey data models (e.g., Jaccard, Turrisi and Wan 1990). However, interactions in published theoretical model tests with survey data are comparatively rare (Aiken and West 1991). Perhaps as a result, recent conference keynote speakers have called for increased investigation of substantive interactions. While the infrequent appearance of interactions in substantive structural equation papers may have been the result of the unavailability of suitable analysis tools until recently, substantive researchers also may not be accustomed to conceptualizing and estimating interactions. Theoretical model building with an interaction involves conceptualizing at least three variables, developing theory to justify two variables' proposed associations with a target variable, then developing additional theory for the variability (form) of at least one of these relationships. In ANOVA, by contrast, interactions are usually estimated en masse after the main effects are estimated. In this event, theorizing about interactions occurs after they are found to be significant, if at all.

The "value" added by the increased effort required for theorizing and assessing interactions may seem comparatively low, especially for theoretical models involving "interesting" new constructs or "interesting" new paths among constructs. There is also the tedium of interaction specification using the existing structural equation estimation proposals.

Anecdotally, some authors believe interactions should not appear in theoretical models because they are not "proper" constructs: they are not "indicated" (pointed to) by observed variables.

Further, specifying an hypothesized interaction between X and Z, for example, as the product of X and Z, XZ, is an insufficient test of that interaction. XZ is but one of many possible interaction forms (Jaccard, Turrisi and Wan 1990).

THE PRESENT RESEARCH

This paper critically addresses several matters related to interactions in theoretical model tests involving survey data. For example, it discusses foreseeing the plausibility of interactions at the model-building stage, then justifying them theoretically. It also discusses how interaction hypotheses are phrased, and it discusses issues involving the existing estimation proposals for interactions, such as the lack of an adequate interaction disconfirmation test. Along the way it discusses matters such as interpreting interactions in survey data, probing for interactions after the hypothesized model has been estimated, and the "trap" of reporting interactions that are found post hoc as though they were hypothesized before the model was first estimated.

CONCEPTUALIZING INTERACTIONS

We will discuss each of these matters, beginning with conceptualizing or envisioning interactions--foreseeing their plausibility at the model-building stage.

Because interactions are usually characterized as "moderators," this characterization might hinder their theory development. The term "moderator" might signal that X, for example, reduces the Z-Y association as X increases, and that interactions are restricted to this case. However, X actually might reduce the Z-Y association as X decreases; in that case X amplifies the Z-Y association as X increases. There are other forms of the interaction meaning of "moderation" as well. For example, X could reduce the Z-Y association at one end of the range of X in the study, and it could amplify the Z-Y association at the other end. Such interactions are termed disordinal interactions.

Thus, it may be useful to resist thinking of interactions as moderators in the development of a model. Instead, it may be fruitful to think of what might happen to the Z-Y relationship, for example, when X is at a low level versus when X is high. For example, it is well known that relationship satisfaction reduces relationship exiting, and attractive alternatives increase exiting (e.g., Ping 1993). However, what would happen if a data set were split at the median of satisfaction, and the strength of the alternatives-exiting association were compared between the two split halves? Alternatives should increase exiting when satisfaction is lower, but when satisfaction is higher, the effect of alternatives on exiting should be lower or nonsignificant (see Ping 1994).

While this split-halves approach has been disparaged for interaction estimation (e.g., Lubinski and Humphreys 1990), it may be a fruitful "thought experiment" for conceptualizing interactions. Because of concerns about model parsimony, this thought experiment should probably be restricted to major constructs: for the "most important" pair of variables that should be related, could this relationship plausibly change in strength or direction between low and high values of a third variable? Specifically, for low values of X, for example, should the Z-Y association somehow be different from the Z-Y association when X is high?
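For concreteness, the following is a minimal sketch (in Python) of how the split-halves thought experiment might be carried out, assuming a hypothetical survey DataFrame with columns named 'satisfaction', 'alternatives' and 'exiting' (all names are illustrative, not from any particular study). Again, the split is for conceptualization only, not for estimating the interaction.

```python
# A minimal sketch of the low/high "thought experiment" described above.
# Assumes a hypothetical pandas DataFrame with 'satisfaction', 'alternatives'
# and 'exiting' columns; the column names are illustrative.
import pandas as pd
from scipy.stats import linregress

def alt_exit_slope(subset: pd.DataFrame):
    """Simple regression slope of exiting on alternatives within a subset."""
    return linregress(subset['alternatives'], subset['exiting'])

def split_half_comparison(df: pd.DataFrame) -> None:
    """Compare the alternatives-exiting slope below and above the satisfaction median."""
    median_sat = df['satisfaction'].median()
    low = alt_exit_slope(df[df['satisfaction'] <= median_sat])
    high = alt_exit_slope(df[df['satisfaction'] > median_sat])
    print(f"Low-satisfaction half:  slope = {low.slope:.3f}, p = {low.pvalue:.3f}")
    print(f"High-satisfaction half: slope = {high.slope:.3f}, p = {high.pvalue:.3f}")

# If the alternatives-exiting slope differs materially between the halves,
# a satisfaction x alternatives interaction may be worth hypothesizing
# (and then estimating properly, not via split halves).
```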

Obviously this thought experiment could be conducted on new models. It also could be conducted on previously investigated models that did not consider interactions. Specifically, because a disordinal interaction could have caused a main effect to be nonsignificant in a published study (see Aiken and West 1991), previous studies with nonsignificant hypothesized associations might be fruitfully considered for the above thought experiment. A disordinal interaction also could have caused a main effect to be positive in one study and negative in another (see Aiken and West 1991). Thus, previous studies with an association that is not consistently significant, or not consistent in sign, might be fruitfully considered for the thought experiment. It even might be fruitful to consider the major associations in a previously investigated model for the thought experiment, even if they have been consistently significant and in the same direction.

There is an additional way to identify interactions for theory development (and estimation in a future study): post hoc probing for them as in ANOVA. This matter will be discussed later.

JUSTIFYING INTERACTIONS

However, an interaction can be challenging to justify theoretically. While existing theory might be available to directly support a plausible interaction, it is more likely that there has been little previous thought about the target interaction. Confronting topics about which there has been little previous thought has been a hallmark of science, and researchers have used many strategies to construct explanations for the topic under study. While these can include deduction, induction and abduction (see Peirce 1931–1935, 1958), in general, researchers use any sort of evidence, including direct experience such as exploratory focus groups, to support a proposed interaction.

For example, and as previously mentioned, satisfaction and alternatives both affect exiting. However, Ping (1994) argued that satisfaction attenuates the alternatives-exiting association. To justify this hypothesis he used prior arguments that at high satisfaction, subjects were not aware of alternatives (Dwyer, Schurr and Oh 1987) or they devalued them (Thibaut and Kelley 1959). He also noted that alternatives previously had been argued to increase exiting, and that argument had been empirically "confirmed." To resolve this paradox, he proposed the interaction. In this case he used existing arguments about high satisfaction, existing theory and prior results about alternatives and exiting, and a proposal that the prior alternatives-increases-exiting results likely applied to lower-satisfaction samples, to justify a proposed interaction. These results also might have been found in focus groups of low- and high-satisfaction subjects.

However, it usually is insufficient to use "experience" and previous writings as justification; a "why" must be supplied. For example, at low satisfaction, increasing alternatives are likely to increase exiting because with reduced satisfaction (reduced relationship rewards) the alternatives' rewards may appear more attractive than the current relationship's. At high satisfaction, alternatives are not likely to be associated with exiting because the effort (cost) to compare rewards is unnecessary, or the alternatives' rewards are less than certain (a risk). In general, rewards and costs (see for example Shaw and Costanzo 1982), and risk (Kahneman and Tversky 1979), have been used to justify considerable research involving human behavior, and they might continue to be useful for interactions. Examples of interaction justification can be found in Aiken and West (1991) and the citations therein, Ajzen and Fishbein (1980), Kenny and Judd (1984), and Ping (1994, 1999).

HYPOTHESIZING INTERACTIONS

Given that X, for example, is argued to increase the Z-Y association, should the hypothesis be stated as "X moderates the Z-Y association?" Terms such as "interacts with," "modifies," "amplifies" or "increases" would be more precise. Specifically,

H: X interacts with/modifies/amplifies/increases the Z-Y association

would be more precise. However, it still may be insufficient, especially if the low-high thought experiment and the justification approach suggested above are used. In this case

H: At low X, the Z-Y association is comparatively weak, while at high X the Z-Y association is stronger,

would fit a low-high argument.

AN EXAMPLE

As previously mentioned, attractive alternatives are likely to increase relationship exiting. Relationship dissatisfaction amplifies this alternatives-exiting relationship. Specifically, when dissatisfaction is low the positive alternatives-exiting association should be weak (small, possibly nonsignificant). However, when dissatisfaction is high, the alternatives-exiting association should be stronger (larger compared to the low-dissatisfaction coefficient).

Thus, in this case a "complete" interaction hypothesis would be

H1a: Dissatisfaction is positively associated with exiting,

H1b: Alternatives are positively associated with exiting, and

H1c: Dissatisfaction interacts with/modifies/amplifies/increases the alternatives-exiting association.

Alternatively,

H1c': As dissatisfaction increases, the alternatives-exiting association becomes stronger.

Note that the interaction hypothesis is accompanied by two other hypotheses involving dissatisfaction and alternatives with exiting, and that "moderates," in its "to reduce" sense, would not be appropriate for this amplification argument.

Instead, one might hypothesize

H1c": When dissatisfaction is low the alternatives-exiting association is weaker than it is when dissatisfaction is higher.

Obviously, an equivalent interaction hypothesis statement would be "as dissatisfaction declines, the alternatives-exiting association becomes weaker." However, this may not match the above argument quite as well as H1c-H1c". It would match the above argument if the "direction" of the argument were reversed (i.e., "when dissatisfaction is high the positive alternatives-exiting association should be strong (large), but when dissatisfaction is lower, the alternatives-exiting association should be weaker (smaller, possibly nonsignificant)").

A property of the dissatisfaction x alternatives interaction that is useful in justifying interactions and framing their hypotheses is its symmetry. To explain, an abbreviated structural equation involving dissatisfaction (DISSAT), alternatives (ALT), and exiting (EXIT) would be

EXIT = a DISSAT + b ALT + c DISSATxALT .    (1)

Factoring,

EXIT = a DISSAT + (b + c DISSAT) ALT .    (2)

In words, since b and c are constants, as DISSAT changes from subject to subject in the study, the structural coefficient of ALT, b + c DISSAT, changes.

However, Equation 1 could be refactored into

EXIT = b ALT + (a + c ALT) DISSAT .    (3)

Thus, as ALT changes, the structural coefficient of DISSAT, a + c ALT, changes; that is, ALT also interacts with DISSAT.
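This symmetry can be verified symbolically. The following minimal sketch (Python with the sympy package; the symbol names simply follow Equations 1 through 3 above) restates the factoring and adds nothing beyond the algebra in the text.

```python
# A small symbolic check of the symmetry in Equations 1-3.
import sympy as sp

a, b, c, DISSAT, ALT = sp.symbols('a b c DISSAT ALT')
exit_eq = a*DISSAT + b*ALT + c*DISSAT*ALT            # Equation 1

print(sp.collect(exit_eq, ALT))     # groups the ALT terms: a*DISSAT + (b + c*DISSAT)*ALT, i.e., Equation 2
print(sp.collect(exit_eq, DISSAT))  # groups the DISSAT terms: b*ALT + (a + c*ALT)*DISSAT, i.e., Equation 3
```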

In general, if X interacts with Z in the Z-Y association, then Z interacts with X in the X-Y association. In the DISSATxALT case it turns out that it is easier to argue that increasing alternatives increases the dissatisfaction-exiting association, so an interaction hypothesis such as

H1c'": Alternatives interact with/modify/amplify/increase the dissatisfaction-exiting association,

H1c"": As alternatives increase, the dissatisfaction-exiting association becomes stronger, or

H1c""': When alternatives are low the dissatisfaction-exiting association is weaker than it is when alternatives are higher,

is appropriate.

For emphasis, two thought experiments are possible with DISSAT: what happens to the DISSAT-EXIT association as ALT changes, and what happens to the ALT-EXIT association as DISSAT changes?

INTERACTION COST-BENEFITS

The "value" of the increased effort required for theorizing and determining interaction effects may be comparatively low, especially for theoretical models involving "interesting" new constructs or new relationships among constructs.

Specifically, and as mentioned earlier, an interaction involves theory development for two variables' associations with a target variable, and additional theory development for the variability (form) of at least one of these relationships. In addition, interactions typically explain comparatively little additional variance (Cohen and Cohen 1983). Specifically, in Equation 1, adding DISSATxALT explains little additional variance in EXIT. However, in theory testing it is more important to know how an interaction affects a target relationship, and thus the behavior of its factored coefficient (e.g., b + c DISSAT in Equation 2), than to explain a large amount of additional variance. (Materially explaining additional variance is important in model building, e.g., in epidemiology.) Parenthetically, experience suggests that even for an interaction that explains comparatively little additional variance, a factored coefficient such as b + c DISSAT can be quite large for some values of DISSAT.
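As a brief illustration of this last point, the following sketch uses hypothetical values of b and c (they are not estimates from any study) to show how much a factored coefficient can change across an assumed 1-to-5 range of DISSAT.

```python
# Illustrative only: hypothetical b and c, hypothetical 1-5 response range.
b, c = 0.10, 0.15                      # hypothetical unstandardized estimates
for dissat in range(1, 6):
    print(f"DISSAT = {dissat}: ALT coefficient = {b + c*dissat:.2f}")
# The ALT coefficient grows from 0.25 at DISSAT = 1 to 0.85 at DISSAT = 5,
# even if adding DISSATxALT raises explained variance only slightly.
```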

Interactions also reduce parsimony. Specifically, adding an interaction decreases degrees of freedom and increases model collinearity. However, if there is strong theoretical justification for an interaction, it is likely the interaction will be significant in any reasonably sized sample. In addition, an hypothesized interaction's collinearity is an important part of a model's test.

Papers with interactions tend to be overly methods oriented, and the interactions tend to dominate the paper. One method to reduce the appearance of a "methods paper" is to place the interaction details in an appendix. Considering interaction(s) for major exogenous constructs only also should reduce their apparent "dominance" in a paper.

PROPOSED APPROACHES

Unfortunately, issues with structural equation estimation approaches for interactions may further reduce their apparent value. Specifically, there are several proposals for specifying latent variable interactions, including (1) Kenny and Judd 1984; (2) Bollen 1995; (3) Jöreskog and Yang 1996; (4) Ping 1995; (5) Ping 1996a; (6) Ping 1996b; (7) Jaccard and Wan 1995; (8) Jöreskog 2000; (9) Wall and Amemiya 2001; (10) Mathieu, Tannenbaum and Salas 1992; (11) Algina and Moulder 2001; (12) Marsh, Wen and Hau 2004; (13) Klein and Moosbrugger 2000/Schermelleh-Engel, Klein and Moosbrugger 1998/Klein and Muthén 2002; and (14) Moulder and Algina 2002.

They are all very tedious to use, most are inaccessible to substantive researchers (Cortina, Chen and Dunlap 2001), and some do not involve Maximum Likelihood estimation, or commercially available estimation software (proposals 2, 6 and 13).

Several of these proposals have not been formally evaluated for bias and inefficiency (i.e., proposals 8 and 10). In addition, proposal 10 did not perform well in a comparison of interaction estimation approaches (see Cortina, Chen and Dunlap 2001).

Most of these proposals are based on the Kenny and Judd product indicators (for example, x1z1, x1z2, ... x1zm, x2z1, x2z2, ... x2zm, ... xnzm, where n and m are the number of indicators of X and Z respectively). However, specifying all the Kenny and Judd product indicators usually produces model-to-data fit problems (e.g., Jaccard and Wan 1995).
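For concreteness, the following minimal sketch (Python; the indicator matrices X and Z are hypothetical, with one observed indicator per column) forms the full set of Kenny and Judd product indicators. Specifying all of these products in one model is what typically produces the fit problems just noted.

```python
# A sketch of forming all Kenny and Judd (1984) product indicators from
# two hypothetical indicator matrices (one indicator per column).
import pandas as pd

def kenny_judd_products(X: pd.DataFrame, Z: pd.DataFrame) -> pd.DataFrame:
    """Return all n*m product indicators x_i * z_j as columns of a new DataFrame."""
    products = {}
    for xi in X.columns:
        for zj in Z.columns:
            products[f"{xi}_{zj}"] = X[xi] * Z[zj]
    return pd.DataFrame(products)
```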

Several proposals use weeded subsets of the Kenny and Judd (1984) product indicators, or indicator aggregation, to avoid these fit problems (proposals 3, 4, 5, 7, 9, 11, 12 and 14). Unfortunately, weeding the Kenny and Judd product indicators raises questions about the face or content validity of the resulting interaction (e.g., if all the indicators of X and Z are not represented in the indicators of XZ, is XZ still the product of X and Z as they were operationalized in the study?) (proposals 3, 7, 9, 11, 12 and 14). In addition, the formula for the reliability of a weeded XZ is unknown. Specifically, the formula for the reliability of XZ is a function of (unweeded) X and (unweeded) Z, and thus it assumes XZ is operationally (unweeded) X times (unweeded) Z. Weeded Kenny and Judd product indicators also produce interpretation problems with factored coefficients, because XZ is no longer (unweeded) X times (unweeded) Z operationally (see Equation 2).
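For reference, a commonly used formula for the reliability of the product of mean-centered X and Z, which assumes normality and, as noted, unweeded X and Z, can be sketched as follows; the input values in the example are hypothetical.

```python
# Reliability of XZ from the reliabilities of (unweeded, mean-centered) X and Z
# and their correlation, under the usual normality assumption.
def product_reliability(rel_x: float, rel_z: float, corr_xz: float) -> float:
    return (corr_xz**2 + rel_x * rel_z) / (corr_xz**2 + 1.0)

# Hypothetical example: X and Z reliabilities of .85 and .80, correlation .30
print(product_reliability(0.85, 0.80, 0.30))   # about 0.71
```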

Finally, proposal 4 has none of these drawbacks, except that it is tedious and it assumes the loadings of X and Z are tau-equivalent. Proposal 4's tediousness may be reduced using specification templates. Although the tau-equivalency assumption can be removed using weighting, experience with real-world data suggests that interaction significance is not particularly sensitive to this assumption.
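As a rough sketch of what a single-product-indicator specification in the spirit of proposal 4 involves, the following computes a loading and error variance for the product indicator (sum of X's indicators) times (sum of Z's indicators) from measurement-model estimates, under the usual mean-centering and normality assumptions. The input values are hypothetical, and Ping (1995) should be consulted for the exact specification.

```python
# A sketch, not a substitute for the published specification (Ping 1995).
def single_indicator_params(lam_x, theta_x, var_x, lam_z, theta_z, var_z):
    """
    lam_x, lam_z     : lists of indicator loadings for X and Z
    theta_x, theta_z : lists of indicator measurement-error variances
    var_x, var_z     : variances of the latent variables X and Z
    Returns the loading and error variance of the indicator (sum x_i)(sum z_j).
    """
    Lx, Lz = sum(lam_x), sum(lam_z)
    Tx, Tz = sum(theta_x), sum(theta_z)
    loading = Lx * Lz
    error_var = Lx**2 * var_x * Tz + Lz**2 * var_z * Tx + Tx * Tz
    return loading, error_var

# Hypothetical measurement-model estimates:
print(single_indicator_params(
    lam_x=[1.0, 0.9, 0.8], theta_x=[0.30, 0.35, 0.40], var_x=1.2,
    lam_z=[1.0, 0.95],     theta_z=[0.25, 0.30],       var_z=1.0))
```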

INTERPRETING INTERACTIONS

Once an interaction in survey data is estimated, how should one interpret it? Two-point graphical techniques such as those used in ANOVA ignore much of the information available in survey data. For example, authors have noted that the significance of the moderated association can vary across the range of the interacting variable (Aiken and West 1991, Jaccard, Turrisi and Wan 1990).

There have been several proposals for interpreting regression interactions (e.g., Aiken and West 1991; Darlington 1990; Denters and Van Puijenbroek 1989; Friedrich 1982; Hayduk 1987; Hayduk and Wonnacott 1980; Jaccard, Turrisi and Wan 1990; Stolzenberg 1979). However, there is little guidance for interpreting latent variable interactions. The following presents an approach adapted from Friedrich's (1982) suggestions for interpreting interactions in regression (see Darlington 1990; Jaccard, Turrisi and Wan 1990).

AN EXAMPLE

To explain this suggested interpretation approach, a real-world, but disguised, survey data set will be analyzed. The abbreviated results of a LISREL 8 Maximum Likelihood estimation of a structural model are shown in Table A. There the XZ interaction is large enough to warrant interpretation (i.e., its coefficient bXZ is significant). Interpretation of this interaction relies on tables such as Table B that are constructed using factored coefficients such as the factored coefficient of Z, bZ + bXZX, from Table A. Column 2 in Table B, for example, shows the factored coefficient of Z from Table A (.047 - .297X) at several Column 1 levels of X in the study. Column 3 shows the standard errors of these factored coefficients of Z at the Column 1 levels of X, and Column 4 shows the resulting t-values. Footnotes b) through d) in Table B further explain the Column 1-4 entries. In particular, Footnote b) explains how values for the unobserved variable X are determined by the values of its indicator that is perfectly correlated with it (i.e., the indicator of X with a loading of 1). In addition, Footnote d) discusses the Standard Error of the (factored) coefficient of Z. The variance of b is of course the square of the Standard Error of b, and Cov(bZ,bXZ), the covariance of bZ and bXZ, is equal to r(bZ,bXZ)SE(bZ)SE(bXZ), where r is the "CORRELATIONS OF ESTIMATES" value for bZ and bXZ in LISREL 8, and SE indicates Standard Error.
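To illustrate how a table such as Table B can be computed, the following sketch uses the factored coefficient of Z from Table A (.047 - .297X); the standard errors, the estimate correlation r, and the assumed 1-to-5 levels of X are hypothetical placeholders, because Tables A and B are not reproduced here. The standard error of the factored coefficient at a given level of X is the square root of Var(bZ) + X^2 Var(bXZ) + 2X Cov(bZ,bXZ), with Cov(bZ,bXZ) computed as described above.

```python
# A sketch of building a Table B-style interpretation table.
import math

bZ, bXZ = 0.047, -0.297                 # from Table A, per the text
se_bZ, se_bXZ, r = 0.10, 0.08, -0.25    # hypothetical placeholders
cov = r * se_bZ * se_bXZ                # Cov(bZ, bXZ) = r * SE(bZ) * SE(bXZ)

print(f"{'X':>3} {'bZ + bXZ*X':>12} {'SE':>8} {'t':>8}")
for x in range(1, 6):                   # levels of X in the study (assumed 1-5 here)
    coef = bZ + bXZ * x
    se = math.sqrt(se_bZ**2 + x**2 * se_bXZ**2 + 2 * x * cov)
    print(f"{x:>3} {coef:>12.3f} {se:>8.3f} {coef/se:>8.2f}")
```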