THE IMPOSED ETIC IN SURVEY RESEARCH: FACT OR FALLACY?

Gerald Albaum, University of New Mexico

Kenneth Baker, University of New Mexico

Abstract

An important question facing researchers who want to do cross-cultural/national research, which includes single-country research conducted in a foreign country, is whether a research method is an emic or an etic. Answering this question in general is nearly impossible. The best we can do is to ask whether a method can be viewed as an “imposed etic.” This paper discusses this question in the context of equivalence and raises some important questions that have yet to be answered.

Introduction

Research is research is research! In cross-cultural/national marketing and management research—indeed, in all social and behavioral science research—nothing could be further from the truth. Often, in research involving foreign markets, a research project is designed, planned, etc. by a person or persons in one culture (nation) for implementation in other cultures (nations). Moreover, the research methods and techniques that a researcher can choose from have been developed and tested (empirically, psychometrically, etc.) in a country other than the ones involved—typically the United States/Canada and, to an extent, the United Kingdom and Germany.

This situation leads to a major question regarding research methodology—is it a cultural emic or an etic? Many issues arise in cross-cultural/national research (Adler, 1983; Sekaran, 1983; Chan and Rossiter, 2003). Perhaps the single major issue is that of equivalence (Berry, 1980, pp. 8-11; Douglas and Craig, 1983, Chap. 5; Mullen, 1995; Salzberger, Sinkovics, and Schlegelmilch, 1999). This paper briefly discusses the equivalence dilemma and raises some important questions that need answers regarding the appropriateness of methodologies used in cross-cultural/national research. These questions have not been adequately answered in the literature. It is hoped that this paper will generate interest in furthering research on this all-important methodological issue. For the sake of simplicity we use the terms “culture” and “nation” interchangeably to define the domain of concern. But it must be remembered that culture does not always equal nation, so much research that is labeled “cross-cultural” is more appropriately “cross-national.”

The Emic/Etic Issue

An aspect of methodology is an emic when it is culture-bound; that is, it behaves in a specified way in one culture and one culture only. When it operates similarly in many cultures, it is considered culture-free and is an etic (Berry, 1980, p. 11). Examining a method in more than one culture assesses an aspect of external validity. In doing this, imposed etic validity is established by correctly predicting an outcome in a culture using a test imposed from another culture (Berry, 1980, p. 19). One concern is technique-related bias (form-related, measurement, sampling, and similar biases) arising from development in the culture/country of origin and adaptation to the specific cultural or national context of interest.

When emic and imposed etic validities have been “proven,” derived etic validity can be established. Berry (1980, p. 19) argues that this is appropriate for valid cross-cultural comparison and that derived etic validity must be based upon the known validity in two or more cultural systems. Viewed this way, imposed etic validity must be established one culture at a time. In an applied sense, this could lead to methodological complications for, say, a business firm that wants to study its corporate reputation in its multiple foreign markets, or even in only a small subset of those markets. What if the imposed etic of method held for only some cultures but not others? An overall derived etic validity could not be obtained.

Equivalence

The essence of concern for equivalence in cross-cultural research is captured in Figure 1, which is adapted from Salzberger (1997, p. 3). As discussed above, for imposed etic validity to exist there must be equivalence in effects of method between the nation where the method was developed and refined and the nation where it is to be applied. The issue of equivalence applies to cross-cultural management research labeled by Adler (1983) as polycentric (many cultures), comparative (contrasting many cultures), geocentric (international management), and synergistic (intercultural management studies). It also applies to studies of a single culture, what Adler (1983) calls parochial and ethnocentric research, when the methods and techniques used were not developed in the culture/nation where the study is being conducted.

Perusing Figure 1, we see that there are five major dimensions where equivalence is of concern: (1) research methods, (2) research topics, (3) research units, (4) research administration, and (5) data handling. The one aspect that is perhaps least obvious from the perspective of methodology is research topics. However, functional, conceptual, and categorical equivalence are of concern in measurement. This relates to developing so-called scales of measurement for constructs. A few researchers have tackled this “problem” (for example: Richins, 1985; Parameswaran and Yaprak, 1987; Singh, 1995; Mullen, 1995; Donoho, Herche, and Swenson, 2001).

But, there is yet another dimension of measurement equivalence. This concerns the more technical matter of form and research technique to obtain data. Equivalence for this dimension covers a broad spectrum of issues, much broader than we can handle in this paper. To illustrate what is involved we use as an example extreme response style.

Response Style

Response style refers to systematic ways of answering that are not directly related to question content but instead represent typical behavioral characteristics of the respondents.

Extreme Response Style (ERS) is the tendency of some respondents to favor or avoid answering in extreme intervals or categories on rating scales, independent of specific item content (Greenleaf, 1992, p. 328). Thus, ERS is more likely to interact with the form of questions, and can be viewed as a potential form-related error. Such an error has been shown to exist for the standard Likert-scale format (Albaum, 1997) and for the traditional semantic differential scale (Yu, Albaum, and Swenson, 2003). This research could be viewed as suggesting that an imposed etic exists for measurement forms that lead to a common type of error.

FIGURE 1

Aspects of Equivalence in Cross-Cultural Research

Form-related errors are systematic errors (Bardo and Yeager, 1982; Bardo, Yeager, and Klingsporn, 1982; Greenleaf, 1992; Phelps, Schmitz, and Boatright, 1986). These errors concern psychological orientation towards responding to different item formats and include the following types:

1. Leniency: the tendency to rate something too high or too low (i.e., to rate in an extreme way);

2. Central tendency: reluctance to give extreme scores; and

3. Proximity: giving similar responses to items that occur close to one another.

Differences in respondents’ use of extreme scale positions have been shown to exist between cultures even when this phenomenon has not necessarily been shown to represent a form-related error. Roster, Rogers, and Albaum (2004) examined five-category Likert scales and 10-point numerical rating scales and found differences among four cultures (nations).

While there has been a substantial amount of literature on ERS, little has been done on cross-cultural differences. But identifying cross-cultural differences, or similarities for that matter, in ERS is of interest both as a reflection of cultural differences (or similarities) on substantive dimensions and for its implications for cross-cultural research methodology and emic/etic properties (Marshall and Lee, 1997).

The relatively few methodological studies that have examined the emic/etic validity of scales in cross-cultural studies have been limited in scope. In their review of prior research, Marshall and Lee (1997) indicate that usually only two or three cultures are compared. Exceptions include the study by Stening and Everett (1984) of nine nations, the study by Marshall and Lee (1997) of six countries, and the study by van Herk, Poortinga, and Verhallen (2004) of six EU countries. Much of the prior research has been “content oriented” in the sense of measuring ERS for the individual respondent and then aggregating by one or more measures developed for that purpose. The proportion of responses that are extreme (Greenleaf, 1992; van Herk, Poortinga, and Verhallen, 2004) and the deviation from the midpoint of the scale (Marshall and Lee, 1997) are examples of such measures.
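These two measures are simple enough to express directly. The sketch below computes both for a single respondent; the function names and the five-point-scale default are our illustration, not taken from the cited studies:

```python
def ers_proportion(responses, scale_min=1, scale_max=5):
    # Greenleaf-style measure: share of answers in the two extreme categories.
    extremes = [r for r in responses if r in (scale_min, scale_max)]
    return len(extremes) / len(responses)

def midpoint_deviation(responses, scale_min=1, scale_max=5):
    # Marshall and Lee-style measure: mean absolute deviation from the
    # scale midpoint (3 on a 5-point scale).
    midpoint = (scale_min + scale_max) / 2
    return sum(abs(r - midpoint) for r in responses) / len(responses)

# One hypothetical respondent's answers on a 5-point scale
answers = [1, 5, 5, 3, 2, 5]
print(ers_proportion(answers))      # 4 of 6 answers are extreme
print(midpoint_deviation(answers))
```

Either score can then be aggregated across the respondents of a culture (nation) for group-level comparison.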

Such aggregative approaches are consistent with what appears to be a new direction taken by some cross-cultural psychologists: focusing on nations rather than individuals (Smith, 2004). Hopefully, methodological developments, where necessary, will follow along!
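A nation-level comparison of this kind amounts to averaging individual ERS scores within each country before comparing groups. A minimal sketch, with nation labels and scores invented purely for illustration:

```python
from statistics import mean

# Hypothetical individual ERS scores (proportion of extreme responses),
# grouped by nation; the values are invented for illustration only.
ers_by_nation = {
    "Nation A": [0.2, 0.4, 0.3],
    "Nation B": [0.6, 0.7, 0.5],
}

# Aggregate to the nation level and compare group means
nation_means = {nation: mean(scores) for nation, scores in ers_by_nation.items()}
for nation, m in sorted(nation_means.items()):
    print(f"{nation}: mean ERS = {m:.2f}")
```

In practice, a substantive test of whether such group means differ (and whether the difference reflects culture or a form-related artifact) is exactly the equivalence question this paper raises.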

In a recent variation of this approach, Clarke (2000, 2001) compared responses to ERS measures from student populations across four nations in an attempt to discover whether there were pervasive ERS tendencies between cultural groups. He found evidence to suggest ERS does present a “noise” factor in cross-cultural studies; furthermore, its effects seem to be related to the researcher’s choice of scale formats. In both studies, as the number of scale points increased, ERS decreased. These findings suggest that scales employing wider ranges may help to mitigate response bias in cross-cultural studies.

Conclusion

Perusal of the literature of cross-cultural/national studies, and of studies within a culture/nation, relevant to many disciplines suggests that imposed etic validity is assumed, without testing, as methods (data collection, measurement, etc.) developed and refined in one nation are used in other cultures/nations. In short, no attempt is made to assess etic properties. It may be reasonable to assume that some aspects of methodology may be better used when treated as an emic. When a research methodology is “applied” there is an interaction between that methodology and the people who are asked to respond to it—the research respondents or subjects. Since it is well known that people in different cultures/nations may differ in such basic characteristics as values (Hofstede, 2001; Kahle, Rose, and Shoham, 1999), it would be prudent to question at the outset the assumption of imposed etic validity for ERS in rating scales, and for most aspects of method as well. This view is consistent with that proposed by Adler (1991, p. 67): “assume difference until similarity is proven.”

If we can raise this question about scales then we certainly can raise it about all aspects of equivalence as shown in Figure 1. So, is imposed etic validity a fact or a fallacy? To answer this, one needs to answer many other questions, including:

1. Should a researcher have to test his or her application of all aspects of method to be used for imposed etic validity properties?

2. Can researchers trust other researchers who have done methodological studies, regardless of the culture being used?

3. Is it practical (i.e., economically feasible) to do this for all cultures (nations), or can we assume similarity at least for nations having low psychic/psychological distance between them? In short, can we group cultures (nations) in some meaningful way?

4. If we assume there always will be some differences, is there an acceptable level of difference in method effects, and how much tolerance can academic and practitioner researchers accept?

If we were to answer the major question underlying this discussion we would have to state that the imposed etic is both fact and fallacy!

References

Adler, N. J., (1983) “A Typology of Management Studies Involving Culture,” Journal of International Business Studies, Fall, 29-47.

Adler, N. J., (1991) International Dimensions of Organizational Behavior. Boston: PWS-Kent Publishing Co.

Albaum, G., (1997) “The Likert Scale Revisited: An Alternate Version,” Journal of the Market Research Society, 39 (2), 331-348.

Bardo, J. W. and Yeager, S.J., (1982) “Consistency of Response Style Across Types of Response Formats,” Perceptual and Motor Skills, 55 (1), 307-310.

Bardo, J.W., Yeager, S.J., and Klingsporn, M.J., (1982) “Preliminary Assessment of Format-Specific Central Tendency and Leniency Error in Summated Rating Scales.” Perceptual and Motor Skills, 54 (1), 227-234.

Berry, J.W., (1980) “Introduction to Methodology,” In H.C. Triandis & J.W. Berry (Eds.), Handbook of Cross-Cultural Psychology: Vol. 2. Methodology. Boston: Allyn and Bacon, Inc., 1-28.

Chan, A. and Rossiter, J., (2003) “Measurement Issues in Cross-Cultural Values Research,” Proceedings of the ANZMAC 2003 Conference, Adelaide, Australia, 1583-1589.

Clarke, I., III, (2001) “Extreme Response Style in Cross-Cultural Research,” International Marketing Review, 301-324.

Clarke, I., III, (2000) “Global Marketing Research: Is Extreme Response Style Influencing Your Results?,” Journal of International Consumer Marketing, 12 (4), 91-111.

Donoho, C. L., Herche, J., and Swenson, M.J., (2001) “Assessing the Transportability of Measures Across Cultural Boundaries: A Personal Selling Context,” Proceedings of the Cross-Cultural Research Conference, Turtle Bay, Hawaii.

Douglas, S.P., and Craig, C.S., (1983) International Marketing Research. Englewood Cliffs, NJ: Prentice-Hall.

Greenleaf, E., (1992) “Measuring Extreme Response Style,” Public Opinion Quarterly, 56, 328-351.

Hofstede, G., (2001) Culture’s Consequences, Second Edition. Thousand Oaks, CA: Sage Publications.

Kahle, L.R., Rose, G., and Shoham, A., (1999) “Findings of LOV Throughout the World and Other Evidence of Cross-National Psychographics Introduction,” Journal of Euromarketing, 8 (½), 1-14.

Marshall, R., and Lee, C., (1997) “A Cross-Cultural Between-Gender Study of Extreme Response Style,” Advances in Consumer Research, Vol. 3.

Mullen, M.R., (1995) “Diagnosing Measurement Equivalence in Cross-National Research,” Journal of International Business Studies, Third Quarter, 573-596.

Parameswaran, R. and Yaprak, A., (1987) “A Cross-National Comparison of Consumer Research Measures,” Journal of International Business Studies, Spring, 35-49.

Phelps, L., Schmitz, C.D., and Boatright, B., (1986) “The Effects of Halo and Leniency on Co-operating Teacher Reports Using Likert Rating Scales,” Journal of Educational Research, 79 (Jan-Feb), 151-154.

Richins, M.L., (1986) “Adapting Psychometric Measures for Use in Cross-Cultural Consumer Research,” paper presented at the 1986 Annual Conference of the Institute for Decision Sciences.

Roster, C., Rogers, R., and Albaum, G., (2004) “A Cross-Cultural/National Study of Respondents’ Use of Extreme Categories for Rating Scales,” unpublished paper.

Salzberger, T., (1997) “Problems of Equivalence in Cross-Cultural Marketing Research,” Unpublished Working Paper, Department of International Marketing and Management, Vienna University of Economics and Business Administration, Vienna, Austria.

Salzberger, T., Sinkovics, R.R., and Schlegelmilch, B.B., (1999) “Data Equivalence in Cross-Cultural Research: A Comparison of Classical Test Theory and Latent Trait Theory Based Approaches,” Australasian Marketing Journal, 7 (2), 23-38.

Sekaran, U., (1983) “Methodological and Theoretical Issues and Advancements in Cross-Cultural Research,” Journal of International Business Studies, Fall, 61-73.

Singh, J., (1995) “Measurement Issues in Cross-National Research,” Journal of International Business Studies, Third Quarter, 597-619.

Smith, P.B., (2004) “Nations, Cultures, and Individuals: New Perspectives and Old Dilemmas,” Journal of Cross-Cultural Psychology, 35 (1), 6-12.

Stening, B.W. and Everett, J.E., (1984) “Response Styles in a Cross-Cultural Managerial Study,” Journal of Social Psychology, 122 (2) 151-156.

van Herk, H., Poortinga, Y.H., and Verhallen, T.M.M., (2004) “Response Styles in Rating Scales: Evidence of Method Bias in Data from Six EU Countries,” Journal of Cross-Cultural Psychology, 35, 346-360.

Yu, J., Albaum, G., and Swenson, M., (2003) “Is a Central Tendency Error Inherent in the Use of Semantic Differential Scales in Different Cultures?,” International Journal of Market Research, 45 (2), 213-228.