Abstract Number: 020-0186

Creating an Enabling Tool for Facilitating Engagement in Continuous Innovation Programmes

Dr Helen T Wagner1, Dr Susan C Morton1, and Prof Chris J Backhouse1

1 Manufacturing Organisation Group, Wolfson School of Mechanical & Manufacturing Engineering, Loughborough University, UK

POMS 22nd Annual Conference, Nevada, USA

April 29 to May 2, 2011

Abstract

Although Lean Manufacturing is an established concept in both academia and industry, consideration of the stages that follow a company-wide Lean initiative has received far less attention. Pursuing continuous innovation (CI) takes commitment from all involved, and the gap between knowing about continuous innovation and actually doing it needs to be closed. To facilitate organizational CI, a need has been identified for a bespoke tool that will enable managers to understand their people and support problem-solving activities, at the supervisory/team management level in particular. Having identified five main constructs that contribute to successful engagement of employees in CI, and the diagnostic tools that measure them, the authors researched the process of questionnaire development. This offered guidelines for planning, question wording, ordering and presentation, which were acted upon during development. The work has resulted in a bespoke tool for facilitating engagement that will add to the information available to managers and academics alike.

Introduction

Lean Manufacturing is a well-established concept in both academia and industry; however, what is required when moving on from a Lean initiative to attain further benefits is a far less established field. What lies beyond Lean in relation to performance improvement requires further investigation; the gap between knowing about continuous innovation (CI) and doing it also needs to be removed, or at the very least reduced. To facilitate continuous organizational innovation, a requirement has been identified for a new diagnostic tool for use by managers to assess all levels of the organization and to assist with problem solving at the supervisory/team management level in particular.

Although the information needed could be collected by interviewing workers within an organization, a standardized questionnaire has several advantages. Studies show that people are often more honest when completing a self-administered questionnaire [1, 2]. Questionnaires provide an easy and time-effective route to collecting data from many people, offer anonymity, and limit researcher bias, while the structured format ensures every respondent reads and answers exactly the same questions, which makes for robust analysis [3].

Devising a new questionnaire is not an easy task [1], and researchers often underestimate what is required, thinking that because they have knowledge of a topic they are capable of developing a good questionnaire [4]. In fact, it is a highly complex and time-consuming process [5] that cannot be short-cut, no matter how tempting that may be [1]. The process requires not only thorough knowledge but attention to detail [4] and a ‘stringent and scrupulous’ approach to ensure the data collected provides what is required in a usable form [5]. This is essential, as the consequences of decisions made during the design phase impact directly on the results obtained [6] and, therefore, on the findings and validity of the study.

The purpose of this paper, therefore, is to introduce the process undertaken to create a new, bespoke questionnaire designed to assess the factors affecting employee engagement in continuous innovation programmes.

Existing Tools that Measure the Constructs Affecting Employee Engagement in CI

Five constructs have been identified that affect the potential for employees to engage with a CI programme, as outlined by Wagner et al. [7]. In order to incorporate each of these constructs (creativity, empowerment, leader-member relationship, team role and leadership style) into the new questionnaire, the existing tools that measure them were assessed.

Job Diagnostic Survey

The Job Characteristics Model (JCM) proposed by Hackman and Oldham [8] is a tool for analysing the satisfaction and motivating potential offered by a job role. The quantitative calculation of each component of the model is facilitated by the Job Diagnostic Survey (JDS) [9], which was developed for use “in research and evaluation activities aimed at assessing the effects of redesigned jobs on the people who do them” [10].

Questions are set out to identify the core characteristics using two techniques: standard questions and reverse-scored questions. Some researchers have experienced problems with this approach and have sought to make improvements [11], but when their revised questioning was tested by others in a direct comparison, no improvement was found and the authors recommended continued use of the original questionnaire [12]. The results of the JDS are entered into the Motivating Potential Score (MPS) equation, also created by Hackman and Oldham [8], where each component in the equation is scored from 1 to 7, with results ranging from 1 to 343 and scores commonly around 150 [13].
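For reference, the MPS equation combines the five core job characteristics by averaging the three skill-related characteristics and multiplying by autonomy and feedback:

\[ \text{MPS} = \left( \frac{\text{Skill Variety} + \text{Task Identity} + \text{Task Significance}}{3} \right) \times \text{Autonomy} \times \text{Feedback} \]

With each characteristic scored from 1 to 7, the minimum possible MPS is 1 × 1 × 1 = 1 and the maximum is 7 × 7 × 7 = 343, consistent with the range quoted above.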

Leader-Member Exchange (LMX)

Leader-Member Exchange (LMX), with its roots in Social Exchange Theory [14], is based on the two-way, dyadic relationship [15] between a leader and an individual subordinate [16]. Each relationship becomes differentiated [15, 17] based on factors affecting the level of interaction, communication, understanding and trust [18] between the two, constraining or facilitating the development of the relationship [19]. The measure of this multidimensional relationship [18] is based on the perceptions of both the leader and the subordinate, so in its study it is vital to view it objectively from both sides [20].

The LMX-7 questionnaire, put forward by Graen and Uhl-Bien [21], assesses the quality of the supervisor’s relationship with each individual team member. The tool is made up of seven items that exemplify different aspects of the leader-subordinate working relationship [17], each measured on a five-point scale.
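As an illustration of how such an instrument might be scored, the following minimal Python sketch sums the seven item scores to give a total between 7 and 35; the summing convention is the commonly applied one, and the function name and data layout here are our own rather than part of the published instrument.

    def score_lmx7(responses):
        """Sum seven LMX-7 item responses, each scored on a 1-5 scale."""
        if len(responses) != 7:
            raise ValueError("LMX-7 has exactly seven items")
        if any(not 1 <= r <= 5 for r in responses):
            raise ValueError("each item uses a five-point scale")
        # Higher totals indicate a higher-quality leader-member relationship.
        return sum(responses)  # possible range: 7 to 35

    # Example: one subordinate's ratings of their working relationship.
    print(score_lmx7([4, 3, 5, 4, 4, 3, 5]))  # prints 28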

The Belbin Team Roles Model

The Belbin team roles model [22] identifies nine potential roles that an individual may exhibit when working in a team, outlining the specific behaviours and skills each role brings to the team dynamic.

The Self Perception Inventory (SPI) [23] comprises seven questions, each of which asks the respondent to distribute ten points among ten different response options. The points can be allocated however the respondent likes, but all ten must be used. Subjects explored include a person’s contribution to a team and what they feel they lack, their approach to tasks and problems, and working with others or in a group. Participants completing the questionnaire indicate their own perceptions of their behaviour in each situation. This can be complemented by the addition of the Observer Assessment [22], where other team members or the team supervisor provide their perceptions of the participant, to give an outsider’s view.
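A simple way to capture and check such an allocation is sketched below in Python; the data layout and function name are illustrative only and not part of Belbin’s instrument.

    def validate_spi_section(allocation):
        """Check one SPI question: ten options, with points summing to ten."""
        if len(allocation) != 10:
            raise ValueError("each question offers ten response options")
        if any(points < 0 for points in allocation):
            raise ValueError("point allocations cannot be negative")
        if sum(allocation) != 10:
            raise ValueError("all ten points must be distributed")
        return True

    # Example: a respondent spreads their ten points across three options.
    validate_spi_section([0, 4, 0, 3, 0, 0, 3, 0, 0, 0])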

Although some researchers question its validity, many support the tool. Fisher et al. [24], for example, suggest that it has made a significant contribution to understanding and recognize that to set the work aside because of such doubts would be a great pity, while Partington and Harris [25] suggest that its value in use is more important than its psychometric validity.

Research Tools – KEYS to Creativity

The KEYS to Creativity instrument (KEYS) was developed by Amabile et al. [26] to meet the need for research in organizational theory and practice, using theoretical knowledge from the literature to create a tool that could be tested in real organizational settings. It looks at creativity within the working environment [27], examining the intrinsic motivation of individuals to be creative and assessing perceived barriers and enablers to creativity [28]. It is said that “the value of KEYS lies in its capacity to accurately identify the conditions necessary for innovation to occur” [29]. Aimed at assessing all levels within an organization, from the shop-floor team to the supervisory and organizational levels, KEYS concentrates on the effects of environmental factors on an individual’s perceptions, which influence the creativity of their work [30].

The KEYS instrument itself is made up of 78 questions assessed using a four-point scale [27], purposely designed to force a response by not offering a neutral option [26]. Of the 78, 66 relate to the work environment, with the remaining 12 assessing performance in terms of creativity and productivity [26]. The work environment factors are split between management practices that encourage creativity and those that inhibit it [29], encompassing: organizational encouragement; supervisory encouragement; work group supports; sufficient resources; challenging work; freedom; organizational impediments; and workload pressure.
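One way to represent this structure for analysis is sketched below; the grouping of organizational impediments and workload pressure as the inhibiting scales follows Amabile et al.’s published framework, while the Python data layout itself is our own illustration.

    # KEYS work-environment scales, grouped by whether the practices they
    # capture are treated as encouraging or inhibiting creativity.
    KEYS_SCALES = {
        "encouraging": [
            "organizational encouragement",
            "supervisory encouragement",
            "work group supports",
            "sufficient resources",
            "challenging work",
            "freedom",
        ],
        "inhibiting": [
            "organizational impediments",
            "workload pressure",
        ],
    }

    # The remaining 12 of the 78 items assess outcomes rather than environment.
    OUTCOME_SCALES = ["creativity", "productivity"]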

As a tool it has been extensively empirically tested [30] and has been shown to be both robust and rigorous [31], demonstrating its validity and reliability [27] through research with more than twelve thousand research subjects [29].

Extant Literature on Developing Questionnaires

The Oxford English Dictionary defines a questionnaire as “a formulated series of questions by which information is sought from a selected group, usually for statistical analysis; a document containing these” [32]; however, some see it as much more. Labaw [33] sees it as not merely a series of questions or just a series of words [34], but as a layered structure, where “it is a totality, a gestalt that is greater than the sum of its individual questions... with each part vital to every other part and all parts must be handled simultaneously to create this whole instrument” [33].

Although there may be more to it, the basics of the dictionary definition also hold true: a questionnaire is a tool for data collection in written format, suitable for large numbers of respondents [1], consisting of a series of attitude or opinion statements and questions developed to elicit responses, which can then be used to measure the variable being studied [5]. Well-designed questionnaires can provide an understanding of the details of an organization’s manufacturing strategy [4] and can aid in driving and measuring the success of organizational change [5]. An ideal questionnaire should be clear, unambiguous and suitable for collecting the data required to test the research question or hypothesis set [5]. In order to meet these requirements it must be designed with the respondents in mind; this will dictate the type of questions, wording and concepts that can be explored [34].

The Process of Development

Before question writing begins there is much work to be done; this starts with knowing the purpose of the research [5] and what you wish to accomplish [34]. Questionnaire construction takes place in stages, beginning with setting objectives [4], clearly defining what will be studied [5] and to what level of detail and accuracy [6]. This should involve reviewing the appropriate literature [1]. Once all of this is known, research questions or hypotheses should be developed [5]. This initial planning phase is likely to take up a third to a half of the total development time of the questionnaire [6].

Another fundamental part of the early stages of questionnaire development, which must run parallel to both the planning and question development stages, is analysis design [5]. A questionnaire must be designed with analysis as an integral part, so that the data collected can be assured to be suitable for analysis [35]. This statistical analysis will allow researchers to study data on individual respondents or questions, but will also facilitate the presentation of results and the testing of hypotheses [35]. It must, however, be remembered that data collection is paramount and no amount of statistical manipulation can make up for poor questionnaire design [4]. With this in mind, the practicalities of questionnaire design must next be examined.

Question Wording

The consideration of question wording receives much attention in the extant literature. While all recommend that it be given careful consideration, some attribute far greater impact to the specific wording of questions than others do. Brigham [35] suggests that wording has a considerable effect on results, a belief supported by Synodinos [4], who found that even small changes in wording can produce response effects. However, this belief is not shared by all. Labaw [33] suggested that wording variations have little impact on the stability of results, a position corroborated by the findings of Gendall [34], who stated that it is possible to ask the same question in different ways with no effect on respondents’ understanding.

Further contention exists over the phrasing of attitude statements. Murray [5] considers that all such statements should be worded positively; however, Gendall [34] found no evidence that positive or negative wording has any influence on response. He did find that the strength of a word had an impact on response, with words such as ‘forbid’ being less acceptable than ‘not allow’ [34].

It is sometimes taken for granted that the respondent reads and understands a question as the researcher intends; unfortunately, this is not always the case [34]. With this in mind, a table of suggestions for questionnaire wording has been compiled, outlining the advice of many sources (Table 1).

In addition to question wording, answer format must be considered. It should provide a clear structure so that respondents know what is required of them [5]. Closed questions offer a fixed choice of answers in several different formats. Yes/No formats are commonly used, but should be limited to avoid guessing [5]. Checklists, where the respondent is asked to tick all that apply [1], can be used when multiple answers may be applicable. In this instance, the use of an ‘other’ box is also recommended in case a possible option has not been thought of [5], although this may not entirely make up for omissions [36]. Category answers are possible [5], as are quantities or bands of figures [1].

Table 1: Advice for wording questions

Rules for wording of questions / Sources
Use closed questions where possible to ensure the context is the same for all / [4, 34, 36]
Questions should be simply worded and structured, unambiguous, focussed and short / [1, 4, 5, 6, 34]
Less than 20 words / [5]
Less than 12 words / [1]
Questions should be clear and precise and not woolly, so all understand and interpret as intended / [4, 5, 6, 34, 36]
Use language appropriate to the target population / [6]
Phrase to the lowest education level of respondents / [4, 5]
Do not patronise or make too elitist / [5]
Do not use jargon, unusual words, acronyms, and abbreviations / [4, 5]
Avoid unfamiliar, difficult words or words that sound similar to others / [34]
Consider whether words have an alternative meaning / [5]
Avoid double negatives / [4, 6]
Double barrelled questions should be separated into single concept questions / [1, 4, 5, 6, 34]
Avoid leading or loaded questions / [1, 5, 6, 34]
Avoid assuming/presuming questions / [1]
Imprecise qualifiers such as ‘frequently’, ‘generally’ and ‘normally’ should be avoided / [1, 6]
Questions should not challenge the respondents’ knowledge, only asking what they are easily able to answer / [4, 6, 34, 36]
Do not ask respondents to think too far back; no more than 6 months / [1, 5]
Hypothetical questions are difficult to answer and should be avoided / [1, 5, 6]
Questions that ask people to predict the future should be used with caution / [4]
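Several of these rules lend themselves to a simple automated screen of draft questions. The Python sketch below is illustrative only; the thresholds and word lists are our own assumptions rather than prescriptions from the sources cited in Table 1.

    # Screen a draft question against a few of the rules in Table 1.
    VAGUE_FREQUENCY_WORDS = {"frequently", "generally", "normally", "usually"}

    def screen_question(question):
        """Return a list of possible wording-rule violations."""
        words = question.lower().rstrip("?").split()
        issues = []
        if len(words) > 20:
            issues.append("longer than 20 words")
        if "and" in words:
            issues.append("possibly double-barrelled; consider splitting")
        if VAGUE_FREQUENCY_WORDS & set(words):
            issues.append("contains an imprecise frequency word")
        return issues

    print(screen_question("Do you generally enjoy your work and trust your supervisor?"))
    # ['possibly double-barrelled; consider splitting',
    #  'contains an imprecise frequency word']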

Although different questions require different styles of response format, it is the attitude or opinion statements that seem to be the most contentious. Ranges of mutually exclusive answers, such as strongly disagree to strongly agree [1, 5], are commonly used to quantify these questions. However, Gendall and Hoek [36] suggest that agree-disagree questions are the most likely to be affected by question wording and that the answer format should therefore be a forced choice. In further work, Gendall [34] reinforces that no mid-point or neutral alternative should be offered, in order to measure intensity of feeling, but states that a ‘no opinion’ option should always be included.
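A forced-choice format of this kind might be encoded as follows; this Python sketch, including its particular labels and numeric codes, is our own illustration rather than a prescribed design.

    # A four-point forced-choice agreement scale with no neutral mid-point,
    # plus a separate 'no opinion' option, as Gendall recommends.
    FORCED_CHOICE_SCALE = {
        "strongly disagree": 1,
        "disagree": 2,
        "agree": 3,
        "strongly agree": 4,
    }

    def code_response(label):
        """Map a chosen label to an analysis code; None means 'no opinion'."""
        if label == "no opinion":
            return None  # recorded separately, excluded from intensity scores
        return FORCED_CHOICE_SCALE[label]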

Question Order

Question order is another issue that must be considered during the design phase of a new tool. Unlike question wording, it is an area in which most authors agree. The first and most fundamental point is to establish whether each question is in fact necessary to the study [5], as the length of the questionnaire should be kept to its optimal minimum. Once questions are deemed necessary, they can be ordered according to the generally accepted advice.

The questionnaire should begin with easy, basic questions that are neither sensitive nor threatening, in order to ease the respondent into the process [15, 34, 37]. Questions should then develop logically [4, 5, 34], be grouped by theme [4, 5, 37] and flow smoothly from one to the next [5]. Questions that are more sensitive or embarrassing should be left until late in the order [1, 6, 34]. Advice is divided on where important questions should be placed: some feel that these questions should come first, as later responses could impact on these issues, while others suggest that important questions should be approached slowly [6]. Similarly, division is found in the positioning of demographic questions. While Synodinos [4] recommends that some screening questions be placed at the end of the introduction section, he considers demographic questions likely to be the most sensitive in the questionnaire, so, in line with the previous advice, these should be positioned at the end. Oppenheim [38] concurs on the positioning, although his reasoning stems more from the desire not to dissipate respondents’ initial enthusiasm by diluting it with questions unrelated to the main topic of the questionnaire. This is in direct opposition to Drummond et al. [39], who found that placing demographic questions first actually increased response rates in a postal survey. Effects of question order on response rates were also reported by Synodinos [4] and Dunn et al. [37], and it was thought that these could be influenced further by the gender of the respondent [39].

Questionnaire Presentation and Formatting

Whilst much of the advice concerning questionnaire presentation is aimed at self-administered tools completed as part of a postal survey, these findings offer lessons for improving the presentation of all questionnaires. Jepson et al. [40] found that the overall length of the questionnaire had a direct impact on response rates, with a response rate of 60% for a questionnaire of 849 words but only 16.7% when the length was increased to 1,800 words, concluding that there is an acceptable threshold for questionnaire length. Although response rate is not an issue in organizational studies with full participation, the findings on questionnaire length may help to ensure that focus can be maintained by those completing the questionnaire.

Two things are likely to create an immediate impression on respondents, which makes them vitally important. The first is the introduction, which is needed to build rapport with the respondent [5]; it sets the scene and can build interest in completing the questionnaire. The second is the graphic design of the tool itself [34], which has the potential either to arouse interest or to discourage respondents from taking the time to complete it [5].