Running head: CRITIQUE OF HAYES, HOFFMAN, & NYDAM (AUTHOR: KUTCH)
Assignment 3: Critique of Hayes, Hoffman, & Nydam
Michael Kutch
New Jersey City University
EDTC 809: Assessment and Evaluation
Dr. Zieger
Content Learned
The pilot study designed by Hoffman, Hayes, and Nydam (2014) clearly indicates that (1) all teachers use technology to some degree in their classrooms, (2) all teachers desire professional development regarding the effective use of this technology, and (3) teachers in the secondary grades are more confident in their use of technology than those in the primary grades. This pilot study was interesting because it confirmed support for quality professional development regarding technology use in schools. Further, the third conclusion (that secondary teachers are more confident in their use of technology) suggests a possible way to meet the professional development need expressed by all teachers: the research by Hoffman et al. (2014) implies that secondary school teachers could be appropriate designers of technology-related professional development for primary-level teachers in school districts.
Research Design & Enhancements
A quantitative, survey-based research design was an appropriate way for Hoffman et al. to quickly gather information addressing the ten research questions posed in the study. Survey research lends itself to the rapid acquisition of large quantities of data designed to address research questions (Gay, Mills, & Airasian, 2009; Salkind, 2013). Further, the pilot study clearly followed the logical progression of a well-designed research proposal. The authors chose a useful topic (educator professional development) and cited research that funneled the reader down to an area of interest with little previous research: the differing needs of teachers across grade levels. Creswell (2014) explains the deficiencies model of an introduction as “an approach to writing an introduction…that builds on gaps existing in the literature” (p. 111). Hoffman et al. successfully built a pilot study on such a gap: much research has established distinctions between professional development across instructional disciplines, but a deficiency exists in distinguishing the professional development needs of teachers across grade levels.
Out of this clear presentation of the topic of study and research problem, the authors stated ten research questions. These research questions, although worded in a form more appropriate for the actual survey instrument than for the logical progression of a research proposal, were a natural extension of the problem statement. This researcher recommends condensing these ten questions into two to four questions that align more closely with the research hypothesis “that teachers at all levels have the same values and skills when it comes to educational technology and therefore the same professional development needs” (Hoffman et al., 2014, p. 3). The survey data were then analyzed to draw conclusions that support this hypothesis. Specifically, “educators across grade levels have similar overall usage patterns for in-classroom instructional technology but online technology use grows as grade level increases” and “K-12 educators appreciate small-group formal professional development while post-secondary educators would prefer a greater variety of professional development delivery options and formats” (Hoffman et al., 2014, p. 21).
An analysis of reliability would have further enhanced this study. Reliability is a prerequisite to validity; a study cannot be valid unless it is first reliable (Creswell, 2014; Drew, Hardman, & Hart, 1996; Gay, Mills, & Airasian, 2009; Salkind, 2013). Although Hoffman et al. state, “Changing the instrument could lead to issues with validity, however the actual survey questions needed little, if any, revision which is why this instrument was deemed appropriate for our pilot study” (p. 5), this validity remains in question until the reliability of the instrument is verified numerically.
The reliability of survey instruments can be assessed using the split-half reliability test or Cronbach’s alpha. Assuming that all of the items in a test (or a section of a test) measure the same construct (variable), internal reliability can be gauged by dividing the test (or section) into two halves and checking the correlation between the two halves. This technique, known as split-half reliability, would not be appropriate for the instrument created by Hoffman et al. because their survey was designed to address ten different research questions; thus, their instrument does not meet the requirement that all items measure the same construct. Cronbach’s alpha, on the other hand, would be an appropriate measure of reliability, although it could only be applied if there were multiple survey items addressing the same construct (Salkind, 2013). Although not specifically computed, the authors do suggest some degree of internal reliability when they state that “by cross-referencing this [100% of the college level faculty…said that they used a computer projector and/or SMART Board to promote… innovative thinking] with the ‘comfort level with technology’ data for the same group it became apparent that the 29% of faculty who actually responded to the survey were most likely in the first three categories of adoption: ‘innovators’, ‘early adopters’ or ‘early majority’” (p. 20). This statement correlates a response from one section of the survey instrument with a response from a different section, a technique that is the very essence of reliability testing and something Hoffman et al. could do to test the reliability of data collected for other constructs.
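For reference, the two statistics discussed above take the following standard forms (the notation here is generic and is not drawn from Hoffman et al.’s instrument or data). The split-half correlation between the two halves of a test, $r_{hh}$, is typically stepped up to full-test length with the Spearman-Brown formula, and Cronbach’s alpha generalizes this idea to all $k$ items measuring a single construct:

$$
r_{SB} = \frac{2\,r_{hh}}{1 + r_{hh}}, \qquad
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of respondents’ total scores. Values of $\alpha$ near 1 indicate high internal consistency; a common, if rough, benchmark for acceptable reliability is $\alpha \geq .70$.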
Contributions and Further Study
This study contributed survey information regarding the distinctions between professional development offered across different grade levels, from K-12 through the junior college level. Gay et al. (2009) and Salkind (2013) argue that the information gathered as a result of survey research often forms the foundation for further ex-post-facto research (such as correlational and causal-comparative studies) that identifies the relationship between interesting variables. These relationships, in turn, lead to the development of theories that explain the relationships which are further tested in future studies (Creswell, 2014). This researcher recommends modifying the survey instrument to include more demographic information about the participants to facilitate the development relational studies that establish correlations between variables or develop prediction equations using regression and/or factor analysis. In addition, this author recommends defining a population from which to gather a representative sample to draw conclusions using the redeveloped survey instrument. Perhaps there are further distinctions between the professional development offered between grade levels at schools located in communities of differing socio-economic status, ethnic background, etc.
References
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods
approaches (4th ed.). Los Angeles, CA: Sage.
Drew, C. J., Hardman, M. L., & Hart, A. W. (1996). Designing and conducting research: Inquiry
in education and social science (2nd ed.). Needham Heights, MA: Allyn and Bacon.
Gay, L. R., Mills, G. E., & Airasian, P. (2009). Educational research: Competencies for analysis
and applications (9th ed.). Upper Saddle River, NJ: Pearson.
Kurpius, S., & Stafford, M. (2006). Testing and measurement: A user-friendly guide. Los Angeles,
CA: Sage.
Roberts, C. M. (2010). The dissertation journey: A practical and comprehensive guide to
planning, writing, and defending your dissertation. Thousand Oaks, CA: Corwin.
Salkind, N. J. (2013). Tests & measurements for people who [think they] hate tests &
measurements (2nd ed.). Los Angeles, CA: Sage.