After reading the Haladyna et al. article, as well as Popham's chapters and Kehoe's article, what advice would you give new item writers or classroom test developers? If you have the chance, engage some colleagues in a discussion about some of the things you have learned from these materials. Then share your discussions with the rest of the class.
Reply by sharon anderson jones on February 24, 2010 at 11:12pm
Most teachers have many questions about constructing tests because they have had very little training or information on the task. Multiple-choice construction involves creating the stem (the introductory question) and the options (the answer choices). Valid tests cover the material that was taught and the important concepts. When writing the stem, you should identify one problem or concept and then provide several answer options. Wording must be chosen carefully, with more information in the stem and less in the options. A good stem does not contain negative wording; negatives can produce unwanted bias. Clues that are irrelevant to the stem should not be included.
When writing options, use three or four well-constructed choices, and use distractors of similar length and complexity. When possible, have someone with good knowledge of the content review the items. According to Haladyna and Downing (2002), research suggests that item-writing formats for higher-level learning have shifted along with changes in classroom assessment. Teachers should use a variety of approaches to assess knowledge, skills, and abilities. By using performance testing, teachers can test more complex skills. The multiple-choice (MC) format continues to be used for large-scale assessment. Studies show that teachers have a difficult time assessing complex abilities such as problem solving and writing. Textbooks support the use of the MC format.
Reply by Brenda Little on February 28, 2010 at 8:18pm
Sharon: Teachers should use a variety of approaches to testing, but I bet many teachers use the formats they are most successful with or the formats that are fastest to grade.
------
Reply by Sara Gilliam Crater on February 25, 2010 at 10:32am
One piece of advice I would give to classroom teachers is to write down the main points of each lesson after class and construct the assessment based on those points. I would also tell classroom test developers to use a wide array of assessment question types. Popham differed from the Haladyna, Downing, and Rodriguez article on one point: Popham stated that you should have at least four or five distracters in a multiple-choice item to cut down on student guessing, while the article said that three distracters would be sufficient.
The suggestion that three distracters might be sufficient comes from an earlier study by Frederic Lord, who found that in many tests developed by novices, the fourth distracter was really a throw-away. If four or five GOOD distracters can be written, then MC items should have more than three.
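The guessing argument behind this disagreement is easy to quantify. A minimal sketch (the function name and test length here are my own illustration, not from the readings): with blind guessing, the expected number of items answered correctly is simply the number of items divided by the number of *functioning* options per item.

```python
# Expected chance-level score if a student blindly guesses on every item,
# as a function of how many options per item actually function.
def expected_guess_score(n_items, n_options):
    """Expected number correct from pure guessing."""
    return n_items / n_options

# On a hypothetical 40-item test:
for k in (3, 4, 5):
    print(f"{k} functioning options -> expected guess score {expected_guess_score(40, k):.1f}")
```

This is why a throw-away fourth distracter buys nothing: if only three options really function, guessing performance is the three-option rate regardless of how many choices appear on the page.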
My colleagues said that they use multiple choice only because they have to get students ready for the EOC tests. If time allowed, they would prefer to use short-answer or essay work so that students could articulate their progress. Teachers want to incorporate a more holistic approach into their methods of assessment; holistic approaches allow students to show their cumulative knowledge and their understanding of the big-picture ideas learned throughout the course.
Reply by Kathy Courtemanche 1 day ago
I also liked the advice of jotting down a note or two related to the current lesson to help in the development of assessment questions. I think it would lead to a more authentic assessment of the content covered.
Reply by Tiffany Smith on March 1, 2010 at 9:52am
This is an important tip that would be very beneficial for writing items. If the teacher jots a few notes at the end of each class, this will ensure an accurate picture of what has been taught and what needs to be assessed. If the teacher does not have these notes, it makes writing the exam at the end of the unit very difficult.
Yes, but in doing so teachers need to be mindful of the need for items that measure higher-level cognitive skills.
------
Reply by Sharon Creasy on February 25, 2010 at 9:22pm
My initial advice would be to follow good writing practice: share your writing for feedback. Getting various perspectives on content, format, bias, and clarity of questions is imperative for sound, accurate, and valid test items. Next is the number and quality of distracters; apparently this can influence a student's performance, so spending time on this facet of a question matters.
One area of discussion among fellow educators was the use of the complex multiple-choice format. Haladyna did not recommend this format. We talked about how frequently it is used on state standardized tests, which would suggest to teachers that it is an acceptable, perhaps even preferred, format. We discussed how much we ourselves dislike the format when we take a test; it frequently adds unnecessary confusion and a level of difficulty unrelated to the content or concept being tested.
Most testing experts would agree with Haladyna and Downing that complex MC items should not be used. Standard formats work just fine.
Another common response was how difficult it is to develop a strong multiple-choice test and that it can take a few rounds to weed out or revise poor questions. Using test-analysis software really helps with this. Some teachers ask others to review their tests, but just as many do not; moving from autonomous to collaborative test-item creation is a work in progress. Most agreed that multiple-choice testing is a staple of assessment, but most had not had any formal training in test-item writing.
Reply by Brenda Little on February 28, 2010 at 8:24pm
Sharon: Allowing other peers to review developed tests opens a teacher to a level of vulnerability that I think most people are not comfortable with. We need to help teachers become more comfortable in using peer review.
Reply by Tammy Essic on February 28, 2010 at 9:28pm
I like the idea of having peers review tests, but you're right about most teachers being uncomfortable doing it. Teachers (and administrators, too!) have to learn that collaboration can help remedy some of the time issues surrounding education. We have to learn to help each other and ourselves, too!
I like this idea, too.
------
Reply by Tammy Essic on February 25, 2010 at 9:25pm
Writing quality test items requires much more time than many teachers realize. I liked the suggestion by Popham (2006) to spend ten minutes at the end of each class session to write down the most important concepts covered during that lesson. In addition to spending time planning the test, teachers should also allow time for their colleagues to review test items. Of course, this colleague review is only effective if proper training has been conducted on item writing. Proper training should include guidelines for writing various types of test items, including selected-response and constructed-response items.
I discussed with some of my colleagues the information I learned from my readings, and overwhelmingly they agreed that they give more multiple-choice tests than any other type. Many of them also admitted they normally give commercially-made tests and modify them as they see the need. Several teachers commented they would like to develop their own tests and to include more constructed-response items, but they felt that if they had to choose between planning quality instruction and developing assessments, they would choose planning. The teachers at my school feel very overwhelmed this year, and item writing is not a top priority.
I wonder how they would react to Popham’s suggestion that the assessment exercises be written first, and that instruction be planned second.
Because the multiple-choice format seems to be the most widely used, I would like item writers to be aware of some of the basic guidelines that appeared in our readings this week. The question stems should contain most of the item's content and should be written in a positive manner. All alternatives should be similar in length and grammatically compatible with the stem. Vocabulary should be simple, and reading should be minimized to avoid construct-irrelevant variance. Clear directions should be given, and trick items should be avoided. "All of the above" should never be used as an answer choice, and "none of the above" should only be used by experienced test-item writers to make an item more difficult. Common errors made by students can be used to develop plausible distractors.
Reply by Sara Gilliam Crater 19 hours ago
I talked to my co-workers about sharing their tests with the department to get feedback, but their main concern was time. They said that making a good test was hard enough, and getting people to look it over was time they didn't have. I see their point, but my thinking is that they could share the responsibility and overcome their obstacles together!
Perhaps teachers of the same subjects could collaborate on constructing common tests. There are all kinds of ancillary benefits that could accrue from this.
------
Reply by Pamela Cowell 1 day ago
Haladyna, Popham and Kehoe pretty much stated the same facts when it comes to writing good questions. Construct the stem as a question or incomplete statement. Include the majority of the information in the stem and not the options. Use three or four options and construct distracters that are comparable in length. Options such as “none of the above” and “all of the above” should be omitted. Randomize the location of the answers and avoid trick items.
The vocabulary should be appropriate for the students being tested. Construct-irrelevant variance occurs when reading comprehension becomes an issue on a test intended to measure something else. Haladyna et al. state that tests should be vertically formatted because that layout is easier to read. One suggestion I found extremely interesting and helpful came from the Kehoe article: when planning a test, take 10 minutes after every class to write down the important concepts covered. By doing this, the concepts taught are still fresh and will be a great help when constructing the test.
My colleagues overwhelmingly use the multiple-choice format when it comes to tests. It is easier to grade (with ScanTron you just run the answer sheets through a machine, and the machine grades them), and it is the format used on the end-of-grade tests. One of my colleagues stated that "time restraints and the other demands of the job" keep her from giving "many" essay or constructed-response assessments, especially when she has 100+ students. She also went on to say that writing good multiple-choice stems is not an easy task either. I feel this is the main reason many teachers use "ready-made" tests: they have not been taught how to write good test questions. It is not an easy task.
I wonder if the teachers who use ScanTron ever bother to do any kind of ITEM ANALYSIS. One thing that can be very helpful is a distracter analysis. Good distracters should be selected often by students who score poorly on tests.
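A distracter analysis of the kind mentioned above can be done with nothing more than a spreadsheet or a few lines of code. Here is a minimal sketch (the function name, student responses, and scores are hypothetical illustrations, not from the readings): for one item, compare how often high-scoring and low-scoring students chose each option. A good distracter is picked mostly by low scorers; an option almost no one picks, or one that high scorers favor, deserves revision.

```python
def distracter_analysis(responses, scores, correct):
    """For a single item, count option choices in the top and bottom halves
    of the total-score distribution.

    responses: each student's chosen option for this item
    scores:    each student's total test score (same order)
    correct:   the keyed answer
    """
    # Pair each response with its student's total score, sorted low to high.
    ranked = sorted(zip(scores, responses), key=lambda pair: pair[0])
    half = len(ranked) // 2
    low_group, high_group = ranked[:half], ranked[-half:]

    table = {}
    for opt in sorted(set(responses)):
        table[opt] = {
            "low": sum(1 for _, r in low_group if r == opt),
            "high": sum(1 for _, r in high_group if r == opt),
            "key": opt == correct,
        }
    return table

# Hypothetical item keyed "B". Option "D" attracts only one (weak) student,
# which flags it as a possible throw-away distracter.
responses = ["B", "B", "A", "D", "C", "B", "A", "C", "B", "B"]
scores = [38, 35, 20, 12, 18, 33, 15, 22, 30, 36]
for opt, row in distracter_analysis(responses, scores, "B").items():
    print(opt, row)
```

In this made-up data the key "B" is chosen only by the top half, while "A" and "C" draw low scorers, which is the pattern you want; "D" barely functions.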
Reply by Sharon Creasy 19 hours ago
Time is the nemesis!
Reply by Rebecca Mankins on February 28, 2010 at 7:45pm
I agree that often teachers use the "ready made" tests because it is easier and less time consuming. With all of their other obligations, who can blame them?
Reply by Brenda Little on February 28, 2010 at 8:28pm
Pam: I agree - writing test questions is not an easy task! Therefore, we rely on tests developed by textbook companies because surely they know what they are doing! But, is that always the case?
------
Reply by Rebecca Mankins 1 day ago
The number one piece of advice that I have for item writers or classroom test developers is a guideline discussed by Haladyna et al. (2002): avoid writing tricky items. What is the purpose of trying to write a tricky item? How is that assessment for learning? An assessment should reflect what the student has learned, not how carefully they can read a test item. I encourage all test writers to analyze the way they write test items. Items should not be written so as to confuse the test taker, but rather to allow the test taker to think about what they learned and respond accordingly.
I couldn’t agree more with you, Rebecca. The purpose of tests is NOT to determine how clever students are at discovering tricks in items.
I agree with another tip mentioned by Haladyna et al. (2002), which is to write test items in a positive manner. As a test taker, I find that a question turns into a trick question when it contains too many negatives. An item with a negative term becomes unclear and is typically not a good test item. I feel that test writers resort to negative terms as a way of writing trick items.
Finally, item writers and classroom test developers should put themselves in the place of the test taker. They should analyze the questions they write and evaluate whether or not each test item is one they would want on a test that they might take. If the answer is no, then they should rewrite the test item. Writing a test is not an easy task. An item writer should remember that writing a test provides the test taker with an opportunity to display what they have learned, rather than to deceive the test taker.
Reply by Sara Gilliam Crater 19 hours ago
I like the idea of putting the test developer in the place of the test taker. What a brilliant idea! :) Negative questions usually tricked me, too, or at least confused me.