CNCS SCVC Effective Budgeting RSVP Worksheet

This guide is not an exhaustive list of corrections to the activities presented. Rather, it illustrates how one might approach answering the questions posed in order to evaluate whether the situations presented will result in a data collection process that gathers high-quality data.

Activity A – K-12 Academic Success

Work Plan

Q: Does the community need statement make a clear connection to the output and outcome?

A: The community need statement is missing a citation for the claim that children from low-income families are less likely to reach proficiency in reading or math. The description of what volunteers will be doing focuses on emotional support rather than academic support and does not tie to the outcomes.

Q: The service activity description should include who the beneficiaries are, what the volunteers will be doing with the beneficiaries, how often volunteers will provide the service, for how long, and where the service will take place. Does the service activity contain all of the required information? If not, what is missing?

A: The following information is missing from the service activity description: number of volunteers, number of children served, number of hours per week the service will be provided, length of time the service will be provided, and a clear description of the activity that will result in the stated outcome. It appears the goal is “a safer environment” rather than improved academics. Therefore, given that the frequency and actual service are unknown, it cannot be determined if the activity would produce the outcomes selected.

Q: Does the description of how often volunteers will provide the service and for how long seem sufficient to claim that the volunteer contributed to the change or improvement?

A: No information is provided about how often volunteers will provide the service or for how long. Projects should review the Performance Measurement Instructions for CNCS guidance on how much service may be required for each measure as some measures are more prescribed than others.

Q: The output instrument description should give the name of the instrument, briefly describe who will collect the data, from whom the data will be collected, and when it will be collected. Does the output instrument description contain all of the required information? If not, what is missing?

A: Counting the number of children is not an instrument. The following information is missing: the name of the instrument and when the data will be collected. The data being collected, according to the instrument description, does not inform the outcome, so the entire instrument description should be rewritten. There is also no description of which children qualify to be counted.

Q: Does the outcome instrument description contain all of the required information? If not, what is missing?

A: The instrument description describes measuring the volunteers’ performance rather than the students’ performance. Assessment of student performance needs to go beyond a teacher’s statement of “yes” or “no”; a pre- and post-evaluation is needed to determine academic improvement. The following information is missing: the name of the instrument and who will collect the data. While the description states the data will be tabulated at the end of the year, it is unclear when the data will be collected.

Scenario

Q: Will the activity described in the scenario support the output and outcome being measured? If not, why not?

A: The output (number of students completing a CNCS-supported education program) will likely be invalid because the majority of the children are being counted, not just those initially identified as needing academic help and assigned to the volunteer. The outcome is also likely invalid: the measure (number of students with improved academic performance in literacy and/or math) requires that the specific children identified have received a minimum amount of service. It also requires that improved academic performance be measurable and based on actual performance, not simply on a teacher’s impression.

Q: Will the data collected under this scenario be valid? Will it be complete?

A: The data will not be valid because the activity does not support the output and outcome measures, and the method used to inform the outcome, which relies on teacher judgment, does not clearly measure academic improvement. The data will not be complete because only some teachers receive and complete the surveys. No instructions are given to the teachers on completing the survey, which leaves the survey open to different interpretations and therefore produces invalid results.

Q: Does the scenario describe a data collection process that is consistent? Accurate? Verified?

A: Data collection is inconsistent because not all teachers complete the survey. Data collection is not accurate because there is no clear plan or procedure followed in the collection and review of the data, and there are no instructions for those issuing the survey or those taking it. Data collection is not verified because the project director does not review the numbers tabulated by the coordinator.

Q: Does the scenario describe an effective method of data collection? What works well? What could be improved?

A: What works well is that the project collects written data twice a year; these surveys could be used to collect data on changes in academic performance.

What could be improved is:

- service activity (volunteers need to provide services to, and properly count, individual children who receive a minimum amount of service, rather than the entire classroom).

- instrument (require specific measures of student performance, such as test scores, rather than allowing teachers to provide subjective data, such as “improved somewhat”). Projects should also review the Performance Measurement Instructions for CNCS guidance on How to Collect/Measure Data, as some measures are more prescribed than others.

- data collection method (train those issuing and responding to the survey, ensure all teachers complete the surveys, and ensure the project director verifies the data once it is tabulated by the coordinator).

The data collection process should be formalized. Instructions should be developed and given to those administering the surveys (handing them out and collecting them) as well as to those providing the information in the surveys. Such instructions help prevent bias, incomplete answers, and misunderstandings when answering, and they establish standard policies and procedures for data collection and reporting.

Instrument Review

Q: HOW - What method is used to collect data?

A: Survey.

Q: WHERE - What is the source of the data?

A: Teachers complete the survey.

Q: WHEN – What is the schedule for data collection?

A: The survey is collected at the end of the school year.

Q: WHAT - Will the instrument collect data that is valid?

A: The data will not be valid because the instrument does not adequately measure the outcome ED5 (number of students with improved academic performance in literacy and/or math). To measure the outcome, the instrument would need to capture specific measures of student performance, such as “Reading Comprehension” and “Reading Fluency,” collected as objective data (for example, test scores) rather than as teachers’ subjective ratings such as “improved somewhat.”
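
As an illustration only (not part of the worksheet), the short Python sketch below shows how objective pre- and post-test scores could be compared to count students toward the outcome. The student names, field names, and scores are hypothetical assumptions.

    # Hypothetical pre/post reading scores for the students assigned to volunteers.
    # Names, fields, and values are invented for illustration only.
    students = [
        {"name": "Student A", "pre_reading": 62, "post_reading": 78},
        {"name": "Student B", "pre_reading": 70, "post_reading": 69},
        {"name": "Student C", "pre_reading": 55, "post_reading": 71},
    ]

    # Count a student toward the outcome only if the post-test score is higher
    # than the pre-test score (objective improvement, not a teacher's impression).
    improved = sum(1 for s in students if s["post_reading"] > s["pre_reading"])
    print(f"{improved} of {len(students)} students improved in reading")

In this hypothetical example, the count reported for the outcome would be 2 of 3 students.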

Q: WHAT - Will the instrument collect data that is complete?

A: Completeness indicates whether there is enough information to draw a conclusion from the data and whether enough individuals responded to ensure representativeness. The completeness of the data will be determined by the number of evaluations the program collects. Because the data collection process is not clearly documented, we cannot be certain that the data collected will be complete.

In addition, the instrument does not collect complete data because it relies only on subjective measures, such as “moderate to significant progress,” rather than formal measures or grades. Also, no system exists to address surveys that are not completed, such as requiring a minimum percentage of surveys to be returned in order to consider the results valid; see the sketch below.
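
As an illustration only, the Python sketch below shows how a project might check a minimum return rate before treating its data as complete. The survey counts and the 80 percent threshold are hypothetical assumptions, not CNCS requirements.

    # Hypothetical survey counts and threshold, for illustration only.
    surveys_distributed = 25
    surveys_returned = 18
    minimum_return_rate = 0.80  # assumed project-defined threshold

    return_rate = surveys_returned / surveys_distributed
    print(f"Return rate: {return_rate:.0%}")

    if return_rate < minimum_return_rate:
        print("Below the assumed minimum; follow up on missing surveys before reporting.")
    else:
        print("Return rate meets the assumed completeness threshold.")

In this hypothetical example the return rate is 72 percent, so the project would follow up with teachers who have not returned surveys before reporting results.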

Activity B – School Readiness

Work Plan

Q: Does the community needs statement make a clear connection to the output and outcome?

A: The community needs statement connects to the need of school readiness and cites information to support that need.

Q: The service activity description should include who the beneficiaries are, what the volunteers will be doing with the beneficiaries, how often volunteers will provide the service, for how long, and where the service will take place. Does the service activity contain all of the required information? If not, what is missing?

A: The service activity includes number of volunteers, number of children served, number of hours per week service will be provided, length of time service will be provided, and a clear description of the activity that will result in the stated outcome.

Q: Does the description of how often volunteers will provide the service and for how long seem sufficient to claim that the volunteer contributed to the change or improvement?

A: Yes. Foster Grandparents will serve twenty hours per week, Monday through Thursday, for the full school year, assisting two or more children in Head Start classrooms on a one-on-one basis. When developing service activity descriptions, projects should review the Performance Measurement Instructions for CNCS guidance on how much service may be required for each measure, as some measures are more prescribed than others.

Q: Does the output instrument description contain all of the required information? If not, what is missing?

A: The output instrument description names the instrument, defines when the data will be collected, and lists who will collect it. The station representative is identified as the person responsible for preparing the Assignment Plan and Evaluation Form. The instrument description should also specify that teachers complete the pre- and post-surveys, as the term “station representative” may refer to the project’s point of contact at the school.

Q: Does the outcome instrument description contain all of the required information? If not, what is missing?

A: The outcome instrument description includes when the data will be collected and who will collect it. The description, though, does not clearly explain the process. The same instrument, the Senior Corps Child Assignment Plan and Evaluation for Preschool and Head Start, is used for both the pre- and post-survey, but the description identifies the instrument as the “Post-Evaluation” form, which may lead one to believe that two distinct forms are in use. The instrument description should also specify that teachers complete the pre- and post-surveys, as the term “station representative” may refer to the project’s point of contact at the school.

Scenario

Q: Will the data collection method described in the scenario support the output and outcome being measured? If not, why not?

A: Because the volunteers serve individual students with identified needs, and teachers count those students (for the output) and assess criteria for improvement (for the outcome), the method described will support the output and outcome measures.

Q: Will the data collected under this scenario be valid? Will it be complete?

A: The data will be valid because the activity supports the output and outcome measures. Whether the data will be complete depends on the number of surveys returned; because the data collection process is not clearly documented, completeness cannot be guaranteed.

Q: Does the scenario describe a data collection process that is consistent? Accurate? Verified?

A: Data collection is consistent, as teachers at all schools use the same survey and complete it at the same points during the year. It is unclear whether data collection is accurate or verified: while a procedure is established, the project has not outlined how it handles the surveys once they are returned. Projects should review the Performance Measurement Instructions for CNCS guidance on How to Collect/Measure Data, as some measures are more prescribed than others.

Q: Does the scenario describe an effective method of data collection? What works well? What could be improved?

A: The scenario is unclear in a number of areas relating to data collection. It is unclear whether the project ensures that all post-surveys are returned, and how the surveys are managed once they are returned. Does a specific staff person compile the data in a database for reporting? Are results tallied by hand when PPRs are due? The project may introduce human error if it does not follow a defined process for managing the data; it does not have a clearly documented data collection process.

What works well: the project collects written data twice a year; teachers evaluate student performance; the data collection tool includes verification that the student has finished the school year. What could be improved: it is unclear if the minimum amount of time spent with children is verified in the data collection process; there is not a clear data collection procedure for all (teachers, volunteers, project staff) to follow.

The data collection process should be formalized. Instructions should be developed and given to those administering the surveys (handing them out and collecting them) as well as to those providing the information in the surveys. Such instructions help prevent bias, incomplete answers, and misunderstandings when answering, and they establish standard policies and procedures for data collection and reporting.

Instrument Review

Q: HOW - What method is used to collect data?

A: A written evaluation.

Q: WHERE - What is the source of the data?

A: Teachers evaluate students at the beginning and end of the year and complete the surveys.

Q: WHEN – What is the schedule for data collection?

A: At the beginning and end of the school year; dates are specified on the instrument.

Q: WHAT - Will the instrument collect data that is valid?

A: The instrument collects information that will be valid. The data in the instrument is relevant, consistent with the goals of the program, and measures what the program intends to measure.

Q: WHAT - Will the instrument collect data that is complete?

A: Completeness indicates whether there is enough information to draw a conclusion from the data and whether enough individuals responded to ensure representativeness. The completeness of the data will be determined by the number of evaluations the program collects. Because the data collection process is not clearly documented, we cannot be certain that the data collected will be complete.