IDEA: Special Education Technical Assistance and Dissemination (OSERS)
FY 2010 Program Performance Report (System Print Out)
Strategic Goal 1
Discretionary
IDEA, Part D-2, Section 663
Document Year: 2010 Appropriation: $
CFDA / 84.326: Special Education - Technical Assistance and Dissemination to Improve Services and Results for Children with Disabilities
Program Goal: / To assist states and their partners in systems improvement through the integration of scientifically based practices.
Objective 1 of 3: / States and other recipients of Special Education Technical Assistance and Dissemination program services will implement scientifically- or evidence-based practices for infants, toddlers, children, and youth with disabilities. (Long-term objective. Target areas: assessment; literacy; behavior; instructional strategies; early intervention; and inclusive practices)
Measure 1.1 of 1: The percentage of school districts and service agencies receiving Special Education Technical Assistance and Dissemination services regarding scientifically- or evidence-based practices for infants, toddlers, children, and youth with disabilities that implement those practices. (Desired direction: increase)
Year / Target / Actual (or date expected) / Status
2009 / 79 / Measure not in place
2010 / 0 / Measure not in place
2012 / Maintain a Baseline / (October 2012) / Pending
2014 / Maintain a Baseline / (October 2014) / Pending

Frequency of Data Collection: Biennial

Objective 2 of 3: / Improve the quality of Special Education Technical Assistance and Dissemination projects.
Measure 2.1 of 4: The percentage of Technical Assistance and Dissemination products and services deemed to be of high quality by an independent review panel of experts qualified to review the substantive content of the products and services. (Desired direction: increase)
Year / Target / Actual (or date expected) / Status
2005 / 56 / Measure not in place
2006 / 74 / Measure not in place
2007 / Set a Baseline / 74.3 / Target Met
2008 / 75 / 80 / Target Exceeded
2009 / 77 / 94.2 / Target Exceeded
2010 / 80 / 87.4 / Target Exceeded

Source. Description of Expert Panel Review – OSEP provides an outside contractor with a list of 15-20 potential Science Expert Panel members who are nationally known experts in special education research, policy, and/or practice. The contractor randomly selects, contacts, and secures the review services of six to eight panel members, ensuring that no panelist is currently employed by an OSEP-funded program, grant, or project.
The Science Expert Panel reviews a randomly selected sample of products and services made available through OSEP-funded Technical Assistance and Dissemination Centers (84.326 grantees) and Deaf-Blind Projects (84.326C grantees). The grantees selected to provide data for this measure supply copies of one product and one service along with completed Product and Service Description forms.
The program office defines a “product” as “a piece of work, in text or electronic form, developed and disseminated by an OSEP-funded project to inform a specific audience on a topic relevant to the improvement of outcomes for children with disabilities.” A “service” is defined as “work performed by an OSEP-funded project to provide information to a specific audience relevant to the improvement of outcomes for children with disabilities.”
Projects randomly selected to provide data for this measure supply a copy of one product and one service for each panel member, along with a Product and Service Description form for each submitted product or service. The purpose of the form is to standardize the information used in the panel review. Information categories contained in the forms include: product or service name, target audience, alignment with program office target investment area (assessment; literacy; behavior; instructional strategies; early intervention; secondary transition; and inclusive practices), classification of the product/service as evidence- or policy-based, and a description of the research basis, if any, for the product or service.
Panelists independently assess the quality of each product and service according to criteria described below in the Explanation section.

Data Quality. This measure applies to 84.326C grants (52 grants funded in FY 2009) and 84.326 grants (26 grants funded in FY 2009). 84.326M grants, which are also funded under the Special Education Technical Assistance and Dissemination Program, are not included in this measure. This year, one of the 26 84.326 grants was in its first year of funding with no new products and services, and so was omitted from the sampling frame.
The data for this measure are derived from a sample of 84.326 projects funded during the prior fiscal year, selected as follows.
The sample is selected by an outside contractor according to three parameters established by the program office: (1) a 20% stratified random sample of 84.326C grants, (2) a 50% random sample of all other 84.326 grants, plus (3) a purposive sample of additional 84.326 grants to ensure that the sample addresses all program office target investment areas (i.e., assessment, literacy, behavior, instructional strategies, early intervention, secondary transition, and inclusive practices).
The contractor selected 10 84.326C grants by first randomly selecting four of the six IDEA Part D Regional Resource Center (RRC) regions, randomly selecting two grants from within each of these regions, and then randomly selecting one 84.326C grant from each of the two remaining RRC regions. The contractor then chose 13 84.326 grants at random and purposely included three 84.326 grants in the sample to represent the program office’s target investment areas not addressed by the randomly selected grants.
The total number of grants in the 2010 sample for this measure was 26. Grants in the sample were representative with respect to the 84.326 and 84.326C target investment areas, as well as representing a range of funding levels, target audiences, and disabilities addressed.
The sample of new products and services developed by selected grantees is selected in the following way: The contractor requests a list of new products and services from each grantee, and randomly selects one new product and one new service from each grantee list. The contractor then requests that the grantees send copies of the selected products and services for review, along with a New Product or New Service Description form.
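The product/service selection step can be illustrated compactly. The sketch below is a hypothetical rendering of the procedure just described; the grantee names and catalog contents are invented, and only the one-product-plus-one-service random draw per grantee reflects the text.

```python
import random

def sample_products_and_services(grantee_catalogs, seed=None):
    """For each sampled grantee, randomly draw one new product and one new
    service from the lists the grantee supplies. Hypothetical sketch of the
    selection step described above; the data layout is illustrative."""
    rng = random.Random(seed)
    selections = {}
    for grantee, catalog in grantee_catalogs.items():
        selections[grantee] = {
            "product": rng.choice(catalog["products"]),
            "service": rng.choice(catalog["services"]),
        }
    return selections

# Invented example input: each grantee's list of new products and services.
catalogs = {
    "Grant A (84.326)": {"products": ["practice brief", "toolkit"],
                         "services": ["webinar series"]},
    "Grant B (84.326C)": {"products": ["family guide"],
                          "services": ["site visit", "regional training"]},
}
print(sample_products_and_services(catalogs, seed=2010))
```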
The contractor provides copies of each item to the Science Expert Panel, along with instructions for accessing, reviewing, and assessing the quality of each product and service. Panelists rate the products and services using the Quality Assessment Scoring Guide described below.
The AM-ICC inter-rater reliability score for the TA&D product and service quality review was 0.936, representing highly consistent results across the reviewers. [Note: values of AM-ICC from 0.40 to 0.50 are considered moderate, 0.51 to 0.70 are considered substantial, and 0.71 or higher are considered outstanding (Landis, J. R., & Koch, G. G. [1977]. The measurement of observer agreement for categorical data. Biometrics, 33, 159-174).]
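The report does not spell out how the AM-ICC is computed. Purely as an illustration of how an intraclass correlation can be derived from an items-by-raters score matrix, the sketch below computes the standard one-way ICC(1); this is an assumption for illustration, not necessarily the exact statistic the contractor used.

```python
import numpy as np

def icc1(scores):
    """One-way random-effects ICC(1) for an n-items x k-raters matrix.
    Illustrative stand-in only: the 'AM-ICC' variant cited above is not
    defined in this report."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    item_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    # Between-item and within-item mean squares from a one-way ANOVA.
    msb = k * ((item_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((scores - item_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented example: four products, each scored 0-9 by three panelists.
ratings = [[6, 7, 6], [3, 4, 3], [8, 8, 7], [5, 5, 6]]
print(round(icc1(ratings), 3))  # high agreement -> ICC near 0.9
```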

Target Context. Targets for this measure were established on the basis of data obtained in 2006 and 2007.

Explanation.

Description of Method:
Panelists individually assess the quality of each product and service on two dimensions:
(1) Substance – Does the product reflect evidence of conceptual soundness and quality, grounded in recent scientific evidence, legislation, policy, or accepted professional practice? (2) Communication – Is the product presented in such a way as to be clearly understood, as evidenced by being well-organized, free of editorial errors, and appropriately formatted? The extent to which the product or service meets the substance criterion is measured using a seven-point scale from 0=Unacceptable, 1 to 2=Low, 3 to 4=Acceptable, and 5 to 6=Superior. The extent to which the product or service meets the communication criterion is measured using a four-point scale from 0=Unacceptable, 1=Low, 2=Acceptable, and 3=Superior. Panelists complete the form for each product and service.
Scores on quality ranged between 0 and 9. Products and services scoring 6 or higher were defined as having high quality.
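As a compact restatement of this rule, the sketch below applies the stated thresholds to one item's panel scores. The data structure is illustrative; the 0-6 substance scale, 0-3 communication scale, and cutoff of 6 are those given above.

```python
def is_high_quality(panel_scores):
    """panel_scores: one (substance 0-6, communication 0-3) pair per panelist
    for a single product or service. An item is high quality when the panel's
    average total score is 6 or higher, per the rule stated above."""
    totals = [substance + communication for substance, communication in panel_scores]
    return sum(totals) / len(totals) >= 6

# Invented example: three panelists review one product.
print(is_high_quality([(5, 3), (4, 2), (5, 2)]))  # average total 7.0 -> True
```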
Description of Calculation:
The quality measure is calculated as follows:
The total number of TA&D Center products and services reviewed by the Science Expert Panel with average quality scores of 6 or higher, divided by the total number of TA&D Center products and services reviewed, times the proportion of the FY 2009 budget spent on TA&D projects, times 100%, PLUS the total number of State Deaf-Blind project products and services reviewed by the Science Expert Panel with average quality scores of 6 or higher, divided by the total number of State Deaf-Blind project products and services reviewed, times the proportion of the FY 2009 budget spent on State Deaf-Blind projects, times 100%.
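Stated symbolically (a restatement of the prose above, where $H$ is the count of reviewed products and services with average quality scores of 6 or higher, $N$ is the total number reviewed, and $w$ is the corresponding FY 2009 budget share):

\[
\text{Result} = \left( \frac{H_{\mathrm{TA\&D}}}{N_{\mathrm{TA\&D}}}\, w_{\mathrm{TA\&D}} + \frac{H_{\mathrm{DB}}}{N_{\mathrm{DB}}}\, w_{\mathrm{DB}} \right) \times 100\%
\]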
2010 Result = [(27 84.326 products and services scored as high quality/31) x .78 + (14 84.326C products and services scored as high quality/16) x .22] x 100% = (0.679 + 0.195) x 100% = 87.4%

Measure 2.2 of 4: The percentage of Technical Assistance and Dissemination products and services deemed by an independent review panel of qualified experts to be of high relevance to educational and early intervention policy or practice. (Desired direction: increase)
Year / Target / Actual (or date expected) / Status
2006 / 63 / Measure not in place
2007 / 94 / Measure not in place
2008 / Set a Baseline / 94.9 / Target Met
2009 / 90 / 94.3 / Target Exceeded
2010 / 92 / 95.9 / Target Exceeded
2011 / 94 / (October 2011) / Pending

Source. Description of Expert Panel Review – OSEP provides an outside contractor with a list of 15-20 potential State Stakeholder Panel members who are State special education administrators who oversee IDEA Part B or Part C State programs. The contractor randomly selects, contacts, and secures the review services of six to eight panel members, ensuring that no panelist is currently employed by an OSEP-funded program, grant, or project.
The Stakeholder Panel reviews a randomly selected sample of products and services made available through OSEP-funded Technical Assistance and Dissemination Centers (84.326 grantees) and Deaf-Blind Projects (84.326C grantees). The grantees selected to provide data for this measure supply copies of one product and one service along with completed Product and Service Description forms.
The program office defines a “product” as “a piece of work, in text or electronic form, developed and disseminated by an OSEP-funded project to inform a specific audience on a topic relevant to the improvement of outcomes for children with disabilities.” A “service” is defined as “work performed by an OSEP-funded project to provide information to a specific audience relevant to the improvement of outcomes for children with disabilities.”
Projects randomly selected to provide data for this measure supply a copy of one product and one service for each panel member, along with a Product and Service Description form for each submitted product or service. The purpose of the form is to standardize the information used in the panel review. Information categories contained in the forms include: product or service name, target audience, alignment with program office target investment area (assessment; literacy; behavior; instructional strategies; early intervention; secondary transition; and inclusive practices), classification of the product/service as evidence- or policy-based, and a description of the research basis, if any, for the product or service.
Panelists independently assess each product and service according to the product/service relevance criteria described below in the Explanation section.

Data Quality. This measure applies to 84.326C grants (52 grants funded in FY 2009) and 84.326 grants (26 grants funded in FY 2009). 84.326M grants, which are also funded under the Special Education Technical Assistance and Dissemination Program, are not included in this measure. This year, one of the 26 84.326 grants was in its first year of funding with no new products and services, and so was omitted from the sampling frame.
The data for this measure are derived from a sample of 84.326 projects funded during the prior fiscal year, selected as follows.
The sample is selected by an outside contractor according to three parameters established by the program office: (1) a 20% stratified random sample of 84.326C grants, (2) a 50% random sample of all other 84.326 grants, plus (3) a purposive sample of additional 84.326 grants to ensure that the sample addresses all program office target investment areas (i.e., assessment, literacy, behavior, instructional strategies, early intervention, secondary transition, and inclusive practices).
The contractor selected 10 84.326C grants by first randomly selecting four of the six IDEA Part D Regional Resource Center (RRC) regions, randomly selecting two grants from within each of these regions, and then randomly selecting one 84.326C grant from each of the two remaining RRC regions. The contractor then chose 13 84.326 grants at random and purposely included three 84.326 grants in the sample to represent the program office’s target investment areas not addressed by the randomly selected grants.
The total number of grants in the 2010 sample for this measure was 26. Grants in the sample were representative with respect to the 84.326 and 84.326C target investment areas as well as representing a range of funding levels, target audiences, and disabilities addressed.
The sample of new products and services developed by selected grantees is selected in the following way: The contractor requests a list of new products and services from each grantee, and randomly selects one new product and one new service from each grantee list. The contractor then requests that the grantees send copies of the selected products and services for review, along with a New Product or New Service Description form.
The contractor provides copies of each item to the State Stakeholder Panel, along with instructions for accessing, reviewing, and assessing the relevance of each product and service. Panelists rate the products and services using the Relevance Assessment Scoring Guide described below.
The AM-ICC inter-rater reliability score for the TA&D product and service relevance review was 0.502, representing moderately consistent results across the reviewers. [Note: values of AM-ICC from 0.40 to 0.50 are considered moderate, 0.51 to 0.70 are considered substantial, and 0.71 or higher are considered outstanding (Landis, J. R., & Koch, G. G. [1977]. The measurement of observer agreement for categorical data. Biometrics, 33, 159-174).]

Target Context. Targets were established on the basis of data collected in 2006 and 2007.

Explanation.

Explanation of Method:
Panelists individually assess the relevance of each product and service on three dimensions: need, pertinence, and reach.
Panelists follow the Relevance Assessment Scoring Guide, which addresses the three criteria: (1) Need – Does the content of the product or service attempt to solve an important problem or critical issue? (2) Pertinence – Does the content of the product or service tie directly to a problem or issue recognized as important by the target audience? and (3) Reach – To what extent is the content of the product or service applicable to diverse segments of the target audience? The extent to which the product or service meets each criterion is measured using a four-point scale from 0=Unacceptable, 1=Low, 2=Acceptable, and 3=Superior. Panelists complete the form for each product and service.
High relevance is defined as an average total panel score of 6 or higher across the three relevance criteria.
Explanation of Scoring Calculation:
The calculation provides the proportion of products and services, weighted according to 84.326 and 84.326C program funding, that are scored by the expert panel as highly relevant. The calculation is:
The number of TA&D Center products and services reviewed by the State Stakeholder expert panel with average relevance scores of 6 or higher, divided by the number of TA&D Center products and services reviewed, times the proportion of the FY 2008 budget spent on TA&D projects, times 100, PLUS the number of State Deaf-Blind project products and services reviewed by the State Stakeholder expert panel with average relevance scores of 6 or higher, divided by the number of State Deaf-Blind project products and services reviewed, times the proportion of the FY 2008 budget spent on State Deaf-Blind projects, times 100.
2010 Result = [(31 84.326 products and services scored as highly relevant/31) x .78 + (13 84.326C products and services scored as highly relevant/16) x .22] x 100 = 95.9%
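A quick arithmetic check of this result, using the counts and rounded budget weights quoted above (a sketch; the .78/.22 weights are the rounded budget shares given in the calculation):

```python
# Weighted proportion of items rated highly relevant (Measure 2.2, 2010).
tad = 31 / 31   # 84.326 TA&D items rated highly relevant / items reviewed
db = 13 / 16    # 84.326C Deaf-Blind items rated highly relevant / items reviewed
result = (tad * 0.78 + db * 0.22) * 100
print(f"{result:.1f}%")  # -> 95.9%
```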

Measure 2.3 of 4: The percentage of all Special Education Technical Assistance and Dissemination products and services deemed by an independent review panel of qualified experts to be useful to improve educational or early intervention policy or practice. (Desired direction: increase)
Year / Target / Actual (or date expected) / Status
2005 / 43 / Measure not in place
2006 / Set a Baseline / 46 / Target Met
2007 / 48 / 77.6 / Target Exceeded
2008 / 50 / 82.1 / Target Exceeded
2009 / 52 / 84.9 / Target Exceeded
2010 / 54 / (October 2011) / Pending

Source. Description of Expert Panel Review – OSEP provides an outside contractor with a list of 15-20 potential State Stakeholder Panel members who are State special education administrators who oversee IDEA Part B or Part C State programs. The contractor randomly selects, contacts, and secures the review services of six to eight panel members, ensuring that no panelist is currently employed by an OSEP-funded program, grant, or project.
The Stakeholder Panel reviews a randomly selected sample of products and services made available through OSEP-funded Technical Assistance and Dissemination Centers (84.326 grantees) and Deaf-Blind Projects (84.326C grantees). The grantees selected to provide data for this measure supply copies of one product and one service along with completed Product and Service Description forms.
The program office defines a “product” as “a piece of work, in text or electronic form, developed and disseminated by an OSEP-funded project to inform a specific audience on a topic relevant to the improvement of outcomes for children with disabilities.” A “service” is defined as “work performed by an OSEP-funded project to provide information to a specific audience relevant to the improvement of outcomes for children with disabilities.”
Projects randomly selected to provide data for this measure supply a copy of one product and one service for each panel member, along with a Product and Service Description form for each submitted product or service. The purpose of the form is to standardize the information used in the panel review. Information categories contained in the forms include: product or service name, target audience, alignment with program office target investment area (assessment; literacy; behavior; instructional strategies; early intervention; secondary transition; and inclusive practices), classification of the product/service as evidence- or policy-based, and a description of the research basis, if any, for the product or service.
Panelists independently assess the usefulness of each product and service according to the usefulness criteria described below in the Explanation section.

Frequency of Data Collection: Annual

Data Quality. This measure applies to 84.326C grants (52 grants funded in FY 2009) and 84.326 grants (26 grants funded in FY 2009). 84.326M grants, which are also funded under the Special Education Technical Assistance and Dissemination Program, are not included in this measure. This year, one of the 26 84.326 grants was in its first year of funding with no new products and services, and so was omitted from the sampling frame.
The data for this measure are derived from a sample of 84.326 projects funded during the prior fiscal year, selected as follows.
The sample is selected by an outside contractor according to three parameters established by the program office: (1) a 20% stratified random sample of 84.326C grants, (2) a 50% random sample of all other 84.326 grants, plus (3) a purposive sample of additional 84.326 grants to ensure that the sample addresses all program office target investment areas (i.e., assessment, literacy, behavior, instructional strategies, early intervention, secondary transition, and inclusive practices).
The contractor selected 10 84.326C grants by first randomly selecting four of the six IDEA Part D Regional Resource Center (RRC) regions, randomly selecting two grants from within each of these regions, and then randomly selecting one 84.326C grant from each of the two remaining RRC regions. The contractor then chose 13 84.326 grants at random and purposely included three 84.326 grants in the sample to represent the program office’s target investment areas not addressed by the randomly selected grants.
The total number of grants in the 2010 sample for this measure was 26. Grants in the sample were representative with respect to the 84.326 and 84.326C target investment areas as well as representing a range of funding levels, target audiences, and disabilities addressed.
The sample of new products and services developed by selected grantees is selected in the following way: The contractor requests a list of new products and services from each grantee, and randomly selects one new product and one new service from each grantee list. The contractor then requests that the grantees send copies of the selected products and services for review, along with a New Product or New Service Description form.
The contractor provides copies of each item to the State Stakeholder Panel, along with instructions for accessing, reviewing, and assessing the usefulness of each product and service. Panelists rate the products and services using the Usefulness Assessment Scoring Guide described below.
The AM-ICC inter-rater reliability score for the TA&D product and service usefulness review was 0.604, representing substantial consistency of ratings across reviewers. [Note: values of AM-ICC from 0.40 to 0.50 are considered moderate, 0.51 to 0.70 are considered substantial, and 0.71 or higher are considered outstanding (Landis, J. R., & Koch, G. G. [1977]. The measurement of observer agreement for categorical data. Biometrics, 33, 159-174).]