Validating an Observation Protocol to Measure Special Education Teacher Effectiveness

Authors

  • Dr. Evelyn S. Johnson (Author) Boise State University
    Dr. Johnson is a professor of Special Education at Boise State University and the executive director of Lee Pesky Learning Center, a non-profit organization whose mission is to improve the lives of people with learning disabilities. Dr. Johnson's research focuses on special education teacher evaluation, interventions for students with learning disabilities, and improving the way we identify students with learning disabilities.
  • Dr. Carrie L. Semmelroth (Author) Boise State University
    Dr. Semmelroth is a lecturer at Boise State University. Her research has focused on special education teacher evaluation.
https://doi.org/10.64546/jaasep.286
Abstract

This study used Kane’s (2013) Interpretation/Use Argument (IUA) approach to evaluate the validity of the Recognizing Effective Special Education Teachers (RESET) observation tool. The RESET observation tool is designed to evaluate special education teacher effectiveness using evidence-based instructional practices as the basis for evaluation. In alignment with other studies (Bell et al., 2012), we applied and interpreted Kane’s (2006) four inferences for trait observation: scoring, generalization, extrapolation, and decision rules. Results from this study suggest that the RESET observation tool shows promise for attaining acceptable levels of validity. Because the RESET observation tool is premised on the idea that increased use of evidence-based practices will lead to increased student achievement, further investigation of the relationship between fidelity of implementation and student achievement will be critical to moving the project work forward.
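
The generalization inference named in the abstract is the one most directly tied to measurement design: it asks how well scores from a particular sample of raters and observations generalize to the broader universe of admissible observations, a question typically examined with generalizability (G) theory (Brennan, 2001; Cardinet, Johnson, & Pini, 2010; Shavelson & Webb, 1991), which several of the references below address. The sketch that follows is not drawn from the article; it is a minimal illustration, using a hypothetical teachers-by-raters score matrix, of how variance components and a relative G coefficient might be estimated for ratings from an observation tool such as RESET.

```python
# Illustrative sketch only (not from the article): a one-facet, fully crossed
# teachers x raters generalizability (G) study of observation scores, in the
# spirit of Brennan (2001) and Shavelson & Webb (1991).
# The score matrix below is hypothetical.
import numpy as np

# Rows = teachers (object of measurement), columns = raters (facet).
scores = np.array([
    [3, 2, 3],
    [2, 2, 2],
    [1, 2, 1],
    [3, 3, 2],
    [2, 1, 2],
], dtype=float)

n_t, n_r = scores.shape
grand = scores.mean()

# Sums of squares for a crossed t x r design with one observation per cell.
ss_t = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_r = n_t * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_t - ss_r

# Mean squares.
ms_t = ss_t / (n_t - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_t - 1) * (n_r - 1))

# Estimated variance components (negative estimates truncated at zero).
var_res = ms_res                          # teacher x rater interaction + error
var_t = max((ms_t - ms_res) / n_r, 0.0)   # teacher (true-score) variance
var_r = max((ms_r - ms_res) / n_t, 0.0)   # rater severity/leniency variance

# Relative G coefficient: proportion of observed-score variance attributable
# to teachers when each teacher's score is the mean over n_r raters.
g_relative = var_t / (var_t + var_res / n_r)

print(f"teacher variance:   {var_t:.3f}")
print(f"rater variance:     {var_r:.3f}")
print(f"residual variance:  {var_res:.3f}")
print(f"relative G ({n_r} raters): {g_relative:.3f}")
```

With real RESET data the design would include additional facets (for example, lessons or occasions) and could be extended to decision studies projecting reliability under different numbers of raters or observations, but the decomposition logic is the same.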

References

Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., … Shepard, L. A. (2010). Problems with the use of student test scores to evaluate teachers (Briefing Paper No. 278, pp. 1–29). Washington, DC: Economic Policy Institute. Retrieved from http://epi.3cdn.net/b9667271ee6c154195_t9m6iij8k.pdf

Bell, C. A., Gitomer, D. H., McCaffrey, D. F., Hamre, B. K., Pianta, R. C., & Qi, Y. (2012). An argument approach to observation protocol validity. Educational Assessment, 17(2-3), 62–87. DOI: https://doi.org/10.1080/10627197.2012.715014

Brennan, R. L. (2001). Generalizability theory. New York: Springer. DOI: https://doi.org/10.1007/978-1-4757-3456-0

Browder, D., Ahlgrim-Delzell, L., Spooner, F., Mims, P. J., & Baker, J. N. (2009). Using time delay to teach literacy to students with severe developmental disabilities. Exceptional Children, 75(3), 343–364. DOI: https://doi.org/10.1177/001440290907500305

Browder, D. M., & Cooper-Duffy, K. (2003). Evidence-based practices for students with severe disabilities and the requirement for accountability in “No Child Left Behind.” The Journal of Special Education, 37(3), 157–163. DOI: https://doi.org/10.1177/00224669030370030501

Buzick, H. M., & Laitusis, C. C. (2010). Using growth for accountability: Measurement challenges for students with disabilities and recommendations for research. Educational Researcher, 39(7), 537–544. DOI: https://doi.org/10.3102/0013189X10383560

Cardinet, J., Johnson, S., & Pini, G. (2010). Applying generalizability theory using EduG. New York, NY: Routledge. DOI: https://doi.org/10.4324/9780203866948

Chard, D. J., Ketterlin-Geller, L. R., Baker, S. K., Doabler, C., & Apichatabutra, C. (2009). Repeated reading interventions for students with learning disabilities: Status of the evidence. Exceptional Children, 75(3), 263–281. DOI: https://doi.org/10.1177/001440290907500301

Connelly, V., & Graham, S. (2009). Student teaching and teacher attrition in special education. Teacher Education and Special Education, 32(3), 257–269. DOI: https://doi.org/10.1177/0888406409339472

Cook, B. G., & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79(2), 135–144. DOI: https://doi.org/10.1177/0014402913079002021

Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75(3), 365–383. DOI: https://doi.org/10.1177/001440290907500306

Danielson, C. (2013). The framework for teaching evaluation instrument, 2013 Edition (2nd ed.). Princeton, NJ: Danielson Group.

Fuchs, L. S., & Fuchs, D. (2005). Enhancing mathematical problem solving for students with disabilities. The Journal of Special Education, 39(1), 45–57. DOI: https://doi.org/10.1177/00224669050390010501

Gersten, R., Chard, D. J., Jayanthi, M., Baker, S. K., Murphy, P., & Flojo, J. (2009). Mathematics instruction for students with learning disabilities: A meta-analysis of instructional components. Review of Educational Research, 79(3), 1202–1242. DOI: https://doi.org/10.3102/0034654309334431

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M. S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71(2), 149–164. DOI: https://doi.org/10.1177/001440290507100202

Herlihy, C., Karger, E., Pollard, C., Hill, H. C., Kraft, M. A., Williams, M., & Howard, S. (2014). State and local efforts to investigate the validity and reliability of scores from teacher evaluation systems. Teachers College Record, 116(1). DOI: https://doi.org/10.1177/016146811411600108

Hill, H. C., Charalambous, C. Y., & Kraft, M. A. (2012). When rater reliability is not enough: Teacher observation systems and a case for the generalizability study. Educational Researcher, 41(2), 56–64. DOI: https://doi.org/10.3102/0013189X12437203

Ho, A. D., & Kane, T. J. (2013). The reliability of classroom observations by school personnel. Retrieved from http://www.metproject.org/downloads/MET_Reliability_of_Classroom_Observations_Research_Paper.pdf

Holdheide, L., Browder, D., Warren, S., Buzick, H., & Jones, N. (2012). Summary of “Using student growth to evaluate educators of students with disabilities: Issues, challenges, and next steps” (pp. 1–36). Retrieved from http://www.gtlcenter.org/sites/default/files/docs/TQ_Forum_SummaryUsing_Student_Growth.pdf

Holdheide, L., Hayes, L., & Goe, L. (2013). Evaluating specialized instructional support personnel: Supplement to the practical guide to designing comprehensive teacher evaluation systems. Washington, DC: Center on Great Teachers and Leaders. Retrieved from http://www.gtlcenter.org/tools-publications/publications

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165–180. DOI: https://doi.org/10.1177/001440290507100203

Johnson, E. S., & Semmelroth, C. L. (2012). Examining interrater agreement analyses of a pilot special education observation tool. Journal of Special Education Apprenticeship, 1(4). Retrieved from http://josea.info/index.php?page=vol1no2 DOI: https://doi.org/10.58729/2167-3454.1014

Johnson, E. S., & Semmelroth, C. L. (2013). Sources of variance in a special education observation tool. Poster presented at the Pacific Coast Research Conference, Coronado, CA.

Johnson, E. S., & Semmelroth, C. L. (2014a). Special education teacher evaluation: Why it matters and what makes it challenging. Assessment for Effective Intervention, 39(2). DOI: https://doi.org/10.1177/1534508413513315

Johnson, E. S., & Semmelroth, C. L. (2014b). Validating an observation tool to measure teacher effectiveness. Presentation at the Pacific Coast Research Conference, Coronado, CA.

Jones, N. D., & Brownell, M. T. (2014). Examining the use of classroom observations in the evaluation of special education teachers. Assessment for Effective Intervention, 39(2), 112–124. DOI: https://doi.org/10.1177/1534508413514103

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17–64). Westport, CT: Praeger.

Kane, M. T. (2013). The argument-based approach to validation. School Psychology Review, 42(4), 448–457. DOI: https://doi.org/10.1080/02796015.2013.12087465

Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains (MET Project research paper, pp. 1–68). Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/MET_Gathering_Feedback_Research_Paper.pdf

McLeskey, J., Tyler, N. C., & Flippin, S. S. (2004). The supply of and demand for special education teachers: A review of research regarding the chronic shortage of special education teachers. The Journal of Special Education, 38(1), 5–21. DOI: https://doi.org/10.1177/00224669040380010201

Mead, S., Rotherham, A., & Brown, R. (2012). The hangover: Thinking about the unintended consequences of the nation’s teacher evaluation binge.

Morgan, P. L., Frisco, M. L., Farkas, G., & Hibel, J. (2008). A propensity score matching analysis of the effects of special education services. The Journal of Special Education, 43(4), 236–254. DOI: https://doi.org/10.1177/0022466908323007

National Autism Center. (2009). National standards report. Randolph, Massachusetts. Retrieved from http://www.nationalautismcenter.org/nsp/reports.php

Odom, S. L. (2009). The tie that binds: Evidence-based practice, implementation science, and outcomes for children. Topics in Early Childhood Special Education, 29(1), 53–61. DOI: https://doi.org/10.1177/0271121408329171

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137–148. DOI: https://doi.org/10.1177/001440290507100201

Semmelroth, C. L. (2013). Using generalizability theory to measure sources of variance on a special education teacher observation tool (Doctoral dissertation). Boise State University, Boise, ID.

Semmelroth, C. L., & Johnson, E. S. (2014). Measuring rater reliability on a special education observation tool. Assessment for Effective Intervention, 39(2). DOI: https://doi.org/10.1177/1534508413511488

Semmelroth, C. L., Johnson, E. S., & Allred, K. (2013). Special educator evaluation: Cautions, concerns and considerations. Journal of the American Academy of Special Education Professionals. DOI: https://doi.org/10.64546/jaasep.222

Shavelson, R. J., & Webb, N. M. (1991). Generalizability theory: A primer. Newbury Park, CA: Sage Publications.

Spooner, F., Knight, V. F., Browder, D. M., & Smith, B. R. (2012). Evidence-based practice for teaching academics to students with severe developmental disabilities. Remedial and Special Education, 33(6), 374–387. DOI: https://doi.org/10.1177/0741932511421634

Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project. Retrieved from http://widgeteffect.org/downloads/TheWidgetEffect.pdf

Recommended Citation

Johnson, E. S., & Semmelroth, C. L. (2015). Validating an Observation Protocol to Measure Special Education Teacher Effectiveness. Journal of the American Academy of Special Education Professionals, 10(3), 99–119. https://doi.org/10.64546/jaasep.286

Article Information

  • Article Type Articles
  • Submitted August 24, 2015
  • Published October 15, 2015
  • Issue Fall 2015
  • Section Articles