Teacher Evaluation and Effectiveness Report

Written By:
Shannon Smith, Michigan Virtual

Suggested Citation

Smith, Shannon (2018). Teacher Evaluation and Effectiveness Report. Lansing, MI: Michigan Virtual University. Retrieved from https://mvlri.org/research/publications/teacher-evaluation-and-effectiveness-report/

This report details Michigan Virtual’s efforts to adopt the Charlotte Danielson Framework for Teaching evaluation rubric for online teachers. Michigan Virtual Student Learning Services administration modified Danielson’s evaluation rubric to suit the online teaching context and developed an observation resource for use with the rubric. Phase 1 implementation of the evaluations was successful overall, both bringing Michigan Virtual into compliance with Public Act 173 through the adoption of an evaluation system and providing valuable professional development and growth opportunities for teachers.

Background

Public Act 173 (Michigan Educator Evaluations At-a-Glance, 2015) states that a Professional Education Certificate will not be issued unless an individual was rated effective or highly effective on his/her annual year-end performance evaluations for three consecutive years preceding the application for the Professional Education Certification or was rated effective or highly effective for at least three non-consecutive years preceding the application for Professional Education Certification and submits a recommendation from the chief school administrator of the school at which s/he is currently employed. (p. 11)

During the 2017-18 fiscal year, Michigan Virtual developed complementary goals, born out of Public Act 173, to align its internal teacher evaluation practice with the “rigorous, transparent and fair evaluation system for teachers” requirement (Michigan Educator Evaluations At-a-Glance, 2015). Additionally, Michigan Virtual wanted to provide better support for teachers in their efforts to remain certified and to extend their certification by obtaining a Professional Education Certificate.

Evaluation Development

Given these dual aims, Michigan Virtual set out to develop a teacher evaluation process aligned with Michigan Department of Education guidelines and recommendations. As part of the development process, Michigan Virtual administration reviewed the observation and rubric tools identified by the state (Michigan Educator Evaluations At-a-Glance, 2015), including:

  • Charlotte Danielson’s Framework for Teaching
  • Marzano Teacher Evaluation Model
  • Thoughtful Classroom
  • 5 Dimensions of Teaching and Learning

To meet the requirement of conducting annual teacher evaluations, Michigan Virtual decided to focus on implementing the Charlotte Danielson Model and the associated rubric. The model was chosen not only for its flexibility to adapt to an online environment, but also for its focus on teacher engagement.

Administration and a group of instructional leaders attended a two-day conference led by a Danielson Framework for Teaching trainer at the Hillsdale ISD. The conference provided information and resources to help evaluators modify the Danielson rubric to fit the contexts for various positions in their districts, including non-teaching positions.

During and following the training, the Michigan Virtual administrative team reviewed each of the Danielson rubric components and descriptions, as well as indicators and critical attributes, analyzing their application within an online environment. As a result, adjustments and eliminations were made to align the rubric with the day-to-day work of online educators. For example, the traditional Danielson rubric includes Component 2c: Managing Classroom Procedures. The observable elements for this component in a traditional classroom include transitions between activities, management of classroom materials and supplies, and performance of classroom routines, such as taking attendance and collecting homework. Component 2c is not relevant in the Michigan Virtual model for online instruction and was eliminated as a component for observing online teacher effectiveness. Component 2e: Organizing Physical Space was likewise eliminated because it does not apply to the online environment. Although no components were added to the evaluation rubric, clarifying points were added to the indicators and critical attributes to align them more appropriately with an online environment.

Following the initial rubric review and update, the team developed a supplemental, standalone walk-through resource to support observations required by the Danielson Framework for Teaching. This process was extensive and resulted in many changes to the resource to align with the structure of the Michigan Virtual approach to teaching online. The supplemental walk-through resource required the administrative team to consider critically what is observable in an online course and what correlates to components in the modified Danielson rubric. For example, where would an observer see a teacher demonstrating knowledge of resources or establishing culture in an online class? Michigan Virtual Student Learning Services (SLS) administrators were able to identify key elements particular to online learning that related to each teaching component. In terms of demonstrating knowledge of resources, we look for supplemental materials identified in weekly announcements and individual feedback to students. We can identify evidence of teachers establishing culture through their interactions with students on the discussion boards, in their welcome letters, and in elements of their announcement page. This supplemental walk-through resource is not a formal requirement of the evaluation process but is considered a key element in creating opportunities for dialog between the administrative observer and the teacher. Communication throughout the year can lead to better discussions of growth and opportunity when the formal evaluation is conducted using the modified Danielson rubric. The development of the supplemental walk-through resource did not affect the modified evaluation rubric but rather provided clear guidance for points of observation that would serve as the basis for conducting evaluations.

SLS administration determined the next step was to ask the full-time instructional staff to review and provide recommendations for further changes to the modified Danielson rubric and supplemental standalone walk-through resource for observations. Throughout October and November 2017, the administrative team worked with full-time Michigan Virtual teachers in focused work sessions to develop the most accurate and encompassing indicators and critical attributes for alignment. Full-time teachers engaged in discussions to hone observable critical attributes on both the supplemental resource and modified rubric; they also provided possible examples of these attributes in an online teaching environment. Following each of the interactive work sessions with this group, additional changes were made and more elements particular to online learning were included. Simultaneously, administrators worked with Human Resources to identify a resource to help scale and store the critical components of the evaluation process and allow for ongoing reflection and two-way communication between teachers and administrators to support the feedback process.

Based on the research conducted by Human Resources, the Frontline Professional Growth suite was selected as the system to develop and house the evaluations. Frontline allowed for the use of the modified version of the Danielson tool and provided features that were critical to the reflective process. This was important to the organization in its effort to grow and train effective online educators.

Phase 1 Implementation

With all resources in place and the modified Danielson evaluation rubric finalized, the administrative team conducted class observations using the supplemental walk-through resource via Frontline in December 2017. Each walk-through observation resulted in direct feedback, delivered by direct supervisors, about areas where teachers were showing success and areas where they could still grow. Teachers had the opportunity to engage in dialog if they needed further direction or had questions. Administrators then made additional observations using the supplemental walk-through resource to provide teachers with detailed feedback prior to the evaluation. Conducted throughout the year, these observations allowed teachers to continue growing over time and to receive formative feedback before the summative formal evaluation at the end of the year. This process also prevented surprising observations or feedback from arising during the formal evaluation discussion.

Implementation of the full evaluation using the modified Danielson rubric occurred in April and May 2018. As part of this process, teachers were asked to complete a pre-observation form, which focused on reflection of their teaching practice and insights they wanted to provide the evaluator. Once the pre-observation form was complete, the supervising administrator conducted the observation portion of the evaluation using the walk-through resource. Following the observation, the administrator submitted an evaluation using the modified Danielson rubric. Both teachers and administrators had access to all steps in the process through Frontline. Teachers were also required to acknowledge receipt of the evaluation within the Frontline system.

Once the teacher acknowledged receipt of the evaluation, the administrator assigned a post-observation form. This allowed teachers to reflect and consider the feedback provided in the evaluation prior to the formal meeting between administrator and teacher. The formal meeting was meant to continue discussion related to teacher growth and provide time to collaborate on professional development options available to support this continued growth.

Outcomes

Although improvements were needed in some areas, reflection on Phase 1 implementation of the modified Danielson Model determined that it was an overall success. The goals of providing a rubric that accurately evaluates the unique work of online teachers while also meeting the legislative requirement detailed above were achieved. The rubric provided a multifaceted view of teaching practices, which allowed administrators to better align professional development needs and support services. For example, if a teacher received a lower-than-expected rating in setting instructional outcomes (Danielson Component 1c), their supervisor could provide support in writing learning targets and identify additional professional development opportunities through which the teacher could continue to grow this ability. Another example relates to communication with families (Danielson Component 4c). Within our unique environment, this can be a challenge, as our teachers do not have direct access to guardian information unless it is provided by the partner district. If this is identified as an area for improvement, a supervisor can provide resources and a mentor to support the online teacher’s development.

Teachers also indicated that the feedback provided during the pilot gave them a clearer understanding of the areas in which they could grow and expand their practice. For instance, during one of the follow-up formal meetings between supervisor and teacher, a teacher indicated that, compared to the previous evaluation tool, the indicators and feedback aligned better with his work as a teacher. As a result, he felt he had a better sense of how to improve his practice and how the professional development offered would support that.

As part of our process, SLS administrators met to review the results of teacher evaluations. Indicators and critical attributes were reviewed, and several were modified or further clarified to support clear understanding of expectations. In addition, it was determined that certain components and/or elements needed to be removed. The eliminated items reflected a more standard face-to-face practice rather than Michigan Virtual’s online teaching and learning environment. For example, it was determined that Component 1f: Designing Student Assessments of the Danielson Model would be removed from future evaluations. Michigan Virtual’s curriculum and assessments are built by a curriculum development team, and limited change is applied in an active course by the teacher of record in an effort to maintain consistency across sections. This serves a critical purpose in our work partnering with local districts as they do not want two students in the same class (or lab) setting to have a vastly different experience in an online course. As a result, we determined that using Component 1f was not appropriate as part of the teacher’s evaluation.

Future Directions

Moving forward, this evaluation method will be expanded to include the part-time teachers who currently teach for Michigan Virtual. This fall, Michigan Virtual will expand use of the new tool to over 150 part-time teachers working with students across Michigan. This expansion allows for broader review and evaluation of teaching practices to ensure alignment with effectiveness and best practices in the online environment.

There is still work to be done regarding the reporting of evaluation data. Currently, Michigan Virtual is responsible for relaying effectiveness data for all of its teachers to local districts via the Registry of Educational Personnel report. As a result, teachers now receive dozens of effectiveness scores each year in the Michigan Online Educator Certification System (see Figure 1 below).

Figure 1. Screenshot detailing rows of teacher effectiveness scores in the Michigan Online Educator Certification System.

This has resulted in confusion and frustration for the teachers who work with Michigan Virtual. As an organization, we would like to work with the Michigan Department of Education (MDE) to determine the best way to communicate effectiveness ratings for online teachers: one that provides clarity of responsibility, avoids confusion, does not interfere with local actions taken as a result of teaching practice, and provides the data the MDE needs for the various initiatives and programs it implements. A potential area for consideration is having the employing body, in this case Michigan Virtual, report effectiveness ratings rather than the contracting body, the school district. This may provide a more accurate record and reduce rating variability by district.

This situation points to another foreseeable concern with the current rating system: it does not distinguish between face-to-face and virtual teaching effectiveness. Many Michigan Virtual teachers also teach in a traditional setting for which they receive effectiveness ratings. The current process could lead to instances where the local district reports one rating reflecting the face-to-face setting and the online provider reports a contradictory rating for the person’s virtual teaching effectiveness. A future system that can differentiate between the two will be important to allow local schools and online providers to take the steps needed regarding ineffective teachers in their respective mediums without the complication of potentially contradictory ratings.

References

Michigan Department of Education. (2015). Educator evaluations at-a-glance. Lansing, MI: Michigan Department of Education.

Additional Resources

Danielson Framework for Teaching Evaluation Instrument

Michigan Virtual Modified Danielson Model Rubric for Evaluation
