Choosing or creating an evaluation model?

Published on Friday, October 12, 2012

Here are questions that you should ask

Is your district developing its own teacher evaluation framework to submit to the N.J. Department of Education for approval, or is it still selecting from among approved models established by others? Here are some questions to use with the district and with developers to assess those evaluation tools.

In addition, materials from the four primary evaluation models and video highlights from last spring’s NJEA-PSA evaluation symposium are available at njea.org/evaluation.

Be sure to visit Evaluation Central at the NJEA Convention where you can learn more about the four evaluation models currently being piloted in many districts across New Jersey. See Page 7 of the NJEA Convention Program for details.

Overall model

  1. When was the model developed, by whom, and what was that individual’s expertise and background? What published materials are available that clearly identify the developer’s philosophies?
  2. How many and which school districts in New Jersey or other bargaining states are using the model, and how long have they used it? What has their experience been? Can you contact the local association presidents in those districts for feedback?
  3. Is the model positive (recognizing good practices, promoting and supporting teacher growth and collaboration, and encouraging solutions) or negative (focused only on shortcomings, limiting collaborative opportunities, not providing specific feedback and supportive tools, and assigning blame)?
  4. What protections are there against the model being implemented improperly?
  5. Is the model easily understood or does it use a lot of jargon? Is it comprehensive or is it riddled with gaps and uncertainties?
  6. Does the model promote a “continuum of learning,” taking into account overall teaching experience, as well as experience working in a specific assignment?
  7. Is the model framework extremely prescriptive relative to teacher practice or does it allow for creativity and flexibility?
  8. Is the model checklist driven or does it require a narrative to clarify observations? Look for: More than checklists.
  9. How does the model fit in with evaluation procedures outlined in the local collective bargaining agreement?
  10. Does the model call for professional self-reflection intended for the teacher’s own introspection, or for self-evaluation in the guise of “teacher reflection” that must be submitted to a supervisor for review?
  11. Is the model unfairly skewed toward certain types of teachers, such as those who heavily use educational technology tools or use a prescribed teaching method?
  12. What does the model say about use of student standardized test scores to evaluate teachers?
  13. How does the model recommend pupil performance be measured for teachers, regardless of what content or grade level they teach? How would it combine the teacher practice component with the “multiple objective measures of student growth”?
  14. How does the model accommodate teachers of special student populations?
  15. Does the model recommend using student surveys? How and why? Is it optional?
  16. Is the version of the model your district is considering the most recent one? If not, what are the differences between the version you would be using and the new version?
  17. How is the model aligned with the N.J. Standards for Teachers and the N.J. Professional Teaching Standards?
  18. What is the cost of implementing the model and what specifically does that cost include (training of administrators and teachers, resources, data software, tablets/netbooks/other computer hardware, etc.)? Were estimated costs obtained from the framework developer, from a subcontractor who has developed data-gathering tools and accessories for the model framework, or from a licensee authorized to use the model name as part of a data software system? Are you comparing like services to like services? Look for: Details on what is included and for quality, not just cost.
  19. What do the contracts with any vendors (whether framework or data) require?

Reliability

  1. How does the model assure that administrators are skilled in the observation tool and the rubric for identifying the characteristics tied to each rating?
  2. How does the model assure that administrators will implement the framework consistently from teacher to teacher?  How does it guarantee consistency regardless of which administrator conducts the observation or evaluation?
  3. What independent studies were conducted to support the efficacy of the model in enhancing teaching and learning?
  4. Can the approved provider’s model instrument be adapted by or for your district? If changed, is it still legally defensible or research-based?
  5. If the program is district-designed, were teachers and the association involved in developing the model and rubric? Has the model been tested extensively to see whether it contributes to improving teaching and learning?
  6. If the program is available from an outside vendor, what role did teachers and their organizations play in creating the framework?
  7. Does the model hold districts and schools accountable for providing certain supports for teachers and students and for assuring that the district-approved curricula reflect the Common Core and Core Curriculum Content Standards?

Process

  1. Does the model recommend a pre-conference with the evaluator before an observation and a post-conference afterward, with the teacher as an active participant encouraged to provide essential information about the class, the lesson, and other key data?
  2. How long is a recommended observation?
  3. What does the model recommend regarding “informal observations,” “walkthroughs,” or “short observations”? What is the goal of such an activity? Who conducts these? How long do they last? What training is provided? What forms and materials are used? Is there any discussion following these activities?
  4. Does the model require or encourage videotaping of teachers? For what purpose? Who gets to view and/or use the videotape?
  5. How will technology (such as iPads/tablets and video) be used in the evaluation process? Is technology driving the process and procedures?

Training

  1. Is comprehensive training provided for all teachers and supervisors?
  2. How much time is recommended for comprehensive training to occur?
  3. Is training limited to watching a video?
  4. Is a model-certified trainer providing the training?
  5. Does the model provide specific and thorough training for turnkey trainers? How is high quality training assured?
  6. Is training for teachers and administrators provided jointly?
  7. Is training required prior to any implementation of the model?
  8. Are clear materials defining the model, components, elements, rubric, and rating system provided to teachers and administrators as part of the training?
  9. What provisions are made for training new staff?
  10. If using a commercially available model, will there be contact persons – both for the model itself and the software provider – to whom the association can continue to direct questions during implementation?

Support and collaboration

  1. How is professional development linked to teacher evaluation? Does the model emphasize narrative observations and teacher-evaluator interaction or rely primarily on an observation checklist?
  2. Does the model treat evaluation as a continuous conversation only between the teacher and supervisor, with the latter doing most of the talking, or does it encourage a more collegial model, encompassing collaborative, confidential approaches among teachers?
  3. Does the model reflect collaborative professional development and assistance, or does it simply send you to watch a video?

Scoring

  1. How does the model recommend ratings be determined and used?
  2. Is the scoring system fair? For example, does an individual’s difficulty with any one item or a few designated items result in an “ineffective” score for the entire observation or evaluation?
  3. Are the distinctions among the ratings and their characteristics clearly defined for each domain, component, and element of the model?
  4. Are all elements given equal weight in coming up with a final overall rating? How is the overall rating determined?

Data collection

  1. What data collection and retention software is the district using to support this rubric?
  2. What access to the data is provided to administrators? Individual teachers? Others?
  3. What security measures are in place to ensure confidentiality is protected?
  4. What happens to the data if the district decides after a few years to move to a different system or vendor?
  5. How will the data be grouped and evaluated?
  6. What back-up measures are in place for the data?
