Testimony before the NJ State Board of Education

Proposed teacher evaluation regulations
by NJEA President Barbara Keshishian
April 3, 2013

Good afternoon.  I am Barbara Keshishian, the president of NJEA.  The New Jersey Education Association was proud to support TEACHNJ, Teacher Effectiveness and Accountability for the Children of New Jersey.  When this bill was signed into law, NJEA could say that we were partners in the process.  The new law was the result of collaboration and compromise on all sides. 

Unfortunately, I can’t make the same statements about the regulations to implement the law.  It is true that NJEA worked with the Department of Education on evaluation.  We encouraged our districts to become part of the pilot and had a successful two-way discussion about issues and ideas with the DOE.  However, that cooperation evaporated when it came time to write these regulations.  We did not receive any information about the actual content of these regulations until they were released as part of the agenda for last month’s state board meeting.  These regulations are problematic and, in our view, do not meet the intent and spirit of the new law.

TEACHNJ requires the State Board of Education to set standards for the approval of evaluation rubrics for teachers, principals, assistant principals, and vice principals.  However, we believe the DOE has gone well beyond the standard-setting requirements of the law and is seeking to impose its vision of teacher evaluation in these regulations on every school district in the state.  The problem is that this vision is untested and untried.

Teacher evaluation is changing across the nation.  But unlike states that had no evaluation parameters to begin with, New Jersey has had a comprehensive evaluation system in place since 1978.  Why not let local boards of education work within the confines of the standards required in TEACHNJ?  This would be in line with what the department is doing in other areas of code.  Why, when it comes to evaluation, is the state deviating from this approach?

As the department and this board adopt changes resulting from the Education Transformation Task Force, the DOE continually touts that it is moving away from a system of compliance to give districts the flexibility they need to operate.  This is true in virtually everything except evaluation, where once a district has chosen its evaluation model, it has no more choices or discretion to exercise.  Under the state’s view of the world, the state will determine the percentage of student achievement and teacher practice that determines a teacher’s rating.  The state will determine the percentage of standardized test scores that must be used in the student achievement portion.  The state will even calculate a teacher’s final rating for teachers in tested grades and subjects, although the district and teacher won’t receive that rating until halfway through the following school year.  We believe that districts should have some input into these decisions.  After all, it is the district, not the state, that will live with the results of the new evaluation procedures.

New Jersey has chosen to use student growth percentiles, or SGPs.  Much of our system is based on the example of Colorado.  Colorado is currently in a pilot year, and while Colorado will implement its new evaluation system in 2013-2014, it will be a “hold harmless” year.  Here in New Jersey, our pilot will be finishing in June, and we will begin to use the new system, one with significant consequences for practitioners and students alike, before we have done a thorough study of the pilot results.  That is not good practice.  Why bother even having a pilot?

This state, this department and this board need not rush into unproven, regimented systems.  Instead of prematurely forcing teachers and administrators into using a new system, let’s take a step back and have a real pilot program, one that tests these current regulations.  Have districts use the new standards, but give districts a range they can use to determine the student achievement portion of a teacher’s rating, and give supervisors the discretion to modify it as needed, based on specific circumstances.  See how it works.  Teachers individualize instruction for students based on specific needs.  Shouldn’t evaluation use the same model?

Everyone seems to agree that the primary goal of evaluation is to help improve instruction.  Yet this system will fail to meet that goal for those teachers subject to state standardized tests.  Timing is also a critical concern.  Teachers of language arts and math in grades 4-8 won’t receive their final summative evaluation score until halfway through the following school year.  So when a teacher meets with his or her supervisor, the supervisor will have to say “Right now, I think you’re highly effective, but I won’t know for sure until six months from now.  Go teach next year, and in December I’ll let you know how you’re doing.”  How are teachers supposed to improve their instruction with assistance from their supervisors when they don’t have the data they need in a timely fashion?  Flexibility in determining the evaluation formula would help address this problem.

Rushing into a high-stakes evaluation system would be a disservice to the students and teachers of New Jersey. The TEACHNJ Act requires that a new evaluation system be implemented in 2013-2014.  That system needs to “be partially based on multiple objective measures of student learning that use student growth from one year’s measure to the next year’s measure.”  The law does not give specific numbers that must be followed lock step in every school district in 2013-2014. 

Implementing a system that is partially based on test scores before it is thoroughly tested could result in misidentifying both ineffective and effective teachers.  Research shows that with three years of error-free value-added data (which is almost impossible to have), the chance of misidentifying a teacher’s effectiveness is 26%.  That means roughly one out of every four teachers could be misidentified as either effective or ineffective.  When only one year of data is available, the number rises to 36%.  Another study shows that ratings are not stable.  Among teachers rated in the top 20% of effectiveness one year, only one third of that group was still rated in the top 20% the following year, while one third was rated in the bottom 40% of teachers.

Think about it.  What happens when test scores are used as the most important measure of student achievement?  Teachers narrow their instruction and focus on tested skills, students get stressed, and good people make bad decisions when they are worried about the test results.  Just consider the recent news from Atlanta.

If the primary goal of evaluation is to improve instruction, we are already on that path.  Next year, all school districts in the state will be using comprehensive frameworks to evaluate teacher practice---there is no need for the state to set a pre-determined percentage of student growth based on state assessments.  Give districts flexibility to evaluate their teaching staff members using multiple measures, including standardized tests.  At the same time, the DOE can test the use of SGPs in a pilot that has the time to be fully tested and validated.  This will ensure that the system actually works, and that the results have meaningful implications for the improvement of instruction and learning---THE REAL GOAL OF ANY EVALUATION SYSTEM!

NJEA looks forward to working with you and the department on these regulations in a continuing cooperative and collaborative manner---it is our firm belief that working together will ensure the very best results.

Thank you for listening to our input and concerns today.