
Prioritize performance over testing in teacher assessments

As described in a recent news article, a dark shadow hangs over licensure testing for elementary teachers in North Carolina.

Sadly, many teachers who are new to the profession, or new to North Carolina, are failing the mathematics exams within the Pearson suite of licensure exams. As a teacher-educator and researcher in teacher assessment, I'd like to share some perspective on the alarming failure rate and explain why it should serve both as a call to re-examine the utility of licensure exams in North Carolina and as an inducement to prioritize performance assessments over content testing.

When the Pearson exams were implemented in 2014, the teacher-education community at North Carolina institutions of higher education was concerned. While the adoption of the exams brought benefits in test validity, it also raised concerns about content validity and alignment with the N.C. Professional Teaching Standards. The anticipated pass rates, based on estimates from the exams' use in Massachusetts, caused concern because the test-taker population there did not mirror the diversity of our North Carolina teacher population. And the shift lacked transparency: our state's teacher-education faculty had little opportunity to bring their expertise to bear on a decision that would seriously affect their work and the success of their graduates.

It’s four years later, and we find that new teachers are failing the Pearson math exam at unsettling rates.

Now, as noted in the N.C. Department of Public Instruction report presented to the State Board of Education, more study is needed to determine the predictive value of licensure exams. Are teachers who score higher on the Pearson exams better teachers? Do they have better principal evaluations of their performance? Do their students score higher on state achievement tests?

If the answer is no, the high failure rate and concern that good teachers are being “locked out” of classrooms are prime reasons to shift away from content knowledge tests to performance assessments for beginning teachers in all pathways to teaching.

Instead, we should be using teacher-performance assessments.

Teacher-performance assessments are portfolios of lessons, assessments, and teaching analyses that demonstrate a teacher candidate's readiness to teach. In North Carolina, edTPA and PPAT are two options, and soon each teacher-preparation program in the state will be required to implement a performance assessment as a state licensure requirement. At UNC-Chapel Hill, we first implemented edTPA in 2010 as a program completion requirement because of its promise as a valid and reliable measure of teacher candidate readiness.

Soon, all North Carolina licensure exams and teacher-performance assessment scoring will together cost over $500 per teacher candidate, a heavy burden for college students, career-changers, and others seeking to become teachers. Why keep both if one is predictive of performance?

It's time for North Carolina to prioritize teacher candidate performance over test-taking skills. It's time to shift away from content exams like the Pearson exams and embrace the value of teacher-performance assessments. With teacher-performance assessments, teacher-preparation programs have a rich, content-specific portfolio that focuses on the context in which learning occurs, leverages student assets, uses assessment data to inform instruction, and drives thoughtful analysis of teaching, all of which can be aligned to the N.C. Professional Teaching Standards and InTASC standards for beginning teachers. Furthermore, all new teachers, whether prepared in our schools of education, entering through lateral entry, or recruited from other states, should complete teacher-performance assessments like edTPA (consider the edTPA requirement in Pathways to Practice, a UNC-NCSU collaboration to prepare lateral-entry teachers).

More important, the predictive value of edTPA is a growing area of research on quality teacher preparation (see the work of educational researchers Kristin Gansle, Dan Goldhaber, Kevin Bastian, and others). In North Carolina, the UNC Educator Quality Dashboard links teacher performance to the preparing university or college. If the dashboard were extended to include edTPA data, we could link principal evaluations and student achievement scores to edTPA and examine its predictive capacity, something the NCDPI report notes is not currently possible with our licensure exams.

Additionally, the value of these analyses is two-fold, benefiting both new teachers and their hiring districts and the colleges and universities that prepared them. For districts, the analyses provide strong evidence for developing beginning-teacher support programs (see edTPA's influence on NC NTSP), which have the potential to increase teacher performance, retention, and longevity. For teacher-preparation programs, these analyses shine a light on strengths and gaps in the teacher-education curriculum and support continuous improvement efforts. Together, they highlight the additional benefits of teacher-performance assessments, like edTPA, over content exams like the Pearson math exams, which are failing to meet North Carolina's needs.

Given this evidence, legislators and policymakers should remove the unnecessary barriers to teaching posed by content licensure exams and invest in new teachers who demonstrate that they can perform in today's classrooms. The research indicates that teacher-performance assessments, like edTPA, are predictive of future outcomes for teachers and students, so why invest more time, money, and energy to validate a content exam that is causing such strife?

Instead, invest in performance, invest in beginning teachers who demonstrate their ability to teach when and where it matters—in North Carolina’s classrooms.

Resources

Gansle, K. A., Noell, G. H., & Burns, J. M. (2012). Do student achievement outcomes differ across teacher preparation programs? An analysis of teacher education in Louisiana. Journal of Teacher Education, 63(5), 304-317.

Goldhaber, D., Cowan, J., & Theobald, R. (2017). Evaluating prospective teachers: Testing the predictive validity of the edTPA. Journal of Teacher Education, 68(4), 377-393. http://journals.sagepub.com/doi/abs/10.1177/0022487117702582

Bastian, K. C., Henry, G. T., Pan, Y., & Lys, D. (2016). Teacher candidate performance assessments: Local scoring and implications for teacher preparation program improvement. Teaching and Teacher Education, 59, 1-12. https://www.sciencedirect.com/science/article/pii/S0742051X16300889

Bastian, K. C., Lys, D., & Pan, Y. (2018). A framework for improvement: Analyzing performance assessment scores for evidence-based teacher preparation program reforms. Journal of Teacher Education, 0022487118755700. http://journals.sagepub.com/doi/abs/10.1177/0022487118755700

Diana Lys

Diana Lys, Ed.D., is an assistant dean at UNC-Chapel Hill’s School of Education.