For centuries, the prevailing assumption about learning has been that the teacher tells, shows, or demonstrates facts, knowledge, rules of action, and principles, and then the student practices them (Burton, 1996). The ways in which students have been assessed reflect this assumption. In the last decade, however, there has been something of a "revolution" in the world of education. Educators and policy makers alike have worked to develop new methods of teaching and learning that center on improving assessment performance standards, in hopes of raising the standard of education.
With the advent of new accountability standards such as the "No Child Left Behind" Act (Bush, 2001), designed to help educators improve the quality of education for children, educators are finding new and creative ways to provide instruction that move teaching beyond the "telling, showing, and demonstration of facts." However, the costs associated with these new "standards of education" are antithetical to what the standards were designed to accomplish, which in turn affects students and educators alike.
This paper will examine the history of performance assessments and the ways in which high-stakes testing has become the dominant pedagogy for teachers in America. It will also consider the pros and cons of performance assessments, along with some strategies for improving the overall quality of education for children in America.
History of Performance Assessments
Since testing was first introduced as a policy mechanism in China in 210 B.C., there have been four major ways to assess behavior and performance. First, one can ask the person to supply an oral or written answer to a series of questions (e.g., essay questions). Second, one can ask a person to produce a product (e.g., a portfolio of artwork, a research paper,