No Child Left Behind
Standardized testing has become the archetype of public education in the United States, and the results are generally accepted as a valid assessment of educational accomplishment. As standardized testing increases, pressure to raise scores builds, which in turn can lead to score pollution, which undermines the validity of test score results. Standardized tests are based on behaviorist psychological theories from the nineteenth century. While our understanding of the brain and how people learn and think has progressed enormously, the tests have remained the same. Behaviorism assumed that knowledge could be broken into separate parts and that people learned by passively absorbing these parts. Today, cognitive and developmental psychologists understand that knowledge is not separable into parts and that people (including children) learn by connecting what they already know with what they are trying to learn. If they cannot actively make meaning out of what they are doing, they do not learn or remember. Yet most standardized tests do not incorporate these modern theories and are still based on the regurgitation of isolated facts and narrow skills (Fairtest.org, 2006). Should standardized tests be used to evaluate the quality of educators and/or the success of schools? There are two possible answers: the tests and testing methods are appropriate, or the use of standardized tests should be evaluated and changed.
Independent and Dependent Variables
In order to develop an appropriate solution to the problem, the variables must be identified. The dependent variable is the variable whose value is the consequence, or a function, of the control or independent variables (Cooper & Schindler, 2003). In the standardized test setting, the dependent variable is the measurement of a student’s knowledge (or lack thereof), and the independent variable is the standardized test.
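This relationship, with the dependent variable expressed as a function of the independent variable, can be sketched as a toy model. Everything here (the function name, the capture rates, the scores) is a hypothetical illustration, not real test data:

```python
# Toy model of the relationship described above: the dependent variable
# (measured knowledge) as a function of the independent variable (the
# standardized test used). All names and numbers are hypothetical.

def measured_knowledge(test_format: str, true_ability: float) -> float:
    """Return the score a test of the given format would report for a
    student with the given underlying ability (0-100 scale)."""
    # Hypothetical assumption: a multiple-choice test captures less of a
    # student's actual ability than a mixed-format assessment would.
    capture_rate = {"multiple_choice": 0.7, "mixed_format": 0.9}
    return true_ability * capture_rate[test_format]

# The same underlying ability yields different measured results
# depending on which independent variable (test format) is in use.
print(measured_knowledge("multiple_choice", 80.0))  # lower reported score
print(measured_knowledge("mixed_format", 80.0))     # higher reported score
```

The point of the sketch is only that changing the independent variable changes the measurement, even when the student's actual knowledge is unchanged.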
Much of the research on educational attainment has been guided by three questions: a) To what degree does achievement depend on factors that are not under an individual’s control? b) What are the social and psychological mechanisms of this dependence? c) To what extent do ability, aspiration, and effort depend on factors other than the individual’s experiences and past achievements? (Entwisle, 1988). A study was completed in 18 states to determine whether testing programs were affecting student learning. In all but one case, the findings suggest that student learning is indeterminate, remains at the same level, or decreases with the implementation of testing. Several states already administer tests that can have a meaningful impact on school assessment and funding.
“What Do Test Scores in Texas Tell Us?” (Klein, Hamilton, McCaffrey & Stecher, 2000) raises serious questions about the validity of gains in reported scores. The paper also cautions about the danger of making decisions to sanction or reward students, teachers, and schools on the basis of test scores that may be inflated or misleading. Schools and districts use results from standardized testing as a tool to decide where more attention should be directed. Multiple-choice tests, the norm in standardized testing, are a poor yardstick of student performance. They do not measure the ability to write, to use math, to make meaning from text when reading, to understand scientific methods or reasoning, or to grasp social science concepts. Nor do these tests adequately measure thinking skills or evaluate what people can do on real-world tasks.
Standardized, multiple-choice tests were not originally designed to provide help to teachers. Classroom surveys show that teachers do not find scores from standardized tests very helpful, so they rarely use them. The tests do not provide information that can help a teacher understand what to do next in working with a student because the tests do not indicate how the student learns or thinks (Fairtest.org, 2006).
Samples were drawn from student populations who took the TAKS test (Texas Assessment of Knowledge and Skills, formerly known as the Texas Assessment of Academic Skills) and AIMS (Arizona’s Instrument to Measure Standards) from 2000 to 2006.
Background and Research Approach
As previously stated, standardized testing has become the archetype of public education in the United States, and the scores have become generally accepted as a valid measure of educational accomplishment. Most states started their own testing in response to the law signed by President George W. Bush known as “No Child Left Behind.”
Recognizing the universal importance of education, the federal government assumed a larger role in financing public schools with the passage of the Elementary and Secondary Education Act (ESEA) in 1965. Through later reauthorizations, ESEA has continued to assist the states. In 2001, the reauthorization included No Child Left Behind, which asks the states to set standards for student performance and teacher quality. The law establishes accountability for results and improves the inclusiveness and fairness of American education. No Child Left Behind is the 21st-century iteration of this first major federal foray into education policy – an area that is still primarily a state and local function, as conceived by our Founding Fathers (No Child Left Behind, 2006).
No Child Left Behind ensures accountability and flexibility as well as increased federal support for education. African American, Hispanic, special education, limited English proficient, and other students were left behind because schools were not held accountable for their individual progress. Under No Child Left Behind, every state is required to set standards for grade-level achievement and develop a system to measure the progress of all students and subgroups of students in meeting those state-determined grade-level standards (NCLB).
Sample data from Arizona and Texas schools were compared, and the United States Department of Education website provided additional information.
AIMS (Arizona’s Instrument to Measure Standards)
In Arizona, the state conducted testing prior to implementation of the AIMS test. The Arizona Board of Education started tracking statistical data with a pre-AIMS test in 2001 and continued to track the data when the AIMS test was passed into law (Arizona Department of Education, 2006).
In 1996 the Arizona legislature passed a law reflecting strong public demand for an objective measure to ensure that students with diplomas have the proficiencies expected of high school graduates. Newly adopted legislation relates to the graduation requirements of students with Individualized Education Programs (IEPs) or 504 Plans (referring to Section 504 of the Rehabilitation Act and the Americans with Disabilities Act, which specify that no one with a disability can be excluded from participating in federally funded programs or activities, including elementary, secondary, or postsecondary schooling). According to this amendment, students with IEPs or 504 Plans shall not be required to obtain passing scores on competency tests in order to graduate from high school unless a passing score on a competency test is specifically required in a specific academic area by the student’s IEP or 504 Plan (Arizona Department of Education, 2006).
The first analysis of Arizona scores reviewed here is a report on the Class of 2002, the first complete examination of the AIMS high school passage rate. The second analysis compares the percentage of students meeting or exceeding the standard between 2001 and 2002. The third compares the percentage of students meeting or exceeding the standard over a two-year period (2000 to 2002). The data shown in this report for 2002 have been modified to adjust for English Language Learners (ELL) to maintain consistency with the 2000 and 2001 data. In addition, the meeting-or-exceeding category in high school writing includes students who completed the requirement by obtaining a “meets” rating (an average trait score of 4 or more) on the extended writing portion of the assessment plus an “approaches” scale score overall (Arizona).
Over the course of three years, approximately 88% of the Class of 2002 met or exceeded the standard on the high school reading exam, and 73% met or exceeded the standard on the high school writing exam. In mathematics, only the results for 2001 and 2002 are shown because the 2000 AIMS high school mathematics assessment did not concentrate on core mathematics skills and is not comparable in content to the 2001 and 2002 assessments. The progress of the first high school cohort in mathematics will not be complete until after the 2003 assessment (Arizona Department of Education, 2006).
The average scale scores for reading across all grade levels tested show little change for grades 3, 5, and 8 over the three years of 2000, 2001, and 2002. For grade 10 there is a decline from year to year, while grades 11 and 12 show little change. An increase from year to year for grade 3 reading was noted, with little change for grades 5 and 8 and a decline from year to year for grade 10. Other than the increase at grade 11 from 2001 to 2002, the grade 11 and 12 means are similar from year to year. Within three years, 88% of the graduating Class of 2002 met or exceeded the standards in reading, and 73% met or exceeded the standards in writing (Arizona).
In elementary schools, the percentage of students meeting or exceeding the reading standard increased or remained stable at all grade levels from 2001 to 2002, with the largest increase (3 points) at fifth grade. Over two years (2000-2002), the percentage of students meeting or exceeding the standards in reading declined by 7 points at fifth grade. The percentage of students meeting or exceeding the writing standard increased at all grade levels from 2001 to 2002, with the largest increase (5 points) at fifth grade. Over two years (2000-2002), the percentage meeting or exceeding the standards in writing declined by 2 points at third grade and 5 points at eighth grade. For mathematics, the percentage of students meeting or exceeding the standard increased at all grade levels from 2001 to 2002, with the largest increase (6 points) at third grade. Over two years (2000-2002), the percentage meeting or exceeding the standards in mathematics increased by 10 points at third grade and 11 points at fifth grade (Arizona).
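The AIMS gains and declines above appear to be differences in percentage points, which are not the same as relative percent changes. A minimal sketch of the distinction, using hypothetical pass rates (the underlying 2000 and 2002 rates are not given in the text):

```python
# Percentage-point change versus relative percent change: the two ways
# a shift in pass rates can be reported. All rates here are hypothetical.

def point_change(rate_start: float, rate_end: float) -> float:
    """Absolute change in percentage points."""
    return rate_end - rate_start

def percent_change(rate_start: float, rate_end: float) -> float:
    """Relative change, as a percent of the starting rate."""
    return (rate_end - rate_start) / rate_start * 100.0

# A pass rate rising from 50% to 60% is a 10-point gain,
# but a 20% relative increase.
print(point_change(50.0, 60.0))
print(percent_change(50.0, 60.0))
```

Reports that mix the two measures can make the same underlying change look larger or smaller, which is why the distinction matters when comparing year-over-year results.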
Texas TAKS (Texas Assessment of Knowledge and Skills)
In 2005, TAKS results specific to the Dallas Independent School District were as follows: in grade 3, an overall gain of 2.2 points was achieved in Reading, but the District lost 3.1 points in Mathematics. The African American and Economically Disadvantaged subgroups received the lowest test scores in Mathematics, while Whites experienced the lowest scores in Reading. A summary for all grades in DISD indicates that the Writing portion of the TAKS test was the most troublesome, with a cumulative loss of 2.7 points. Losses were experienced in Writing among all subgroups, with the exception of Hispanic students and a 0.6-point gain from the Economically Disadvantaged group. Overall, DISD experienced a gain of 5.9 points in Reading, a gain of 2.7 points in Mathematics, a loss of 2.7 points in Writing, a gain of 4.9 points in Science, and a gain of 3.2 points in Social Studies (Dallas Independent School District, 2005).
Texas schools are not making the grade, according to a report released by the Texas Education Agency last week. The number of schools considered academically unacceptable increased from 95 in 2004 to 364 in 2005, and less than a quarter of the state’s districts ranked above the minimum rating of academically acceptable. Debbie Ratcliffe, communications director of TEA, said the results of the report do not necessarily mean Texas students are doing worse in classrooms; instead, they show that the TAKS test has become more difficult (Yeh, 2005).
A school can be deemed unacceptable for not having enough students passing just one section of the five-part test. Austin ISD Superintendent Pat Forgione said in a written statement that students need tutoring and additional help with reading and math. “All of these additional supports require funding,” he said. “Unfortunately, adequate funds are not currently available.” (Yeh).
Funding is often cited by AISD officials as the major contributor to student failure. Ratcliffe said funding is not the only way to improve schools: teachers need more training in science and math, and instructors with more experience in the two fields may need to be assigned to schools that need them (Yeh, 2005).
According to a report published by the National Center for Education Statistics, the Austin Independent School District, Houston Independent School District, Dallas Independent School District, and Fort Worth Independent School District are among the 100 largest districts in the United States. The TEA report also showed that Dallas ISD had 22 unacceptable schools, Houston ISD had five, and Fort Worth ISD had three (Yeh, 2005).
In May 2006, the US Department of Education granted Texas (among other states) a flexibility waiver to accommodate districts and campuses that served students displaced by Hurricanes Katrina or Rita, and to address school districts that were forced to suspend classes for an extended period because of Hurricane Rita. Texas will be required to create a Hurricane Katrina/Rita student group that will not be included in any of the other student group categories (Texas Education Agency, 2006).
A state-sponsored analysis of 2005 Texas Assessment of Knowledge and Skills scores revealed what are being called anomalies in test scores. The Lewisville, Carroll, Coppell, and Carrollton-Farmers Branch Independent School Districts each had schools flagged by the independent analysis group for having suspect scores associated with testing irregularities. The Texas Education Agency sent a letter to districts on May 31 alerting them to characteristics that would make scores suspect and asking schools to review the information and conduct any investigations deemed necessary. The four districts’ schools mentioned above were noted for unusually high gains. This could be explained in many ways, as the TEA letter states, but some believe it indicates that systematic cheating within the school system is responsible for the gains (Benton, 2006).
Critics say the possible cheating scandal, and the idea of educators willing to go to such lengths to raise their schools’ scores, is further proof that standardized testing does not work. Supporters say that instances of cheating on such tests are scarce and can be found in every profession. Regardless, cheating on standardized tests has been making the news with increasing frequency. From Boston to Florida to California, school districts have been investigating claims that teachers are providing students with answers, changing answers after the test is over, and giving students additional time to complete the test (Benton, 2006).
Earlier this month, an Indiana third-grade teacher was suspended after being accused of tapping students on the shoulder when they marked answers incorrectly – the state’s third incident in as many years. In September, Mississippi threw out portions of test scores at nine schools after discovering more than two dozen instances of cheating. One fifth-grade teacher was fired after helping students on the writing portion of the test. In July, nine Arizona school districts invalidated portions of their test scores after teachers allegedly either read sections of the test to students or gave students additional time to finish. It was Arizona’s 21st case of cheating since 2002 (Benton).
The problem, say many education experts, is that the tests have been tied to teachers’ job contracts and bonuses, and the pressure to meet or exceed standards is excessive, creating a potential personal and professional financial dilemma. Under the No Child Left Behind Act, states have 12 years to bring children up to academic proficiency or lose federal funding. The new regulations have had the worst impact on minority schools, many of which are considered low performing; under pressure to get their scores up, these schools were the first to dump traditional curriculum and do test preparation almost exclusively (Benton, 2006).
During the last decade and in the wake of the No Child Left Behind legislation, standards, assessments, and accountability have emerged as three prongs of a national education reform movement that has asked district and school administrators to think very differently about educational decision-making and the use of data from standardized tests. Using a mix of qualitative and quantitative methodologies, a two-year study by the Education Development Center included three phases. Phase One focused on understanding the ways in which school district personnel, along with superintendents and their educational teams, considered data for use in decision-making. Phase Two emphasized ethnographic research in 15 schools across four school districts in New York City that represented various neighborhoods, student populations, and overall performance levels. Phase Three involved the development and administration of two surveys across the New York City public school system that asked teachers and administrators how they interpret data (Light, et al., 2005).
Teachers are open to data from standardized tests but are also cautious about their use. Rather than accepting an interpretation of their students’ strengths and weaknesses based on a single test, they rely on multiple data sources (impressionistic, anecdotal, and experiential) accrued over the long term and based on many experiences with their students to make most teaching decisions (Light, et al.).
No matter how teachers viewed mandated, standardized testing, they recognized that part of their job is to prepare students to take the test. In interviews and survey responses, teachers expressed concern about the issue of accountability and its impact on instruction, with the majority of teachers feeling that the tests lead them to teach in ways that contradict their own beliefs about what is good teaching. The majority of teachers also questioned the tests’ accuracy in measuring students’ academic abilities and the life skills that students need to succeed in school and beyond (Light, et al., 2005).
Administrators’ attitudes about standardized testing are not markedly different from teachers’. Administrators clearly feel pressure to improve test scores, and the growing accountability culture has influenced their schools and their own decisions. With its adoption extended by No Child Left Behind, standardized testing is being used not only to directly measure students’ academic progress but also to indirectly evaluate administrators’ leadership abilities (Light, et al.).
Administrators likewise have reservations about what the tests are measuring. They questioned the validity and reliability of the tests, and the majority do not consider a test as accurate as a teacher’s judgment of what students know and are able to do. Administrators were divided over most other issues related to state-mandated tests, such as whether the tests are an accurate measure of what students know, or whether test pressure narrows the scope of the curriculum. While many administrators expressed a desire to assess student progress from different angles, the great majority reported that, in order to prepare students for the test, they encourage teachers to “teach the students test-taking skills” (Light, et al.).
While policymakers have embraced the concept that a single assessment can measure students, educators in this study concede that testing conveys only a piece of what teachers should know about assessing the complex repertoire of skills and talents that children need to succeed. Here lies the gap between what policymakers and teachers see as important (Light, et al., 2005).
Standardized test data do not reflect the level of learning a student has acquired throughout the school year, but rather the level of learning the student has acquired while studying for the standardized tests. Other measures, including aptitude, socio-economic factors, and the learning environment (both at school and at home), complete the whole of a student’s achievement, and in some ways are not assessable. Educators should not rely on standardized testing alone to measure a student’s growth, accomplishments, and knowledge.