Wednesday, November 30, 2011

The Nation's Report Card: NAEP

The National Assessment of Educational Progress (NAEP) is known as “The Nation’s Report Card.” It is a national test that periodically assesses what American students know and can do in subjects such as reading, mathematics, science, writing, the arts, civics, economics, geography, and U.S. history.

Tests are given at the national and state levels. The first assessment was administered in 1969, and in 1990 voluntary state assessments began on a two-year cycle. In 2002, the Trial Urban District Assessment (TUDA) began in selected urban districts. Schools must agree to participate in the reading and mathematics assessments in order to receive Title I funding. NAEP is run by the National Assessment Governing Board, an independent, bipartisan group appointed by the Department of Education. The board awards contracts to outside contractors that handle the main operations of NAEP.

NAEP is designed to give a general picture of the levels of knowledge and skill among students nationwide or in a particular state. Only a small sample of students is tested, and no student takes the entire test. Scores for individual students and schools are not released. NAEP reports results on subject-matter achievement, instructional experience, and school environment for populations of students. Results are reported for fourth, eighth, and twelfth graders, and groups may be further broken down by sex or race. Results are based on representative samples rather than on every student in a state.

The results reported in November 2011 show that the average fourth-grade reading score was unchanged from 2009 but 4 points higher than in 1992. The average eighth-grade score was 1 point higher than in 2009 and 5 points higher than in 1992. At grade eight, a higher percentage of students scored at or above proficient than in 2009 and 1992. Nationally representative samples of fourth and eighth graders participated in the 2011 NAEP. At the state level, only two states, Alaska and Maryland, made improvements; all other states showed no significant improvement over 2009.

In addition to highlighting areas of academic weakness and discrepancies in state reporting, NAEP provides a database on educational performance and student background. It includes data on teacher qualifications, socioeconomic status, computer usage, hours spent watching television, reading habits, and other demographic and school information. Educational reformers can use this information to identify factors that affect achievement.

One of the main concerns about NAEP is the wide discrepancy between NAEP results and student performance on tests given by individual states. The biggest difference involves the number of students reported to be proficient: in most reports, state tests indicate that more students are proficient than NAEP does, with gaps of up to 60 percentage points in some cases. For example, 80% of a state's students may be proficient by state standards but only 30% according to NAEP.

Proponents of NAEP claim that these differences show that some states are making it too easy to be rated proficient. Critics counter that NAEP’s proficiency standards are too high and out of line with actual student performance. The National Academy of Sciences, the Government Accountability Office, the Center for Research on Evaluation, Standards and Student Testing, and the National Academy of Education have studied the assessments and described the achievement levels reported by NAEP as “fundamentally flawed.” The National Academy of Sciences study points to judgment tasks that are difficult and confusing, raters’ inconsistent judgments of different item types, a lack of validity evidence for the cut scores, and unreasonable results. The critics’ biggest concern is that each state sets its own proficiency standard; there is no national standard. NAEP has a level named “proficient,” but it does not align with any state’s. While proponents of NAEP suggest that states are misrepresenting their students’ performance, critics claim the test is unfair and is being used as a tool to hold over the heads of the states.

Aside from the possibly flawed assessment of results, state standards often differ from the standards used by NAEP. Topics covered by the test may be taught at different grade levels in different states, so scores may not accurately reflect student achievement. Motivation is another factor, especially in high school: state tests must be passed for graduation, but there are no consequences attached to NAEP. Additional concerns include the test’s inability to provide reliable short-term indicators of progress because of its margin of error. Linking federal funds to test results could push states toward a national curriculum and away from curricula developed by individual districts. Although NAEP is nominally independent of the Department of Education, its operations remain within the department, and contractors are selected by the commissioner of education statistics. The department’s involvement raises the risk of politicization; to avoid this, the National Assessment Governing Board should be given authority over NAEP operations.

The NAEP provides a valuable source of state and national achievement data. Correctly implemented, it can assess how the nation’s students are doing. Its limitations must be acknowledged, however, so that it is not given too much weight in lawmakers’ decision-making.
