Placement testing

Placement testing refers to the tests that colleges and universities use to assess college readiness and place students into their initial classes. Since most two-year colleges have open, non-competitive admissions policies, many students are admitted without college-level academic qualifications. The tests primarily assess abilities in English, math and reading, and sometimes in other disciplines such as foreign languages, science, computer skills and health. The goal is to direct low-scoring students into remedial coursework that prepares them for regular coursework.[1] The most common tests are the College Board’s ACCUPLACER and ACT’s COMPASS, both of which are online, computer-adaptive, multiple-choice tests. Some colleges add computer-scored essay writing tests, including ACCUPLACER’s WritePlacer and COMPASS’s e-Write.

Less-prepared students are placed into various remedial situations, ranging from Adult Basic Education through various levels of developmental college courses.

Historically, placement tests also served additional purposes such as providing individual instructors a prediction of each student’s likely academic success, sorting students into homogeneous skill groups within the same course level and introducing students to course material. Placement testing can also serve a gatekeeper function, keeping academically challenged students from progressing into college programs, particularly in competitive admissions programs such as nursing within otherwise open-entry colleges.

Test Validity

In the construction of a test, subject matter experts (SMEs) construct questions that assess skills typically required of students for that content area. "Cut scores" are the minimum scores used to divide students into higher and lower level courses. SMEs sort test items into categories of appropriate difficulty, or correlate item difficulty to course levels. "Performance Level Descriptors" define the required skills for remedial and standard courses.[2]
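The cut-score mechanism described above can be sketched as a simple lookup. The score bands and course names below are hypothetical, for illustration only; they are not the cut scores of any actual test.

```python
def place_student(score, cut_scores):
    """Return the course placement for a raw test score.

    cut_scores is a list of (minimum_score, course_name) pairs,
    ordered from highest to lowest cut score.
    """
    for minimum, course in cut_scores:
        if score >= minimum:
            return course
    return cut_scores[-1][1]  # below every cut: lowest-level course

# Hypothetical cut scores for a math placement test (illustrative only).
math_cuts = [
    (85, "College Algebra"),
    (60, "Intermediate Algebra (developmental)"),
    (0,  "Arithmetic (developmental)"),
]

print(place_student(90, math_cuts))  # College Algebra
print(place_student(72, math_cuts))  # Intermediate Algebra (developmental)
```

In practice the cut scores themselves are the contested part: they are set by subject matter experts and validated against course outcomes, as described below.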

Once in use, placement tests are assessed for the degree to which they predict the achievements of students once they have been assigned to remedial or standard classes. Since grades serve as a common indirect measure of student learning, the customary analysis runs a binary logistic regression using the test score as the independent variable and course grade as the dependent variable. Typically, grades of A, B or C are counted as successful, while grades of D and F are counted as unsuccessful. Grades of I (for an unconverted Incomplete) and W (a Withdrawal) may be considered unsuccessful or may be excluded from the analysis. In practice, there is usually a clear positive relationship between student placement test scores and initial course grades. However, placement tests usually predict less than 10% of college course success variance.[citation needed]
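The customary validity analysis can be sketched as follows. The scores and pass/fail outcomes here are fabricated for illustration, and the hand-rolled gradient-ascent fit is a minimal stand-in for a statistics package; a real study would use institutional data and a library such as statsmodels.

```python
import numpy as np

# Illustrative data: placement scores and whether the student later
# earned an A, B, or C (1) versus D or F (0) in the placed course.
scores = np.array([30, 45, 50, 55, 60, 65, 70, 75, 80, 90], dtype=float)
passed = np.array([0, 0, 1, 0, 1, 0, 1, 1, 1, 1], dtype=float)

# Fit P(pass) = sigmoid(b0 + b1 * score) by gradient ascent on the
# mean log-likelihood of the binary outcomes.
x = (scores - scores.mean()) / scores.std()  # standardize for stability
b0 = b1 = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += 0.1 * np.mean(passed - p)
    b1 += 0.1 * np.mean((passed - p) * x)

# A positive slope means higher placement scores predict course success.
print(f"slope on standardized score: {b1:.2f}")
```

Even with a clearly positive slope, the fitted model typically leaves most of the variance in course outcomes unexplained, which is the point made above.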

Test items typically do not match the tasks students face in the classroom (they lack "face" validity). Instead, the tests offer multiple choice questions and extemporaneous essays.[citation needed]

Test scores are interpreted based on a proposed use and assessed in that context, rather than simply by establishing a predictive relationship between scores and grades. Since placement tests are designed to predict student learning in college courses, by extension they predict the need for developmental education. However, the efficacy of developmental education has been questioned in recent research studies, such as those by Bettinger and Long;[3] Calcagno and Long;[4] Martorell and McFarlin[5] and Attewell, Lavin, Domina and Levey.[6]

If placement tests are designed to measure a student’s ability to learn at a given college level, a correlation between test and course success may not be sufficient to establish the test as a valid measure: if students systematically cheat on both the test and the course, for example, the two measures will correlate without either reflecting learning. Such critiques are less about testing than about the validity of the educational enterprise itself.

One study found that one-quarter of students assigned to math remediation and one-third of students assigned to English remediation in the US would have passed regular university courses with a grade of at least a B without any additional support.[7]

The Placement Testing Process

Upon enrollment, a student will be recommended or required to take placement tests, usually in English or writing, in math, and in reading. With ACCUPLACER these will most likely be Arithmetic, Elementary Algebra, Reading Comprehension and Sentence Skills; with COMPASS they will probably include Math, Reading Skills and Writing Skills. Testing may also include a computer-scored essay or an English-as-a-second-language assessment. Some colleges use ASSET, ACT’s paper-and-pencil test. Students with disabilities may take an accommodated version, such as an audio or braille format, compliant with the Americans with Disabilities Act (ADA).

Advisors interpret the scores and discuss course placement with the student. As a result of the placement, students may take multiple developmental courses before qualifying for college level courses. Students with the most developmental courses have the lowest odds of completing the developmental sequence or passing gatekeeper college courses such as Expository Writing or College Algebra.[8] Adelman has shown that this is not necessarily a result of developmental education itself.[9]

Student acceptance

Throughout the history of placement testing and enrollment practices, the pendulum has swung slowly back and forth between more and less prescriptive practices. If students are not required to take placement tests, they tend to avoid them. If they are not required to immediately enroll in the developmental classes they’ve placed into, they will often delay or avoid taking those as well. The validity of studies examining placement testing and developmental courses will necessarily suffer to the extent that students avoid testing and the subsequent course placements.[citation needed]

Beyond avoidance, many students do not understand the high stakes nature of placement testing. Lack of preparation is also cited as a problem. According to a study by Rosenbaum, Schuetz and Foran, roughly three quarters of students surveyed say that they did not prepare for the tests.[10]

Many colleges supply their students with study guides and practice tests, and a small but growing practice is to require online or face to face review sessions before allowing students to test, or retest.[citation needed]

Once students receive their placement, they may, or in some cases must, begin taking developmental classes as prerequisites to credit-bearing, college-level classes that count toward their degree. Most students are unaware that developmental courses do not count toward a degree.[11] Some institutions prevent students from taking college-level classes until they finish their developmental sequence(s), while others apply course prerequisites. For example, a psychology course may carry a reading prerequisite, so that a student placing into developmental reading may not sign up for psychology until completing the developmental reading requirement.

Federal Student Aid programs pay for up to 30 hours of developmental coursework. Under some placement regimens and at some community colleges, low-scoring students may require more than 30 hours of such classes.

History

Placement testing has its roots in remedial education, which has always been part of American higher education. Informal assessments were given at Harvard as early as the mid-1600s in the subject of Latin. The Massachusetts Law of 1647, also known as the “Old Deluder Satan Law,” called for grammar schools to be set up with the purpose of “being able to instruct youth so far as they shall be fitted for the university.”[12] Predictably, many incoming students lacked sufficient fluency with Latin and got by with the help of tutors who had graduated as early as 1642.[13]

According to John Willson,[14]

“The chief function of the placement examination is prognosis. It is expected to yield results which will enable the administrator to predict with fair accuracy the character of work which a given individual is likely to do. It should afford a reasonable basis for sectioning a class into homogeneous groups in each of which all individuals would be expected to make somewhat the same progress. It should afford the instructor a useful device for establishing academic relations with his class at the first meeting of the group. It should indicate to the student something of the preparation he is assumed to have made for the work upon which he is entering and introduce him to the nature of the material of the course.”

Historically, the view that colleges can remediate abilities that may be lacking was not universal. Hammond and Stoddard [15] wrote in 1928 that “Since, as has been amply demonstrated, scholastic ability is, in general, a quite permanent quality, any instrument that measures factors contributing to success in the freshman year will also be indicative of success in later years of the curriculum.”

Entrance examinations began with the purpose of predicting college grades by assessing general achievement or intelligence. In 1914 T. L. Kelley published the results of his course-specific high school examinations designed to predict “the capacity of the student to carry a prospective high school course.”[16] The courses were algebra, English, geometry and history, with correlations ranging from r = .31 (history) to r = .44 (English).

Placement testing within the broad category of entrance assessments has long been coupled with remedial education as a solution for the phenomenon of students who do not meet the academic expectations of college officials. In 1849 the University of Wisconsin established the country’s first in-house preparatory department. Late in the century, Harvard introduced a mandatory expository writing course, and by the end of the 19th century, most colleges and universities had instituted both preparatory departments and mandatory expository writing programs.

Entrance examinations and the College Entrance Examination Board (now the College Board) allowed colleges and universities to formalize entrance requirements and shift the burden of remedial education to junior colleges in the early 20th century and later to community and technical colleges.[17]

Policies

Placement testing policies may include a host of related areas. Some experts consider testing requirements to be important because, as community college and student engagement expert Kay McClenney puts it, “Students don’t do optional.”[citation needed]

Required placement testing and remediation were not always considered desirable. According to Robert McCabe, former president of Miami-Dade Community College, at one time “community colleges embraced a completely open policy. They believed that students know best what they could and could not do and that no barriers should restrict them....This openness, however, came with a price....By the early 1970s, it became apparent that this unrestricted approach was a failure.”[18]

The push toward mandatory policies gathered momentum more recently. In 2002, 5 states had statewide standard placement test cut scores. By 2009, that number had jumped to 20. In 2002, 17 states had statewide remedial placement policies. By 2005, that number had risen to 24.[citation needed]

Alternatives

Testing other elements of student ability

Conley recommends adding assessments of contextual skills and awareness, academic behaviors, and key cognitive strategies to the traditional math, reading and writing tests.[1] Boylan proposes examining affective factors such as “motivation, attitudes toward learning, autonomy, or anxiety.”[19] Other typical non-cognitive factors include students’ educational expectations, their feelings of self-efficacy, and even social and financial support.[citation needed] High school GPA is a possible proxy for such measures.[citation needed]

Alternative test formats

An important characteristic of traditional placement tests is the predominance of the multiple choice question, which may reduce the value of testing as an indicator of overall performance.[citation needed]

In 1988, Ward predicted that computer adaptive testing would evolve to cover more advanced and varied item types, including simulations of problem situations, assessments of conceptual understanding, textual responses and essays.[20]:6-8 Tests now being developed incorporate conceptual questions in multiple choice format (for example by presenting a student with a problem and the correct answer and then asking why that answer is correct), and computer-scored essays such as e-Write and WritePlacer. Computer-scored essays have proven to be as statistically valid and reliable as expert-scored essays.[citation needed]
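The adaptive mechanism behind such tests can be illustrated with a toy item-selection loop. The item bank, difficulty scale, and halving step-size update below are hypothetical simplifications; real tests use item-response-theory models with maximum-likelihood ability estimation.

```python
def adaptive_test(items, answer_fn, rounds=5):
    """Toy computer-adaptive test.

    items: dict mapping item id to difficulty on an arbitrary scale.
    answer_fn: callable(item_id) -> True if answered correctly.
    Repeatedly administers the unused item whose difficulty is closest
    to the current ability estimate, then nudges the estimate up or
    down depending on the response (a crude stand-in for IRT scoring).
    """
    ability = 0.0
    step = 2.0
    unused = dict(items)
    for _ in range(min(rounds, len(unused))):
        item = min(unused, key=lambda i: abs(unused[i] - ability))
        unused.pop(item)
        ability += step if answer_fn(item) else -step
        step /= 2  # shrink adjustments as the estimate stabilizes
    return ability

# Simulated examinee with true ability 1.5: answers correctly whenever
# the item's difficulty is at or below that level.
bank = {"q1": -3, "q2": -1, "q3": 0, "q4": 1, "q5": 2, "q6": 3}
print(adaptive_test(bank, lambda i: bank[i] <= 1.5))  # 1.375
```

The estimate homes in on the simulated ability after a handful of items, which is why adaptive tests can be shorter than fixed-form tests of comparable precision.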

In a Request for Information on a centralized assessment system, the California Community Colleges System asked for “questions that require students to type in responses (e.g. a mathematical equation)” and for questions where “Students can annotate/highlight on the screen in the reading test.”[21] Some massive open online courses, such as those run by Udacity, automatically assess user-written computer code for correctness.[22]
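Automated assessment of student code, as in those courses, can be reduced to running the submission against instructor-written checks. This sketch is illustrative only (the function and submission are hypothetical), and real autograders add sandboxing, timeouts, and resource limits that are omitted here.

```python
def grade_submission(source_code, checks):
    """Execute student source code, then run (description, test_fn) checks.

    Each test_fn receives the student's namespace and returns a truthy
    value on success; exceptions count as failures.
    """
    namespace = {}
    exec(source_code, namespace)  # define the student's functions
    results = []
    for description, test_fn in checks:
        try:
            passed = bool(test_fn(namespace))
        except Exception:
            passed = False
        results.append((description, passed))
    return results

# A hypothetical submission and its checks (illustrative only).
student = "def square(n):\n    return n * n\n"
checks = [
    ("square(3) == 9", lambda ns: ns["square"](3) == 9),
    ("square(-2) == 4", lambda ns: ns["square"](-2) == 4),
]
print(grade_submission(student, checks))
# [('square(3) == 9', True), ('square(-2) == 4', True)]
```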

Diagnostic Placement Testing

Placement testing focuses on a holistic score to decide placement into various levels, but is not designed for more specific diagnoses. Increasing diagnostic precision could involve changes to both scoring and test design, along with better-targeted remediation programs in which students focus on areas of demonstrated weakness within a broader subject.[citation needed]

“The ideal diagnostic test would incorporate a theory of knowledge and a theory of instruction. The theory of knowledge would identify the student's skills and the theory of instruction would suggest remedies for the student's weaknesses. Moreover, the test would be, in a different sense of the word from what we have previously employed, adaptive. That is, it would not subject students to detailed examinations of skills in which they have acceptable overall competence or in which a student has important strengths and weaknesses—areas where an overall score is not an adequate representation of the individual’s status.”[23]

Various test preparation methods have shown effectiveness: test-taking tips and training, familiarity with the answer-sheet format, and strategies that mitigate test anxiety.[24]

Some studies offer partial support for the test publishers' claims. For example, several studies concluded that for admissions tests, coaching produces only modest, if statistically significant, score gains.[25][26] Other studies, and claims by companies in the test preparation business, were more positive.[27] Other research has shown that students score higher with tutoring, with practice using cognitive and metacognitive strategies, and under certain test parameters, such as when allowed to review answers before final submission, something most computer-adaptive tests do not allow.[28][29][30]

Other research indicates that reviewing for placement tests may raise scores by helping students to become comfortable with the test format and item types. It also might serve to refresh skills that have simply grown rusty. Placement tests often involve subjects and skills that students haven’t studied since elementary or middle school, and for older adults there may be many years between high school and college. In addition, students who attach a consequence to test results and therefore take placement tests more seriously are likely to achieve higher scores.[31]

Community college administrators regard test preparation as a critical aid in boosting the accuracy of the placement test, thereby helping students to avoid unnecessary remediation.[citation needed] Test review increases scores for students who retest[citation needed] and helps students place out of one or more remedial levels,[citation needed] without undermining the academic performance of those students who advance through retesting. The impact of test preparation before initial placement testing is less clear.

According to a 2010 California community college study, about 56% of colleges did not provide practice placement tests, and for those that did, many students were not made aware of them. In addition, their students “did not think they should prepare, or thought that preparation would not change their placement.” [32]

By 2011, at least three state community college systems (California, Florida, and North Carolina) had asked publishers to bid to create customized placement tests with integrated test reviews and practice tests. Meanwhile, some individual colleges have created online review courses complete with instructional videos and practice tests.

Simulations

In “Using Microcomputers for Adaptive Testing,” Ward predicted the computerization of branching simulation problems, such as those used in professional licensing exams.[20]

Secondary/tertiary alignment

Since placement testing is done to measure college readiness, and high schools in part prepare students for college, it follows that K-12 and higher education curricula should be aligned. Such alignment could take many forms, including K-12 changes, collegiate changes, or collaboration between the two levels. Various efforts to improve education may undertake this challenge, such as the national K-12 Common Core standards, the Smarter Balanced Assessment Consortium (SBAC), or the Partnership for Assessment of Readiness for College and Careers (PARCC).

As of 2012, such alignment had not progressed to the point of close coordination of curriculum, assessments, or learning methodologies between public school systems and systems of higher education.

References

  1. ^ a b Conley, David. “Replacing Remediation with Readiness” (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  2. ^ Morgan, Deanna. “Best Practices for Setting Placement Cut Scores in Postsecondary Education” (working paper). Prepared for the NCPR Developmental Education Conference: What Policies and Practices Work for Students? September 23–24, 2010, Teachers College, Columbia University, p. 12.
  3. ^ Bettinger, E., and Long, B. T. “Remediation at the Community College: Student Participation and Outcomes.” In C. A. Kozeracki (ed.), ‘‘Responding to the Challenges of Developmental Education.’’ New Directions for Community Colleges, no. 129. San Francisco: Jossey-Bass, 2005.
  4. ^ Calcagno, J. C., and Long, B. T. “The Impact of Postsecondary Remediation Using a Regression Discontinuity Approach: Addressing Endogenous Sorting and Noncompliance.” New York: National Center for Postsecondary Research, 2008.
  5. ^ Martorell, P., and McFarlin, I. “Help or Hindrance? The Effects of College Remediation on Academic and Labor Market Outcomes.” Dallas: University of Texas at Dallas, 2007.
  6. ^ Attewell, P., Lavin, D., Domina, T., and Levey, T. “New Evidence on College Remediation.” Journal of Higher Education, 2006, 77(5), pp. 886–924.
  7. ^ Judith Scott-Clayton (20 April 2012). "Are College Entrants Overdiagnosed as Underprepared?". NYTimes.com. Retrieved 2012-04-24. 
  8. ^ Bailey, T., Jeong, D. W., & Cho, S. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29, 255-270.
  9. ^ Adelman, Clifford (2006). “The toolbox revisited: Paths to degree completion from high school through college.” U.S. Department of Education. [1]
  10. ^ Rosenbaum, James E., Schuetz, Pam & Foran, Amy. “How students make college plans and ways schools and colleges could help.” (working paper, Institute for Policy Research, Northwestern University, July 15, 2010).
  11. ^ Rosenbaum, J., Deil-Amen, R., & Person, A. (2006). After admission: From college access to college success. New York: Russell Sage Foundation.
  12. ^ Massachusetts Trial Court Law Libraries http://www.lawlib.state.ma.us/docs/DeluderSatan.pdf .
  13. ^ Wright, Thomas Goddard (1920). Literary culture in early New England, 1620-1730. New Haven, CT: Yale UP, Ch. 6, p. 99. http://web.archive.org/web/20051025080258/http://www.dinsdoc.com/wright-1-6.htm
  14. ^ Willson, J.M. (1931). A study of an objective placement examination for sectioning college physics classes. Thesis submitted to the faculty of the School of Mines and Metallurgy of the University of Missouri, p. 5. http://scholarsmine.mst.edu/thesis/pdf/Willson_1931_09007dcc8073add4.pdf
  15. ^ “A Study of Placement Examinations.” University of Iowa Studies in Education. Charles L. Robbins, Editor. Volume 4(7). Published by UIA, Iowa City, p. 9.
  16. ^ Kelley, T. L. Educational Guidance: An Experimental Study in the Analysis and Prediction of High School Pupils. Teachers College, Columbia University, Contributions to Education, No. 71.
  17. ^ Boylan, 1988
  18. ^ McCabe, Robert H. (2000). No One to Waste: A Report to Public Decision-Makers and Community College Leaders. Washington, DC: Community College Press, p. 42.
  19. ^ Saxon, Patrick; Levine-Brown, Patti; & Boylan, Hunter. “Affective Assessment for Developmental Students, Parts 1 & 2.” Research in Developmental Education, 22(1&2), 2008, p. 1.
  20. ^ a b Ward, William C. “Using Microcomputers for Adaptive Testing,” in Computerized adaptive testing: The state of the art in assessment at three community colleges.” League for Innovation in the Community College, Laguna Hills, CA, 1988
  21. ^ “CCCAssess Proof of Concept Report 2011: Centralizing Assessment in the California Community Colleges.” California Community Colleges Chancellor’s Office, Telecommunications and Technology Division, Sacramento, CA, 2011, pp. 30, 33.
  22. ^ "Free Online Courses. Advance your College Education & Career". Udacity. Retrieved 2012-11-22. 
  23. ^ Robb, Thomas N., & Ercanbrack, Jay. (1999). “A Study of the Effect of Direct Test preparation on the TOEIC Scores of Japanese University Students.” TESL-EJ, 3(4).
  24. ^ Perlman, Carole L. (2003). “Practice Tests and Study Guides: Do They Help? Are They Ethical? What Is Ethical Test Preparation Practice?” Measuring Up: Assessment Issues for Teachers, Counselors, and Administrators, ERIC, 12 pages.
  25. ^ Briggs, Derek C. (2001). “Are standardized test coaching programs effective? The effect of admissions test preparation: Evidence from NELS:88.” Chance, 14(1), pp. 10-21.
  26. ^ Scholes, Roberta J., & Lain, M. Margaret. (1997). “The Effects of Test Preparation Activities on ACT Assessment Scores.” Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL. March 24–28, 22 pages.
  27. ^ Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). “Shadow Education, American Style: Test Preparation, the SAT and College Enrollment.” Social Forces, 89(2), 435-461.
  28. ^ Rothman, Terri, & Henderson, Mary. (2011). “Do School-Based Tutoring Programs Significantly Improve Student Performance on Standardized Tests?” Research in Middle Level Education Online, 34 (6), p1-10.
  29. ^ Shokrpour, N., Zareii, E., Zahedi, S. S., & Rafatbakhsh, M. M. (2011). “The Impact of Cognitive and Meta-cognitive Strategies on Test Anxiety and Students' Educational Performance.” European Journal Of Social Science, 21(1), 177-188.
  30. ^ Papanastasiou, E. C. (2005). “Item Review and the Rearrangement Procedure: Its process and its results.” Educational Research And Evaluation, 11(4), 303-321.
  31. ^ Napoli, Anthony R., & Raymond, Lanette A. (2004). “How Reliable Are Our Assessment Data?: A Comparison of the Reliability of Data Produced in Graded and Un-Graded Conditions.” Research in Higher Education, 45(8), 921-929.
  32. ^ Venezia, A., Bracco, K. R., & Nodine, T. (2010). One-shot deal? Students’ perceptions of assessment and course placement in California’s community colleges. San Francisco: WestEd. http://www.wested.org/online_pubs/OneShotDeal.pdf