Assessing writing for placement into college composition
courses has traditionally been limited to one or more of a handful of
methods: multiple-choice exams (indirect assessment), timed-essay tests,
portfolio assessment (direct measures), and directed self-placement
(DSP).1 In the past, standardized tests such as
the ACT, SAT, and TOEFL included only multiple-choice questions, but now
the SAT and TOEFL include a timed-essay test as well. Indirect
assessment measures correlate poorly with writing ability, whereas
direct measures correlate more strongly; assessments of multiple
samples of student writing provide a better measure than a single
sample, though the training and expertise of the raters can affect
outcomes as well.
Huot’s (1994) nationwide survey of writing placement practices
of 1,037 public and private institutions indicates that a writing sample
is the most widely used placement method (51%), followed by
standardized test scores (42%), and a combination of a writing sample
and standardized test scores (23%). Crusan (2002) reviewed ESL writing
placement practices at 10 large public universities and found that 3
(Penn State, Purdue, and
Wisconsin) used only indirect measures, 2 (Northwestern and Ohio State)
used only direct measures, and the rest (Michigan State and the
Universities of Illinois, Iowa, Michigan, and Minnesota) used a
combination of direct and indirect measures (pp. 24–25).
The CCCC Position Statement on Writing Assessment (2006) notes the following:
Decision-makers should carefully weigh the educational costs
and benefits of timed tests, portfolios, directed self-placement, etc.
In the minds of those assessed, each of these methods implicitly
establishes its value over that of others, so the first impact is likely
to be on what students come to believe about writing. (p. 4)
While “educational costs” refers to more than financial costs,
it is worth noting that multiple-choice exams, timed essay tests, and
portfolio assessment each cost successively more, according to White
(1995) and Peckham (2009), who both estimated the expense ratio of essay
to portfolio scoring to be about 1 to 5 (Peckham estimated the cost of
scoring a single essay sample at $5 and of a writing portfolio at $25).
Other educational costs of indirect measures noted in the CCCC
Position Statement on Writing Assessment include the loss of
professional development opportunities for the teachers who would
otherwise score student writing samples; the distorted message about
writing and literacy that indirect measures send to students through
their traditional emphasis on form over content; and, in the case of
the machine scoring of essays used by many testing companies today, a
reduction in reader-writer interactions.
The Conference on College Composition and Communication (CCCC;
2009) Committee on Second Language Writing, in its Statement on
Second-Language Writing and Writers, states:
Decisions regarding the placement of second language writers
into first-year writing courses should be based on students’ writing
proficiency rather than their race, native-language background,
nationality, or immigration status. Nor should the decisions be based
solely on the scores from standardized tests of general language
proficiency or of spoken language proficiency. Instead, scores from the
direct assessment of students’ writing proficiency should be used, and
multiple writing samples should be consulted whenever possible. Writing
programs should work toward making a wide variety of placement options
available—including mainstreaming, basic writing, and second language
writing as well as courses that systematically integrate native and
nonnative speakers of English, such as cross-cultural composition
courses. (p. 3)
Both the CCCC Statement on Writing Assessment (2006) and the Statement on
Second-Language Writing and Writers (2009) recommend DSP because, in the
assessment group’s opinion, “reflection by the writer on his or her own
writing processes and performances holds particular promise as a way of
generating knowledge about writing and increasing the ability to write
successfully” (p. 2), and in the ESL group’s view, “writing programs
should inform students of the advantages and disadvantages of each
placement option so that students can make informed decisions, and
should make this opportunity available to both international and
residential second language students” (p. 3).
Nevertheless, some reservations about DSP are evident in the
professional literature. Crusan (2006) describes the arguments for and
against DSP that were voiced on her campus. Some questioned whether the
cost of tuition might influence students (and their parents) to make
unrealistic appraisals of their writing ability. Others argued that
because the training international students receive in writing may
differ from that which U.S. students receive, the students and their
U.S. university teachers might have different understandings of the
strategies and qualities that writing well entails. Crusan also cites
Zamel (1995) in
noting that, in general, international students tend to have a higher
regard for authority and their teachers, and thus may not feel it is
culturally appropriate to question placement decisions. Indeed, in a
1995 survey of ESL students’ satisfaction with their placement in
writing courses, Crusan found that 2% felt they had been placed into a
course below their ability, but none challenged the placement decision
(pp. 212–213). For these reasons, critics of DSP have argued that it may
lead all students, but especially international students, to bypass the
courses most needed for their academic success (Crusan, 2006, p.
211).
Gere, Aull, Green, and Porter (2010) analyzed a decade of data
since DSP was introduced at the University of Michigan in 1999, focusing
on different conceptions of test validity to determine if DSP was
leading to a better match between students’ writing ability and the
writing courses in which they enrolled. They note the inherent
difficulty of aligning survey questions and demographic data on the
student with course content: “The limited correlation of standardized
test scores and GPAs with DSP questions, along with the relatively
complex profiles of students with regards to these measures, suggests
the need for more nuanced ways of describing student achievement” (p.
171). Additionally, a majority of students who participated in surveys
and interviews about their experiences with DSP and first-year composition indicated that,
in deciding which courses to enroll in, they relied more on the advice
of their advisors than on the recommendation received from DSP.
Nevertheless, of those students who enrolled
in the lower-level writing course as recommended by DSP, more than 71%
felt they had made the right choice because the course increased their
confidence in producing college-level writing successfully (Gere et al.,
2010, p. 169).
Both Wright State University, where Crusan works, and the
University of Michigan, where Gere works, eventually shifted toward an
online directed self-placement system (ODSP). Advantages of ODSP include
the global access students have to the system and the advance notice
students and administrators have of placement decisions. Peckham
(2009) notes that the savings associated with “not having to adjust
class sizes up and down during the first two weeks of classes” in
addition to “having the students settled in a class by the time they
arrived” on campus were significant (p. 522).
The University of Michigan (Gere et al., 2010) and Louisiana
State University (LSU; Peckham, 2009) both use variants of ODSP that ask
students to write essays in response to assigned reading(s). As in
traditional DSP, students select their writing class based on the course
information provided. At the University of Michigan, student essays are
submitted online and reviewed by their instructors prior to the
beginning of classes to identify strengths and weaknesses. At LSU,
students are initially placed by standardized test scores. Then they
receive course information and are given the option of challenging the
writing placement online by logging into a secure system, reading the
essay prompt and several assigned texts, and submitting their written
response within a set period of time. The more motivated students, those
with self-agency that is not captured in a standardized test score,
pursue this option. Roughly 10% of the students at LSU challenge their
initial placement, and of these, about half are moved up or down in the
course sequence after their writing sample is scored by writing
instructors in the program (Peckham, 2009).
This review of the professional literature to date on assessing
writing for placement into college composition indicates that there is
no one “best method” for placing students into first-year writing
courses, for each method or even combination of methods has its own
advantages and disadvantages for the university, the writing program,
and the students they serve.
Note
1. Unlike other assessment procedures, directed self-placement
does not ask students to produce writing or answer grammaticality and
usage questions. Rather, students are presented with detailed
information about available courses, guided in an evaluation of their
own background and abilities, and allowed to enroll in the course they
feel best meets their needs.
References
Conference on College
Composition and Communication. (2006). Writing assessment: A
position statement (rev. ed.). Urbana, IL: National Council of
Teachers of English. Retrieved from
http://www.ncte.org/cccc/resources/positions/writingassessment.
Conference on College Composition and Communication. (2009). Statement on second-language writing and writers
(rev. ed.). Urbana, IL: National Council of Teachers of English.
Retrieved from
http://www.ncte.org/cccc/resources/positions/secondlangwriting.
Crusan, D. (2002). An assessment of ESL writing placement
assessment. Assessing Writing, 8, 17–30.
Crusan, D. (2006). The politics of implementing online directed
self-placement for second language writers. In P. K. Matsuda, C.
Ortmeier-Hooper, & X. You (Eds.), The politics of
second language writing: In search of the promised land (pp.
205–217). West Lafayette, IN: Parlor Press.
Gere, A. R., Aull, L., Green, T., & Porter, A. (2010).
Assessing the validity of directed self-placement at a large university. Assessing Writing, 15, 154–176.
doi:10.1016/j.asw.2010.08.003
Huot, B. (1994). A survey of college and university writing
placement practices. Writing Program Administration,
17(3), 49–65.
Peckham, I. (2009). Online placement in first-year writing. College Composition and Communication, 60(3),
517–540.
White, E. M. (1995). An apologia for the timed impromptu essay
test. College Composition and Communication, 46(1),
30–45.
Zamel, V. (1995). Strangers in academia: The experiences of faculty and ESL students across the curriculum. Reprinted in V. Zamel & R. Spack (Eds.), Negotiating academic literacies: Teaching and learning across languages and cultures (pp. 249–264). Mahwah, NJ: Lawrence Erlbaum.
Adrian Wurr is assistant dean of academic English
programs for international students at the University of Tulsa, where he
teaches courses in applied linguistics and
composition.