October 2014
RENEWING ESL WRITING PROGRAMS WITH SUMMATIVE PORTFOLIOS
Lara Ravitch, University of Oregon, Eugene, Oregon, USA & Dr. Ana King, City Colleges of Chicago-Truman College, Chicago, Illinois, USA



The TESOL literature offers abundant examples of portfolios in ESL teaching, but few administrators have implemented portfolios as program-wide summative assessment mechanisms. In part, this may result from a widespread but mistaken view of portfolios as solely a formative tool. In fact, portfolios can serve many different purposes. Yancey (1992) explains that a portfolio is simply an assessment mechanism that includes the elements of collection (a variety of materials), selection (an element of choice), reflection (a self-assessment or other metacognitive piece), and communication (about the student and his or her context).

In using portfolios for summative assessment, of course, it is important to examine how they perform against four key criteria for assessment:

  • Validity: Is it testing what we want it to test?
  • Reliability: Is it consistent across groups, raters, etc.?
  • Practicality: Is it easy to administer and interpret?
  • Washback: What is its effect on teaching?


Portfolios generally have strong validity because they measure student writing directly. They also tend to produce positive washback because they promote the teaching of language in context.

However, portfolios can have poor reliability because it is difficult to norm raters on multiple measures. In particular, recent studies have shown that holistic rating can be unreliable because raters often weight criteria differently; these studies suggest that an analytic approach may be more reliable (Conrad, 2001). Unfortunately, analytic scoring is notoriously more time-consuming than holistic scoring. Because practicality is important for assessment, accommodations must be made to rate portfolios analytically without sacrificing efficiency.

In the end, the literature seems to support portfolios as summative assessment, showing that portfolios can better predict student performance than single-shot essays (Song & August, 2002; Renfrow, 2004). Portfolios also provide valuable information about the course and the program. When used with rubrics designed to address student learning outcomes, they can answer three important questions for program improvement: What do students know? How do we know they know it? What do we do with this knowledge? (Maki, 2004).

The examples below provide an overview of how two programs implemented program-wide summative portfolios.

Summative Portfolios at Truman College

At Truman College, the Communications Department houses the required sequence of composition classes, developmental courses for native speakers, and precomposition courses for ESL students.

Across the district, all City Colleges (of which Truman is one) must give an exit test, and until 2006, this was how Truman determined whether students could pass from the developmental and ESL sequences into composition. However, faculty at Truman were troubled by several characteristics of this instrument: 1) it was a high-stakes test, with just one chance to pass or fail, and failing the test meant failing the entire course; 2) there was no standard way for students to appeal if they failed; 3) the content of the exit test was often unfamiliar to students; and 4) students had no control over the assessment. Because of these concerns, Truman implemented the portfolio in December 2006.

The portfolio includes the required exit test (now on a familiar topic), but it also builds in an appeal process for both students and administrators, as other essays in the portfolio can demonstrate the student’s proficiency. The portfolio also provides evaluators with artifacts showing a student’s progress throughout the course. Moreover, the portfolio affords students some control, as learners select which essays will go into their portfolios. Perhaps most important, students engage their metacognitive skills through a reflective essay in which they explain how their portfolio demonstrates readiness for the next course.

The summative portfolio contains the following elements:

  1. The exit test, a single-draft, in-class essay in which students read an article on a familiar topic and then summarize and respond to it
  2. An in-class essay with an out-of-class revision
  3. A revised essay, with multiple drafts, completed entirely out of class
  4. A reflective essay, a single draft, completed out of class


The portfolio not only gives students the opportunity to be assessed more holistically, but also aids in program evaluation and has led to a culture of assessment within the department. With 60% of course sections taught by adjunct faculty and an average of 50 adjunct faculty per semester, portfolio assessment has helped the department maintain academic standards and enabled full-time faculty to better orient adjunct faculty to the program.

Because portfolio assessment involves the collection and selection of artifacts for inclusion, full-time faculty mentors train adjunct instructors to emphasize the recursive and reflective nature of writing. The faculty mentoring process involves meetings before the semester begins, where adjunct instructors learn to use peer feedback and self-reflection, and meetings at the midterm, where instructors engage in norming sessions with samples of their students’ essays in order to maximize inter-rater reliability during the end-of-semester portfolio reading. Toward the end of the semester, adjunct faculty submit proposed articles and prompts for their exit essays to a committee of full-time faculty, who review the articles for uniformity of length and complexity, as well as similarity in writing task. This program-wide framework for assessment flows naturally over the course of a semester.
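The article does not specify how the effect of norming is measured, but one way a department could check inter-rater reliability after a norming session is to compute simple agreement statistics on a shared set of essays. The Python sketch below is illustrative only: the pass/fail ratings are invented, and percent agreement and Cohen’s kappa stand in for whatever statistic a program actually adopts.

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of essays on which the two raters gave the same rating."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected chance agreement: product of each rater's marginal rates.
    p_e = sum(counts_a[c] * counts_b[c] for c in set(a) | set(b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail ratings from a midterm norming session.
rater1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

print(f"Agreement: {percent_agreement(rater1, rater2):.2f}")   # 0.75
print(f"Cohen's kappa: {cohens_kappa(rater1, rater2):.2f}")    # 0.47
```

Tracking such a statistic across semesters would let mentors see whether the midterm norming sessions are actually converging rater judgments before the end-of-semester portfolio reading.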

Since portfolio assessment was implemented, there has been a significant reduction in students challenging their course outcomes, and, more importantly, the data show a positive correlation between portfolio scores and course pass/fail rates.

Future plans for portfolio assessment include conducting an item analysis of the scoring rubrics in order to see how well student learning outcomes are being mastered in the aggregate.
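The mechanics of that item analysis are not described in the article. A minimal version, assuming each rubric item maps onto a student learning outcome, might simply average each item across all scored portfolios and flag the weakest outcomes; the rubric categories, scale, and scores below are hypothetical.

```python
# Hypothetical rubric scores (1-5 scale) for a semester's portfolios.
# Each row is one portfolio; keys are rubric items tied to learning outcomes.
portfolios = [
    {"thesis": 4, "organization": 3, "grammar": 2, "citation": 3},
    {"thesis": 5, "organization": 4, "grammar": 3, "citation": 2},
    {"thesis": 3, "organization": 4, "grammar": 2, "citation": 4},
]

def item_analysis(scores, max_score=5):
    """Mean score per rubric item, expressed as a fraction of the maximum."""
    items = scores[0].keys()
    return {
        item: sum(p[item] for p in scores) / (len(scores) * max_score)
        for item in items
    }

# Items with the lowest aggregate mastery point to outcomes that need
# attention at the program level, not the individual-student level.
for item, mastery in sorted(item_analysis(portfolios).items(),
                            key=lambda kv: kv[1]):
    print(f"{item}: {mastery:.0%}")
```

Run on real rubric data, a report like this would answer Maki’s (2004) three questions in aggregate form: what students know, how the program knows it, and where to direct curricular attention.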

Summative Portfolios at the University of Oregon, American English Institute

The American English Institute (AEI) houses several programs, one of which is an intensive English program (IEP; levels zero to six) that serves both students conditionally admitted to the University of Oregon and nonmatriculating students. Conditionally admitted students who pass level six can enter the university but must take the Accuplacer, a standardized placement test; depending on their scores, they are placed into one of three levels of the AEI’s courses for matriculated students, Academic English for International Students (AEIS).

Prior to fall 2013, students passed the highest level of the IEP based solely on the course grade given by the teacher. Because the program had expanded rapidly, there were many new faculty, and concerns arose over the comparability of assessment across sections. At the same time, student placement into AEIS was based solely on the Accuplacer, and IEP students often scored lower on the Accuplacer than their performance in IEP classes (or subsequent AEIS classes) would suggest. These concerns led to the implementation of a portfolio, which serves as both a summative assessment for the IEP and a placement assessment for AEIS.

Portfolio implementation took several steps. First, the literature on portfolio assessment was examined, and a proposal was submitted to the administration. Next, the portfolio was piloted, and the raters were surveyed. Feedback showed that some items in the portfolio were difficult to assess and that a consensus-based norming process was not appropriate, given the number of new faculty. The portfolio items were then changed, and faculty with substantial experience at each level identified benchmark essays for norming.

The portfolio for transitioning from the IEP to AEIS now includes the following components:

  1. The university-required Accuplacer placement test; a single draft of an in-class essay on an unfamiliar topic, pregraded by the Accuplacer algorithm
  2. An academic summary; a single draft on a familiar topic, written in class, and pregraded by two level six teachers
  3. A revised essay, written on a familiar topic, including both a first in-class draft with minimal teacher comments and a second in-class draft graded by the portfolio readers according to a rubric
  4. A reflective introduction; a single draft written out of class and used by portfolio readers only for borderline cases


Readers follow a carefully written guide sheet to weight the items and score the portfolio.

The first two items are pregraded in an attempt to improve the practicality of the portfolio assessment. Because the AEI as a whole had no large-scale assessment requiring raters prior to the portfolio, it was important to emphasize practicality in order to ensure teacher support.
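The guide sheet itself is not reproduced in the article. The sketch below illustrates the general idea with entirely hypothetical weights and an invented borderline rule: pregraded and reader-graded items are combined into one composite score, with the reflective introduction consulted only near the cut point, as the list above describes.

```python
# Hypothetical weights for the portfolio components; the AEI's actual
# guide sheet and weighting scheme are not published in the article.
WEIGHTS = {
    "accuplacer_essay": 0.30,   # item 1: pregraded by the Accuplacer
    "academic_summary": 0.25,   # item 2: pregraded by level six teachers
    "revised_essay":    0.45,   # item 3: scored by portfolio readers
}

def portfolio_score(scores, reflective_ok=None):
    """Weighted composite on a 0-100 scale.

    `reflective_ok` stands in for item 4, the reflective introduction,
    which readers consult only in borderline cases.
    """
    composite = sum(WEIGHTS[item] * scores[item] for item in WEIGHTS)
    # Hypothetical borderline band: let the reflective essay tip the score.
    if 68 <= composite < 72 and reflective_ok is not None:
        composite += 2 if reflective_ok else -2
    return composite

# Example: a borderline portfolio nudged upward by a strong reflection.
print(portfolio_score(
    {"accuplacer_essay": 65, "academic_summary": 72, "revised_essay": 70},
    reflective_ok=True,
))  # 71.0
```

Whatever the real weights, fixing them on a written guide sheet serves the practicality goal described above: readers apply arithmetic rather than renegotiating the relative importance of each item portfolio by portfolio.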

Recent feedback from portfolio raters has been positive. The 14 raters were drawn from administrators, teachers of the AEIS courses, level six teachers, and lead teachers of writing classes at the lower levels of the IEP. Eighty-six percent of raters indicated that the reading process was easy and the norming process adequate. In addition, several lower-level teachers requested implementation of a portfolio at their level.

Initial student success data have also been positive: students placed into AEIS classes via the portfolio who would have placed into a lower level using only the Accuplacer are passing at rates similar to those of students whose portfolio placement matches what the Accuplacer alone would have given them.

Future plans for assessment include continuing longitudinal examination of student success, as well as development of training materials for level six instructors.

These models demonstrate how different ESL programs can successfully implement portfolio assessment. Although it may seem daunting initially, thoughtful adaptation can enable almost any program to adopt portfolios.

References

Conrad, C. J. (2001). Second language writing portfolio assessment: The influences of the assessment criteria and the rating process on holistic scores. Minneapolis, MN: University of Minnesota, Center for Advanced Research on Language Acquisition.

Maki, P. (2004). Assessing for learning. Sterling, VA: Stylus.

Renfrow, M. (2004). Using portfolios to predict proficiency exam scores, timed essay scores, and the university grade point averages of second language learners (Unpublished doctoral dissertation). University of Kansas.

Song, B., & August, B. (2002). Using portfolios to assess the writing of ESL students: A powerful alternative? Journal of Second Language Writing, 11, 49–72.

Yancey, K. (1992). Portfolios in the writing classroom. Urbana, IL: National Council of Teachers of English.


Lara Ravitch received her MA in teaching foreign languages from the Monterey Institute of International Studies. She has taught EFL in Moscow, Russia, and has been both a teacher and an administrator in community college, K–12, and IEP programs in the United States. Lara is currently the IEP coordinator at the American English Institute at the University of Oregon, and her research interests include collaborative leadership, content-based instruction, and program assessment.


Ana King has more than 20 years of experience teaching postsecondary ESOL both in the United States and internationally. She holds a doctorate in community college leadership and an MATESOL from National-Louis University, in Chicago, Illinois. Ana is the assistant chair for placement and assessment as well as a faculty member in the Communications Department at Harry S. Truman College, City Colleges of Chicago, and her research areas include learning communities and dual credit programs for first-generation college students.