HEIS Newsletter - March 2012 (Plain Text Version)


In this issue:
LEADERSHIP UPDATE
•  LETTER FROM THE CHAIR
ARTICLES
•  MORE THAN PLEASE AND THANK YOU: TEACHING POLITENESS STRATEGIES
•  TRANSITIONING TO ONLINE PLACEMENT TESTING AT A LARGE UNIVERSITY
BOOK REVIEWS
•  BEYOND MUSIC AND LITERATURE: TOUCHING UPON ALL ASPECTS OF AMERICAN CULTURE
•  NATURALIZATION AT ALL LEVELS OF ENGLISH
COMPUTER TECHNOLOGY
•  PERSONAL LEARNING NETWORK/PERSONAL LEARNING ENVIRONMENT (PLN/PLE) FOR LANGUAGE TEACHING AND LEARNING
ABOUT THIS COMMUNITY
•  TESOL ESL IN HIGHER EDUCATION INTEREST SECTION
•  CALL FOR SUBMISSIONS
•  CALL FOR BOOK REVIEW SUBMISSIONS
•  CALL FOR COMPUTER TECHNOLOGY SUBMISSIONS


TRANSITIONING TO ONLINE PLACEMENT TESTING AT A LARGE UNIVERSITY

Tom Delaney

Alison Evans

The American English Institute (AEI) comprises two separate but related programs: an intensive English program (IEP) for students who have not yet attained the English proficiency necessary to be admitted to regular University of Oregon (UO) classes, and the Academic English for International Students (AEIS) program for students who are already matriculated but who still require English language support.

The IEP consists of seven levels ranging from the true beginner level to classes designed for students who are almost ready to matriculate as regular UO students. There are two core courses: Oral Skills (a listening/speaking course) and Reading/Writing/Grammar. Students may also enroll in a number of elective courses such as TOEFL Preparation, Pronunciation, or Business and Economics.

At the time of the piloting and adoption of the new testing system, the matriculated program, AEIS, consisted of a three-level academic writing sequence of courses (110, 111, and 112), a two-course academic speaking/listening sequence (101 and 102), and one academic reading course (107). Students are placed in the appropriate courses based on their placement test scores; in the case of sequential courses, they may place anywhere in the sequence, or they may test out of a course altogether.

RATIONALE FOR CHANGING PLACEMENT TESTS

The AEI has seen an unprecedented tenfold increase in IEP enrollment and a tripling of the matriculated program since the post-9/11 recovery began. As enrollment grew, we began to experience difficulty ensuring consistency across a large number of raters and finding adequate time and space to test so many students with the traditional paper- and interview-based placement tests we had used for years. In addition, the number of teacher-hours spent on rater training, test administration, and scoring became a serious concern. Eventually, we came to the painful conclusion that the tests we had been using no longer met the important assessment criteria of reliability, validity, and practicality (Fulcher, 2010; Hughes, 2003). Thus began our search for placement tests that would be more practical and more reliable, and therefore more valid, for our programs.

THE PROCESS OF ADOPTING NEW TESTS

After looking at a number of products on the market and holding faculty meetings at which we examined various sample tests, we decided to proceed to an actual pilot with the Accuplacer battery of tests, which is produced by the College Board. Accuplacer tests are widely used, adaptive to different proficiency levels, and boast high reliability statistics; the fact that the tests are computer-scored appealed to us as a real time-saver. Furthermore, the College Board provided us with free sample test units to conduct our pilot test.

PILOTING AND ANALYSIS

Although similar piloting procedures would be used for both our IEP and AEIS programs, we felt it made sense to begin our transition with AEIS, because test results for students in that program would be easier to interpret. Fewer levels needed to be distinguished in AEIS, so setting cut scores would be easier than in the seven-level IEP.

AEIS

After making that determination, the first step was to pilot the tests with currently enrolled students, both to establish that the tests provided usable results for our program and to set ballpark placement cut scores. The pilot was conducted during the spring term, before the online tests went into actual use with the large fall enrollment we were projecting.

We elected to use two ESL tests—Listening Comprehension and Reading Comprehension—and WritePlacer, the test designed to assess the writing ability of native speakers of English. Our rationale was that because the students had already entered the university, we wanted to measure their ability in relation to a native-speaking population rather than an ESL population. For obvious reasons, there is no listening test designed for native speakers. Since the time of our piloting, we have added another reading course (108). Thus, students can potentially place into six required classes: the entire writing sequence (110, 111, and 112), the speaking/listening sequence (101 and 102), and one reading course (107 or 108; 108 is optional for those completing 107 and offers a higher-level alternative for students with higher test scores). At the other extreme are students who place into only one class or do not have to take any classes at all.

Because the Accuplacer tests are Internet-based, they do not require any software to be installed on school computers, but working with the appropriate testing and IT staff on campus was an important part of piloting and implementing the new placement tests. Computer lab time needs to be reserved, and having staff on hand to assist with the initial login and setup is crucial. The process is relatively simple with Accuplacer, but some things can go wrong (e.g., if browser pop-up blockers are not all turned off, the test, which relies on pop-ups, will not function properly), so having support available is highly desirable.

Administering the tests required about two hours from setup to shutdown; the next step was to analyze the results. Accuplacer results can be compiled into a spreadsheet, which simplified our analyses. It was possible to set ballpark cut scores by sorting the students in the spreadsheet by their current course levels (110, 101, etc.) and then identifying the score range of the students placed at a given level, minus any outliers. This allowed us to generate a preliminary placement rubric.
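For readers who would rather script this step than sort by hand, here is a minimal sketch of the kind of analysis involved, written in Python with the pandas library. The file name and column names (current_level, reading, listening) are hypothetical stand-ins; an actual Accuplacer export will be organized differently.

    # Sketch: derive ballpark cut scores from a pilot-test export.
    # File name and column names are hypothetical stand-ins.
    import pandas as pd

    df = pd.read_csv("accuplacer_pilot.csv")

    for subtest in ["reading", "listening"]:
        print(f"\n{subtest} score ranges by current course level:")
        for level, scores in df.groupby("current_level")[subtest]:
            # Trim the top and bottom 5% as a crude guard against outliers.
            trimmed = scores[(scores >= scores.quantile(0.05)) &
                             (scores <= scores.quantile(0.95))]
            print(f"  {level}: {trimmed.min():.0f}-{trimmed.max():.0f}")

The trimmed minimum and maximum for each level give the same ballpark ranges we read off the sorted spreadsheet.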

One significant point revealed by the pilot test was that placement into the appropriate writing course required using the reading and writing scores in conjunction. However, this is an area where the test remains less than perfect for us: a student occasionally has wildly disparate scores on the reading and writing tests (e.g., an extremely low reading score but a very high writing score). In such cases, we have found it necessary to have a human rater hand-score the student’s test. Fortunately, it is easy to access the actual student essays for the WritePlacer, and reading and rating them is not time-consuming. Each time we run the placement test, we make sure to have a few experienced instructors on call for this purpose. The number of hand scores varies but is seldom more than 10 percent of the total number of tests.
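Flagging these cases can itself be automated. The sketch below, again with hypothetical column names, standardizes each subtest so that scores reported on different scales are comparable, then flags any student whose reading and writing z-scores diverge by more than a threshold; the threshold of 2 is illustrative rather than our actual rule.

    # Sketch: flag students whose reading and writing scores diverge
    # enough to warrant hand-scoring. Column names are hypothetical.
    import pandas as pd

    df = pd.read_csv("accuplacer_pilot.csv")

    # Standardize each subtest so different score scales are comparable.
    reading_z = (df["reading"] - df["reading"].mean()) / df["reading"].std()
    writing_z = (df["writing"] - df["writing"].mean()) / df["writing"].std()

    # Flag gaps wider than an illustrative 2 standard deviations.
    df["hand_score"] = (reading_z - writing_z).abs() > 2
    print(df.loc[df["hand_score"], ["student_id", "reading", "writing"]])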

IEP

The general procedure of piloting the tests with current students in AEIS was repeated several terms later in the IEP. However, there were two significant differences.

First, Accuplacer offers five ESL tests: Sentence Meaning, Language Use, Reading, Listening, and ESL WritePlacer. Piloting all five revealed that administering the full battery made for a very long testing session. We therefore decided to limit the battery to three subtests, which meant determining which combination would give us the best results. To make this determination, we ran a regression analysis in SPSS with the pilot students’ current placement as the dependent variable and the five subtests as the independent variables. The results indicated that WritePlacer, Listening, and Sentence Meaning were the best predictors of placement, so we settled on those three subtests.
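We ran the analysis in SPSS, but the same model is easy to reproduce elsewhere. Here is a minimal sketch of an equivalent ordinary least squares regression in Python with the statsmodels library, assuming a hypothetical data file in which current placement is coded numerically (levels 1-7) and each subtest score is a separate column.

    # Sketch: regress current placement level on the five ESL subtests
    # to see which subtests best predict placement. An OLS stand-in
    # for the SPSS analysis; file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("iep_pilot.csv")
    subtests = ["sentence_meaning", "language_use", "reading",
                "listening", "writeplacer"]

    X = sm.add_constant(df[subtests])   # predictors plus an intercept
    y = df["current_level"]             # placement coded 1-7

    model = sm.OLS(y, X).fit()
    print(model.summary())              # coefficients and p-values

The summary table shows which subtests carry the most predictive weight, which is the information we used to choose our three.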

However, it is important to note that this does not mean these are the three best subtests for every institution. We know from conversations with other Accuplacer users that other institutions have found different tests more useful. For example, because our programs enroll many East Asian students who have studied grammar extensively, the Language Use test, a fairly traditional grammar test, did not provide a very accurate assessment of their actual productive language proficiency. In contrast, the Sentence Meaning test seems to measure understanding of colloquial language and lexical collocations, and therefore provides us with a much better assessment. On the other hand, according to colleagues in community colleges, programs that have large numbers of generation 1.5 students or immigrants who have lived in the United States for a relatively long time find the Language Use test very useful and the Sentence Meaning test less useful, presumably because their students are familiar with colloquial English. In short, it is important to assess each institution’s needs carefully.

The other difference between the IEP and AEIS testing was that setting cut scores in the IEP was considerably more complicated because of the large number of level distinctions to be made. To tackle this issue, we used the contrasting-group method described by Fulcher (2010). In this method, the score distribution curves of students placed in the different levels are overlaid, and the point at which the curve for one level, say level 1, crosses the curve for the adjacent level, say level 2, gives an approximate cut score. These approximate cut scores were then used to re-place the pilot students, the resulting placements were compared with the students’ actual placements, and the cut scores were shifted up or down as seemed appropriate to get the desired results.
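For those who would rather compute the crossover points than read them off overlaid graphs, here is a minimal sketch in Python using kernel density estimates from SciPy. The scores in the example are made up purely for illustration; Fulcher (2010) describes the method itself.

    # Sketch: approximate a cut score as the point where the score
    # distributions of two adjacent levels cross (contrasting groups).
    import numpy as np
    from scipy.stats import gaussian_kde

    def approximate_cut_score(lower_scores, upper_scores):
        """Return the score where the two levels' density curves cross."""
        kde_lower = gaussian_kde(lower_scores)
        kde_upper = gaussian_kde(upper_scores)
        # Search between the two group means for the crossover point.
        grid = np.linspace(np.mean(lower_scores), np.mean(upper_scores), 500)
        diff = kde_lower(grid) - kde_upper(grid)
        return grid[np.argmin(np.abs(diff))]

    # Made-up scores for two adjacent levels, for illustration only:
    level1 = np.random.normal(60, 8, 40)
    level2 = np.random.normal(75, 8, 40)
    print(f"approximate cut score: {approximate_cut_score(level1, level2):.1f}")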

Finally, it should be noted that after all our labors in setting cut scores, we belatedly learned that the College Board offers technical assistance with this task. Should you decide to use Accuplacer or a similar test, we strongly encourage you to research the services offered by the test producer.

BENEFITS AND CHALLENGES OF ADOPTING THE ONLINE TESTS

Benefits

Overall, the adoption of the Accuplacer tests has proven to be a cost-effective and reasonably accurate way to place large numbers of students. The benefits we have perceived as a result of this change include the following:

  • More systematic, accurate, and consistent placements
  • Better use of faculty time (far fewer teacher-hours spent training, administering, and scoring placement tests)
  • Better face validity with students (the tests look more professional than photocopied paper tests)

AEIS, the matriculated program, has experienced several additional benefits. After much work with UO’s Testing and Registrar’s offices, it will soon be possible to upload students’ placement test results directly into their individual degree audits. This will allow academic advisors to track students’ compliance with their requirements more easily, and will automatically ensure that students do not register for classes that they are not ready for. In the past, this information had to be manually entered into the system.

Challenges

Although adopting these new tests is a decision that we do not regret, it is important to recognize that the tests do not do everything. Administrators still need to be prepared to undertake a number of tasks:

  • Scheduling labs and liaising with relevant offices on campus (the Testing Office, International Affairs, and the registrar in our case)
  • Creating instruction documents and training faculty in how to set up computers, administer the test, and troubleshoot problems during administration of the test, such as
    • Assisting low-level students with logging in and so on.
    • Students clicking “submit” before they are finished, and other unpredictable things.
  • Investigating widely disparate scores; human oversight and judgment are still sometimes necessary
  • Implementing a system whereby placements are confirmed by teachers
    • Many students take the placement test shortly after arriving in the United States (that is, they are still jetlagged and disoriented), so in our programs teachers administer pretests in the first week of classes to make sure placements are appropriate and may recommend students be moved to another level.
  • Monitoring cut scores and re-evaluating student performance from time to time
    • We have found that when populations shift (e.g., with the recent influx of Chinese students), cut scores may need to be adjusted.

Finally, Accuplacer’s writing tests presently offer only a limited number of prompts, which means that test security can become an issue. However, the College Board has gradually been making new prompts available, so this issue will presumably be resolved in the near future.

CONCLUSION

Although implementing an online testing system has not been effortless or trouble-free, it has addressed the problems that prompted the transition and has been successful overall. Our students are placed quite accurately, and teachers have far more time to spend on curriculum and class preparation. We hope that this account of our experience provides other institutions with some useful guidelines should they face the same challenges we have faced as a result of a rapid increase in student population.

REFERENCES

Fulcher, G. (2010). Practical language testing. London: Hodder Education.

Hughes, A. (2003). Testing for language teachers. Cambridge, England: Cambridge University Press.


Tom Delaney is an instructor in the American English Institute and the Linguistics Department at the University of Oregon. Alison Evans is the associate director of the American English Institute at the University of Oregon. The authors invite correspondence regarding this article.