What can a language program do when the screening test becomes
obsolete? Find another test, or make a new one.
This is the dilemma facing many ITA programs around the United
States with respect to the SPEAK test. Many feel the SPEAK test is
obsolete and needs to be replaced. Educational Testing Service has
marketed the TOEFL iBT as a new ITA screening test, the SPEAK
replacement that programs are seeking. However, the lack of correlation
between TOEFL iBT and SPEAK scores has caused concern (and perhaps
alarm) at many programs, and an alternative is still needed. As a
result, many respected programs continue to use the SPEAK, while others
have created their own tests.
The University of Minnesota has developed a new performance
test, the Spoken English Test for Teaching Assistants (SETTA). The test
is in a beta phase, pending analyses of predictive validity and
criterion-related validity and consideration of the appropriateness of
its tasks (a form of construct validity). This article gives a brief
overview of the SETTA.
As mentioned earlier, the SETTA is used to screen the English
ability of international TAs and to determine exemption from, or
placement into, English/teaching courses. The Web site for the Center
for Teaching and Learning at the University of Minnesota describes the
purpose of the SETTA as follows:
The Spoken English Test for Teaching Assistants (SETTA) ...
measures spoken English pronunciation, fluency, grammar, and vocabulary,
as well as listening comprehension. A passing score indicates highly
accurate and comprehensible spoken English. Because a wide variety of
Englishes are spoken around the world, you are not expected to speak
with a North American accent or to produce completely error-free English
in order to pass the SETTA.
Essentially, the SETTA is a 15-minute presentation in front of
two trained raters and an undergraduate student. Twenty minutes before
the presentation, the test taker is given a task sheet with randomly
generated material from an introductory textbook in a discipline that
he or she selects when registering. After this 20-minute preparation
period, the test taker has about 15 minutes to present two tasks. Each
task is allotted 5 minutes and is followed by two questions (one minute
each), so the two tasks together take about 14 minutes.
THE ROLE OF CHOICES
What is unique about this test is the inclusion of choices.
When the testing committee at the University of Minnesota was
considering the format of this test, there was concern about
differences in the difficulty of topics, both within and between
disciplines (cf. Papajohn, 1999). Although there can, of course, still
be some question about the generalizability of specific task
performance to other real-life situations (cf. Bachman, 2002), the
committee hoped that giving the test taker some control over the
material to be presented would mitigate the unfairness of these
differences in difficulty.
SETTA incorporates choice in three ways. First, when
registering for the test, the test taker chooses the academic field
from which the tasks will be drawn. Currently there are 19 academic
fields to choose from, focusing on introductory courses (e.g., physics
and sociology) common to most disciplines.
Then, the test taker chooses two of the three teaching tasks
on the sheet (described below). Finally, within each task, there is a
choice of two or three items that fulfill the task's function. A sample
task sheet can be found on the University of Minnesota's Center for
Teaching and Learning Web site.
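
To make these three levels of choice concrete, here is a minimal
sketch, in Python, of how a task sheet with this structure might be
represented. The class names and the example content are hypothetical
illustrations, not actual SETTA materials.

    from dataclasses import dataclass

    @dataclass
    class Task:
        # One teaching task; the test taker picks one of its items.
        function: str        # e.g., "contrast two terms"
        items: list[str]     # two or three items fulfilling the function

    @dataclass
    class TaskSheet:
        # The sheet for one administration of the test.
        field: str           # chosen by the test taker at registration
        tasks: list[Task]    # three tasks; the test taker presents two

    # An invented sheet for a test taker who registered for physics:
    sheet = TaskSheet(
        field="physics",
        tasks=[
            Task("contrast two terms",
                 ["speed vs. velocity", "mass vs. weight"]),
            Task("describe a visual",
                 ["a position-time graph", "a free-body diagram"]),
            Task("solve a problem",
                 ["a projectile-range problem", "a pulley problem"]),
        ],
    )

The test taker would then present two of the three tasks on the sheet,
choosing one item within each.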
AUTHENTIC DISCIPLINARY MATERIALS
Another unique feature of the SETTA is that the task content
has been generated from introductory textbooks that are actually used
at the University of Minnesota. For each test, the tasks are randomly
generated from a database of questions created from those textbooks by
experienced graduate teaching assistants (TAs) in each field.
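
As a rough illustration of this process, the sketch below draws a fresh
set of items for each task type from a field-specific question bank.
The bank contents and function names here are invented; the actual
SETTA database was compiled by experienced TAs from the textbooks
themselves.

    import random

    # Hypothetical stand-in for the question database built from
    # introductory textbooks (entries invented for illustration).
    QUESTION_BANK = {
        "sociology": {
            "contrast two terms": ["norms vs. values", "status vs. role",
                                   "ascribed vs. achieved status"],
            "describe a visual": ["a population pyramid",
                                  "a crime-rate table",
                                  "a social-mobility chart"],
            "solve a problem": ["apply two perspectives to a news story",
                                "interpret a survey result",
                                "critique a sampling method"],
        },
    }

    def generate_task_sheet(field, items_per_task=2):
        # Randomly sample the requested number of items per task type.
        bank = QUESTION_BANK[field]
        return {task: random.sample(pool, items_per_task)
                for task, pool in bank.items()}

    print(generate_task_sheet("sociology"))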
AUTHENTIC TEACHING TASKS
The test taker chooses two tasks. The three task choices
reflect duties that TAs typically perform: contrasting two terms,
describing a visual (e.g., a graph or table), and solving a problem (or
discussing a discussion question). In addition, to assess listening and
the ability to answer questions, the test taker is asked two questions
after each task. Although unpredictable input is an integral part of
communicative tests, in order to limit the variation in questions, the
undergraduate student who asks them is given a list of question stems
for each task (e.g., "What do you mean by . . . ?"). These stems were
taken from the pilot version of the SETTA and from other microteaching
tests at the University of Minnesota.
RATING
Each test is rated in person by two experienced instructors in
the ITA program at the University of Minnesota. To allow the raters to
focus solely on the examinee's language performance, all test
instructions, timing, and camera operation are handled by a trained
undergraduate student, who also asks the two questions after each task.
The rubric that the raters use does not assess the quality of
teaching or presentation. It assesses only segmental and suprasegmental
features of pronunciation, fluency, grammar, vocabulary, and listening
comprehension. This language rubric is the same as the language portion
of the rubric used in the ITA program for the end-of-semester mock
teaching tests. The test-creation committee considered revising the
rubric specifically for the SETTA but decided that the goal is the same
for both tests: sufficient English for performing TA duties.
The SETTA will likely be revised after its components and
performance are analyzed in the coming year. The testing committee at
the University of Minnesota welcomes any questions and suggestions. For
more information, please visit our Web
site.
REFERENCES
Bachman, L. F. (2002). Some reflections on task-based language
performance assessment. Language Testing, 19, 453-476.
Papajohn, D. (1999). The effect of topic variation in
performance testing: The case of the chemistry TEACH test for
international teaching assistants. Language Testing, 16, 52-81.
Barbara Beers has worked in the International TA
Program at the University of Minnesota for over 10 years. She is also
currently the coordinator of testing for the
program.