Last semester, students in my upper-intermediate English for
academic purposes (EAP) courses were required to annotate a text as part
of a critical thinking module, which also included a summary and
critical response to the text that they annotated. This three-part
assignment constituted one of the major assessments for the course and
was mandated across all sections of the level (about 10 in total). While
I had taught and graded the genres of summary and response in the past,
it dawned on me that annotation wasn’t something I had explicitly
taught before—at least not to any great extent—and it definitely wasn’t
something I had ever tried to formally assess. In fact, it seemed
questionable to me whether one could assess such an idiosyncratic
document (or process?) in any kind of objective way. However, as a
newcomer to this particular language program, I decided I should just go
with the flow and see how the assignment panned out.
So in spite of these initial qualms, the assignment was carried
out as planned: Students chose and carefully annotated a text (one of
five academic essays of approximately 1,500 words each) and then scanned
and submitted their annotations via our course Moodle page. After a
calibration meeting in which the other teachers and I refined the
assessment criteria and identified benchmarks for different score ranges,
the annotations were assessed by individual teachers and scored using a
rubric developed for the task.
The result? The annotations proved to be rather difficult to assess for the following reasons:
Lack of Consistency Among Texts
While three of the texts followed the prototypical structure of
a problem-solution essay, the other two did not. This inconsistency
was problematic: the problem-solution papers lent themselves to a
predictable set of annotation techniques (e.g., highlighting the thesis
statement, labeling the text structure in the margins), whereas the
other two texts elicited annotations that were highly varied and,
incidentally, often seemed to miss the point of the texts. The result
was a high degree of variation among students’ annotations, which made
it difficult to score different students’ work fairly.
Unclear Purpose for Annotating
Readers usually have a specific purpose for annotating. For
example, they may want to find support for their arguments when writing
an essay, identify information from a reading that they will be tested
on later, or simply engage more deeply with the text in order to improve
their reading comprehension. For this assignment, however, we were not
as clear as we should have been about the purpose of the annotations,
which led to a lack of common focus in students’ work (similar, in a
way, to the effect of including texts with different rhetorical
structures).
Ambiguity of Annotations
As is quite common in actual practice, students did not label
every part of their annotations. This made it difficult to interpret
the meaning or purpose behind their markings (i.e., the reasons why
parts of the text had been underlined, highlighted, circled, etc.).
The Solution: Using Screencasting to Fill in the Gaps
Reflecting on the causes of the problems listed above, I
realized that the first two issues (inconsistency of texts and unclear
purpose for annotating) could be addressed fairly easily by
predetermining the types of texts students would use and giving them a
clear purpose for annotating at the outset. The last issue (ambiguity
of annotations) remained problematic, however, because without knowing
the unstated logic or reasoning behind students’ annotations, it would
be difficult, if not impossible, to fully understand why they had
annotated in a particular way.
And then a thought occurred to me: What if we could access those cognitive processes? How could
this be accomplished? Eventually, I came up with the following solution:
Have students create screencasts to fill in the gaps in their
annotations; in other words, ask students to include a narrative
component explaining any aspects of their annotations that would
otherwise be ambiguous.
This idea was inspired in part by work a colleague of mine
had done using screencasts to give feedback on written work, and it is
a sort of spin on the think-aloud protocol (or self-report). This elicitation
technique is used in applied linguistics research to gain access to the
cognitive processes that second language learners engage in when
performing language tasks (Cohen, 2012) and is viewed as a vital means
of assessing the validity of assessment instruments (Green, 1998, cited
in Cohen, 2012). Thus, screencasting seemed particularly apt for
assessing annotation skills: it grants access to the visual component
of the annotation while also resolving the issue of ambiguity through
an accompanying narrative.
Assignment Details
During the following semester, I decided to test this technique
with two of my classes (once again, upper-intermediate EAP classes).
This time, however, the purpose was more specific and the task itself
more authentic. Because students were in the process of gathering
sources for a problem-solution essay, I asked them to annotate one
of the sources they planned to use in their papers. I then had
students create a screencast using the free web application Screencast-o-matic
to explain their annotations and describe how the source would be
incorporated into their essays.
Before giving out this assignment, we reviewed the purposes of
annotating and the various techniques that can be used. I also created
a screencast of my own explaining the assignment in detail, provided
two sample annotation screencasts, and put together a step-by-step
guide to using Screencast-o-matic.
Below are the directions for the assignment that I gave students:
I would like to know more about the annotation techniques you
use and how annotation helps you obtain important information from texts
you read. For this assignment, please do the following:
- Carefully annotate one of the sources you plan to use for your problem-solution essay.
- Scan your annotation and save it as a color PDF file.
- Open your scanned annotation (the PDF file) on a computer.
- Open Screencast-o-matic.
- Record a 3- to 5-minute screencast to explain the following information:
  a. The source you’ve annotated and its relation to your essay topic (be brief)
  b. The annotation techniques you used (e.g., brackets, squiggly lines, stars)
  c. The aspects of your annotation that might be difficult for someone else to understand (i.e., things that are not clearly labeled, such as highlighted sections)
- Save your screencast as an MP4 and upload your file to Moodle under the “Annotation Screencast” assignment submission by (date), at midnight.
Assessment Procedures
To assess students’ annotation screencasts, I developed a rubric,
which was modeled on the rubric used in the previous semester for the
first annotation assignment. As I had hoped, the verbal report
information was very useful in clarifying the ambiguities in students’
annotations. I also found that the rubric was easy to use and yielded
a fairly normal spread of scores (in the statistical sense).
I should mention that while the rubric does not explicitly state
“explains relevance of the text to essay,” almost all students did so,
as it was one of the main directions given for the assignment. I ended
up considering this criterion as part of the “evidence of critical
thinking” band, in addition to the other criteria listed.
Guidelines and Other Considerations
This assignment, or adaptations thereof, can be quite useful for
assessing students’ ability to annotate effectively and think
critically. However, there are a few things to consider before asking
students to complete such an assignment.
First, you should consider whether the task will be a
high-stakes or low-stakes assessment, as this may have implications for
scoring procedures. Specifically, if you were to adopt this as a major
form of assessment, it would be advisable to use two raters (to increase
reliability) and to conduct norming sessions and establish benchmarks
before scoring. For lower-stakes assessments, such procedures might not
be necessary (and may ultimately be impractical).
Another consideration is how you define the purpose of the
task. As mentioned earlier, there are many specific purposes for
annotating a text; depending on your course objectives, students’
proficiency level, and their English needs, the assignment should be
adapted so that it’s most relevant to your course and meaningful to
students.
Finally, it should be mentioned that assessing the screencasts
can be somewhat time-consuming, so you should be prepared for this and
think carefully about the time you have available before giving the
assignment. Nevertheless, I believe it is time well spent and can yield
a wealth of insight into students’ reading and critical thinking skills.
Reference
Cohen, A. (2012). Verbal report. In C. Chapelle (Ed.), Encyclopedia of applied linguistics. Oxford, England: John Wiley & Sons.
Kerry Pusey has an MA in TESL from Northern Arizona University and has taught in the
United States, Brazil, and Macau. He has also done work in small- and
large-scale language assessment. When Kerry is not in the classroom, you
can usually find him traveling by plane, boat, tuk tuk, or bicycle to
various locations around the globe.