ALC Newsletter - 12/14/2015 (Plain Text Version)


 

In this issue:
LEADERSHIP UPDATES
•  LETTER FROM THE CHAIR
•  LETTER FROM THE PRESIDENT
SUMMARIES FROM 2015 TESOL CONVENTION
•  2015 AFFILIATE EDITORS' WORKSHOP
•  REFLECTIONS: TESOL SESSIONS STRENGTHEN AFFILIATES
ARTICLES
•  REPETITION AND MASTERY
•  DEVELOPING CULTURAL AWARENESS AND ACCEPTANCE IN THE CULTURALLY-DIVERSE CLASSROOM
•  NEWS FROM TEXTESOL V
•  WINDS OF CHANGE IN ELT
•  CELEBRATING OUR HISTORY...INFORMING OUR FUTURE!
•  INTERNATIONAL STUDENT ENGAGEMENT INITIATIVE
•  DEVELOPING A CAN-DO CULTURE FOR ENGLISH LANGUAGE LEARNERS
•  SMALL SCHOOL FOR BIG PROJECTS: THE MONTH OF ENGLISH CULTURE IN NIKOLA TESLA PRIMARY SCHOOL
•  IF YOU BUILD IT, THEY WILL COME: BRINGING THE CONFERENCE EXPERIENCE IN-HOUSE
•  TESOL ARABIA FEATURED IN THE PROFESSIONAL PROGRAM AT THE ABU DHABI INTERNATIONAL BOOK FAIR
•  THE FIRST ANNUAL MEXTESOL SPELLING BEE 2015
•  MATSOL PRESIDENT'S FALL LETTER
•  TRANSCENDING BOUNDARIES: SOCIAL MEDIA FOR WAESOL AND TRITESOL
•  LESSON IDEA: THE MAGIC OF THINGLINK
•  PERMACULTURE AS PEDAGOGY: SUSTAINABLE STUDENT ENGAGEMENT IN THE ESL/SIFE CLASSROOM

 

ARTICLES

REPETITION AND MASTERY


One of the fundamental goals of education is offering learners methods to master content, that is, to demonstrate clear, knowledgeable performance with content so that their understanding of an issue, task, or concept can be put into practice (Klein, Fan, & Preacher, 2006). Because no clear delineation of what constitutes "mastery" exists, researchers largely leave the determination of mastery to the evaluator but suggest several methods to assess it, among them observations, conferences, interactive journals, and tests. Some researchers consider certain methods better suited to certain content: journals assess writing better than a conference would, tests can effectively measure achievement, and conferences and observations can measure a participant's view formation, drives, and concerns (Genesee, 2004). However, mastery must be formed; it cannot bloom fully grown, but is honed and sharpened by the learner's engagement with the material.

In a recent study, researchers analyzed this formation process, using tests to assess learners' content knowledge and how that knowledge may develop into mastery of specific content. Their pilot study examined the effects of single- and multiple-attempt online content assessments and application/discussion-style assessments, correlating data from these assessments to analyze the effects of multiple-attempt versus single-attempt online assessments on content learning and content mastery.

Researchers gathered data from 16 participants from eight nations, ranging in age from 19 to 33 (mean 25.44, median 26, mode 29 years). Fourteen of the sixteen were graduate students, eight of whom were concurrently enrolled in graduate courses; the other two were undergraduates. Two of the sixteen were female. For the most part, these students were highly motivated, as this course and these assessments would determine their eligibility to begin or continue pursuing advanced degrees.

Methodology

Researchers gathered data from 18 students enrolled in a university language class using an online learning management system (LMS), Schoology. Researchers removed two students who did not participate in the research. Students completed a total of 21 multiple-choice, multiple-attempt assessments, each linked to a mastery assessment, in three sets:

1. Eight online multiple-choice reading assessments allowing multiple attempts, with mastery assessed via linked discussion assessments offering one attempt each.

2. Ten multiple-attempt online multiple-choice writing (structure or grammar) assessments; seven evaluated student mastery via compositions and three via linked discussion assessments offering one attempt each.

3. Three online multiple-choice assessments with mastery assessed via online discussion assessments allowing multiple attempts.

To measure which assessment method produces greater content mastery, researchers ran two correlations: (1) scores from single attempts at a multiple-choice assessment against scores from a discussion assessment, and (2) scores from multiple attempts at a multiple-choice assessment against scores from discussion assessments. To measure the effect of multiple attempts on multiple-choice scores, researchers correlated the number of attempts on multiple-attempt multiple-choice "reading" assessments with scores on those assessments, and the number of attempts on multiple-attempt multiple-choice "writing" assessments with scores on those assessments. Researchers evaluated all correlations for significance by determining the degrees of freedom and comparing each Pearson r against the corresponding two-tailed critical value. Researchers reported significance at the .05 level and, where found, at the .01 level as well.
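
To make the procedure concrete, the short Python sketch below runs one such correlation with SciPy. The paired score lists are hypothetical placeholders, not the study's data.

    # A minimal sketch of one correlation from the methodology above.
    # The score lists are hypothetical placeholders, not the study's data.
    from scipy.stats import pearsonr

    # Paired observations: each student's multiple-attempt multiple-choice
    # score and the linked discussion (mastery) assessment score.
    mc_scores = [72, 85, 90, 65, 78, 88, 95, 70]
    discussion_scores = [70, 80, 92, 60, 75, 85, 97, 68]

    r, p = pearsonr(mc_scores, discussion_scores)  # returns r and two-tailed p
    df = len(mc_scores) - 2  # degrees of freedom for Pearson r

    print(f"r = {r:.3f}, df = {df}, two-tailed p = {p:.4f}")
    print("significant at .05:", p < .05, "| significant at .01:", p < .01)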

Table 1. Results

                                           Pearson r    df    Critical r needed    Significant?
Correlation of:                                                  .05      .01        .05    .01

1. Multiple attempts on multiple-choice assessments and scores

Multiple-choice "reading assessment"
times taken & scores                        -0.073       99    0.205    0.267        no     no

Multiple-choice "writing assessment"
times taken & scores                        -0.090      126    0.195    0.254        no     no

2. Assessment method and content mastery

Single-attempt multiple-choice scores
& discussion assessment scores              -0.032       56    0.273    0.354        no     no

Multiple-attempt multiple-choice scores
& discussion assessment scores               0.457       94    0.205    0.267        yes    yes

Source: Monceaux, A. (2013). The effect of multiple content assessments on student achievement and stress level in the hybrid classroom. Proceedings from EDULEARN13 (pp. 6062–6070). Barcelona, Spain.
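
For readers who wish to verify the "critical r needed" column, the sketch below (not part of the study) derives two-tailed critical values of Pearson's r from the t distribution, using the standard identity r_crit = t_crit / sqrt(t_crit^2 + df). The table's cutoffs match the rows for df = 50, 90, and 100, consistent with looking up the nearest lower row in a printed table; that is an inference, as the article does not say how the cutoffs were obtained.

    # A sketch (not from the study) deriving two-tailed critical values
    # of Pearson's r from the t distribution: r_crit = t / sqrt(t^2 + df).
    from scipy.stats import t

    def critical_r(df, alpha):
        # Critical t for a two-tailed test at the given alpha, converted to r.
        t_crit = t.ppf(1 - alpha / 2, df)
        return t_crit / (t_crit ** 2 + df) ** 0.5

    # Printed tables list selected df rows; these three reproduce Table 1:
    for df in (50, 90, 100):
        print(f"df={df}: .05 -> {critical_r(df, .05):.3f}, "
              f".01 -> {critical_r(df, .01):.3f}")
    # df=50:  .05 -> 0.273, .01 -> 0.354
    # df=90:  .05 -> 0.205, .01 -> 0.267
    # df=100: .05 -> 0.195, .01 -> 0.254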

Discussion

Number of Attempts on a Multiple-Choice Assessment and Scores

This study looked at how many attempts a student actually made on a multiple-attempt assessment and the student's assessment average in two content domains, reading and writing. In both cases, no significant correlation emerged. Instead, one must look at individual students' attempts and scores. A high-level student may take an assessment, earn a "high enough" score, and choose not to retake it. A low-level student, however, may take the assessment three times (the maximum offered in these studies), gain points each time, yet never earn a perfect score. And another student may choose not to retake an assessment regardless of the score.

Assessment Method and Content Mastery

This study treats greater content mastery as demonstrated in a student's creative performance on assessments: the student applies the content he or she has received in analytical and evaluative forms. The study tested whether single or multiple attempts at an online multiple-choice assessment designed to foster content knowledge (remembrance and understanding) would correlate with higher discussion scores.

When researchers correlated single-attempt online multiple-choice assessment scores with online discussion assessment scores using the Pearson product-moment correlation coefficient, no significant relationship emerged. However, when they correlated multiple-attempt online multiple-choice assessment scores with online discussion assessment scores, a positive relationship significant at both the .05 and .01 levels emerged. This seems to signify that multiple attempts at an online multiple-choice assessment may produce higher discussion assessment scores.

Looking more closely at individual student performance helps explain this phenomenon. Students who initially scored high did not retake the assessments, whereas students who initially scored low retook the assessment until either their score was "high enough" or they were out of attempts. In these assessments, retaking carried an incentive: a student's attempts were averaged, or the highest or last score was used.

One explanation for these results may be that when a student has only one attempt, the results remain merely results and never become fodder for learning; a single attempt does not allow the student to scaffold his or her knowledge against itself. When offered multiple attempts, however, the student can correct errors on subsequent attempts. Thus, the student returns to the content, gains new understandings, and increases the likelihood of recalling the information. Multiple attempts seem to compel struggling students back into the material.

The likelihood of a student engaging in self-correction may increase with incentives such as averaging assessment scores, counting the highest score, or counting the last score. In each of these cases, the student can earn a higher assessment score by restudying the content material and retaking the assessment. Furthermore, all of this takes place in a low-pressure system: the student determines whether he or she will retest and how much time he or she will spend studying the material.
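
To make the three incentive policies concrete, here is a small hypothetical sketch; the attempt scores are invented for illustration, not taken from the study.

    # Hypothetical sketch of the three retake-incentive policies above;
    # the attempt scores are invented for illustration.
    attempts = [62, 75, 88]  # one student's successive scores on an assessment

    average_policy = sum(attempts) / len(attempts)  # all attempts averaged
    highest_policy = max(attempts)                  # highest score counts
    last_policy = attempts[-1]                      # most recent score counts

    print(average_policy, highest_policy, last_policy)  # 75.0 88 88

Under the averaging policy, every retake still "counts," so a student must weigh the risk of a low attempt; under the highest- or last-score policies, retaking can only help or reflect the most recent study, which lowers the pressure further.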

Conclusions

Some researchers worry that with repetition, students tend to memorize material rather than learn it. One may also question the efficacy of repetition for students who enter a course with higher proficiency in a particular content domain. Care should be taken to address these effects in the assessment's construction; for example, the number of questions should be substantial enough to make memorization difficult, and questions and answer choices should be randomized. Further, the assessment should be broad enough to engage the active participation of even the most versed student: opportunities to reevaluate create a greater reward for the student who will review his or her work and test again. The best assessments compel the student to search the material thoroughly and repeatedly; ideally, they lead the student to memorize details, in other words, to build a formative knowledge process so that the student may utilize that material in the summative task.
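
As one concrete, hypothetical illustration of that construction advice, the sketch below draws each attempt as a random subset of a large question bank and reshuffles the answer choices, so memorizing one attempt's answer key does not pay off. The data structures are invented, not taken from Schoology or any particular LMS.

    # Hypothetical sketch of the construction advice above: draw each
    # attempt from a large question bank and shuffle questions and choices.
    import random

    question_bank = [
        {"prompt": "Choose the correct form: She ___ to school every day.",
         "choices": ["go", "goes", "going", "gone"]},
        # ... a bank large enough to make memorizing answers impractical
    ]

    def build_attempt(bank, n_questions):
        # A random subset of questions, each with its choices reshuffled,
        # so every attempt looks different to the student.
        quiz = []
        for q in random.sample(bank, min(n_questions, len(bank))):
            quiz.append({"prompt": q["prompt"],
                         "choices": random.sample(q["choices"], len(q["choices"]))})
        return quiz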

An additional concern is time. Using an LMS enables one to "game-ify" assessments: create challenging, objective assessments that offer immediate feedback, show student/course rankings, and offer social, emotional, and scholastic rewards for continued play. Prior to this study, researchers found that students would repeat assessments up to 10 times when allowed to, creating additional concerns about time management, especially for completing course material that was not game-ified. Thus, for this study, researchers limited the number of times a student could attempt an assessment to mitigate this issue.

References

Genesee, F. (Ed.). (2004). Educating second language children: The whole child, the whole curriculum, the whole community (11th ed.). Cambridge, United Kingdom: Cambridge University Press.

Klein, H. J., Fan, J., & Preacher, K. J. (2006). The effects of early socialization experiences on content mastery and outcomes: A mediational approach. Journal of Vocational Behavior, 68(1), 96–115. http://quantpsy.org/pubs/klein_fan_preacher_2006.pdf

Monceaux, A. (2013). The effect of multiple content assessments on student achievement and stress level in the hybrid classroom. Proceedings from EDULEARN13 (pp. 6062–6070). Barcelona, Spain.

 


Alex Monceaux teaches English at TIEP at Lamar University and researches formative/summative assessments; the effectiveness of repetition on summative tasks, stress, and content mastery; and using rubrics to coach and evaluate instructional effectiveness. He also serves on the TexTESOL IV Board of Directors as newsletter and journal editor, is a reviewer for several academic journals, and is past president of the Southeast Texas Counseling Association.