Although receptive (aural) and productive (oral) fluencies are
precursors to reading and writing fluency, it is not uncommon for them
to be marginalized in EFL classrooms. This disparity exists for various
reasons, one being learners’ lack of access to the English language.
However, this inequality should be rapidly disappearing as interactive
technologies give learners and educators greater access to English. If
learners and educators advance alongside 21st-century technologies, the
aural-oral disparity in EFL learning may yet prove to be a plague of the
20th century rather than the 21st.
Watch this Mobile
in Review video to catch a glimpse of how mobile took over
the world in 2011. Mobile learning, known as mlearning, refers to
learning that is available anytime, anywhere through hand-held devices.
Mlearning is undergoing rapid evolution. The pervasive use of
interactive wireless mobile devices by learners has positioned them to
lead rather than follow. The relationship between technology and
teaching is synergistic in that the more interactive the technology, the
more diverse and multifaceted its use as a learning tool (Quinn, 2012).
Mtools fulfill a new learner dependency on easy, rapid, and interactive
access to data, which, in turn, drives up demand for mobile devices and
resources. Rapid development of new interactive apps continues to drive
consumers toward the latest and greatest in such quick succession that
most shudder at the thought of staying on top. But therein lies the crux
of mtools: they are learner-centric, not learner-centered, creating a
paradigm shift in learning pedagogies as the i-generation comes of age
in the 21st century.
SHATTERING THE SILENCE
An implicit barrier in virtual learning is building a sense of
community: the lack of face-to-face (f2f) contact deemphasizes
experience and overemphasizes description (Bonk, 2011). Choreographed
courses are being swiftly overrun by the rapid explosion of interactive
mtools, which, used alongside experiential, social constructionist
pedagogy, may well shatter the psychological barrier between f2f and
virtual learning by increasing rather than decreasing interaction and
community building.
Twentieth-century education has relied on institutional or instructor
control over course design and delivery, whether f2f or virtual.
Constructivist pedagogy, which is learner-centered, supports the
co-construction of learning to more closely align education with the
learner (Horton, 2012); however, mlearning tools are learner-centric,
shifting learning from controlled ingestion to uncontrolled creation.
The 21st century is set to make the learner the creator. Industry
leaders in interactive mtools are app creators and smart mobile device
companies. These devices will soon be loaded with quad-core processors,
such as the Nvidia Tegra 3 chip, and will pair that processing power
with cloud platforms, further increasing interactivity by allowing users
to sync and manage their content and interactions.
Mdevices equipped with megapixel-resolution cameras, used alongside free
interactive photo-voice apps such as Yodio and commercial ones like
VoiceThread, give learners an opportunity to comment, discuss, and
collaborate on videos and images using webcams, voice, text, and
freehand drawings. Their uses are as endless as your imagination;
combine this with new dual-interaction apps like Apple's FaceTime that
allow users to video chat and host a meeting simultaneously, and voilà,
you have a speaking class in the cloud! Online tech gurus speculate that
very soon, maybe as early as this year, mcreate apps will be released.
As an ESL educator immersed primarily in secondary school speaking
classes, I find that teaching speaking is getting really exciting!
GOING SONIC
The world is facing a huge increase in the number of students as the
world population moves from six billion toward the nine billion the
United Nations predicts by 2050 (Bonk, 2011). This demographic shift
means that learning opportunities must become available on a much
grander scale and across new platforms. To meet these growing demands,
education must shift rapidly from a passive knowledge-transfer
epistemology to an adaptive, interactive one whereby learning is no
longer learner-centered but learner-centric. Mlearning is well
positioned to develop the first global learner-centric platform.
LISTENING TO SPEAK: SPEAKING TO LISTEN
Mlearning tools are rapidly paving the way for interactive listening and
speaking. Interactive transcripts are already available for TED Talks,
allowing listeners to access spoken and written discourse with
simultaneous translations between L1 and L2. EReaders already highlight
text and let readers adjust the reading speed of natural-voice
text-to-speech (TTS). It is just a matter of time before translation
tools such as ImTranslator become fundamental mtools: speak, listen, and
translate at your own speed, anywhere, anytime! Such fluidity gives
learners opportunities to create individualized learning that best
reflects their needs, building on an autodidactic, learner-centric
pedagogy, a pressing necessity for educating the coming population.
Android tablets and iPads are set to rapidly leverage mlearning, as most
learning management systems (LMSs) already have smart mobile
capabilities. According to tech analysts, LMSs are swiftly changing to
support learner interface control, learner input, and plugin apps for
external media, including seamless integration of social media and
creation apps. Moodle, an open-source LMS, already offers plugins for
most of these features in its 2.0 and 2.1 versions, and because the
system is open source, institutions can have code written that aligns
directly with their needs. Total integration of socially mediated,
learner-negotiated material design is therefore well set to explode.
MediaWiki powers open-source wikis such as Wikipedia, but a good example
of an institution co-constructing learning with MediaWiki is the UBC
wiki. And now you can embed commercial tools such as Cisco's WebEx or
Panopto and take synchronous and asynchronous interaction, listening,
and speaking to a whole new level. (Skype offers an affordable option
but does not, at this time, support mvideo for group interactions.)
Embedding web f2f interaction tools in LMSs also keeps interaction
between learners and educators more private.
Thus, in this framework, the question is not where listening and
speaking are situated in the 21st century, but where they are not. That
this digital generation has no perception of life without technology may
amaze digital immigrants. Here are a couple of unadulterated excerpts
from young digital-age EFL adolescents, taken from a January 2012
writing examination:
Technology takes many advantages for us, so we can not live
without technology. When you go out, there are many people have a smart
phone, compairs with past. It is a huge change, smart phone has a lot of
functions, it can replace many things which is important for us in
past. In the past, we can surf internet at home only, but now if you
have a smart phone, you can surf internet everywhere, you can take
photo, video, go to YOUtube, find information, send SMS with your smart
phone. Sometimes, I even think that smart phone look like Hong Kong
people’s girl friend. ―Chow Hui Tak
Hong Kong people can not lose technology or smart phone because
technology or phone is our lives. One day if technology or phone lose
in this place, I think we will die because technology is our life, our
life is technology. Many people have smart phone now. Like I-phone, HTC,
and Galaxy. All smart phone can download apps. Some apps are game.
Technology and games are very popular in Hong Kong because there have
our childhood and we cannot lose them. ―Leung Ho Yi
The oral and aural narrating traditions that evolved to retain
collective knowledge and experience have long cultivated language
learning. Moving toward haptic integration for a total language-learning
experience is therefore not a radical new idea but one as deeply rooted
in our antediluvian past as it is in the present, so it is not
surprising that haptics are increasingly being drawn onto the
interactive stage. It will be interesting to see whether Samsung's
haptic transparent smart window is a precursor to flexible smart-haptic
3D mobile screens. If so, how will moving into a 3D mtool environment
change speaking and listening interactions in the near future, and how
will intentional programming, spearheaded by Dr. Charles Simonyi, change
cross-lingual communication? Silicon m-educators may seem like science
fiction to carbon educators, but it won't be the first time science
fiction has defined a new reality. Compound this with the discovery and
manipulation of the language of life (genetic engineering), and who
knows how interactive communication will be shaped. What is obvious is
that teaching speaking and listening with mtools is rapidly becoming
authentic, dynamic, and learner-centric.
REFERENCES
Bonk, C. (2011). The world is open. San Francisco, CA: Jossey-Bass.
Horton, W. (2012). E-learning by design (2nd ed.). San Francisco, CA: Pfeiffer.
Quinn, C. (2012). The mobile academy: mLearning for higher education. San Francisco, CA: Jossey-Bass.
Ruth M. Smith has taught English for 25 years and has
been working in an EFL context for the past 2 years. She is currently an
instructor at the Man Kwan Educational Organisation: The Jockey Club
Edu Young College in Tin Shui Wai and will be completing her master’s in
TESOL in April 2012.