New Literacies, Multiliteracies, and Multimodality: Pushing Ourselves and Our Students to Interpret and Present Information in Multiple Dimensions

Though many feel overwhelmed by the increasingly complex and varied presentations of mass media in the 21st century, it’s important to remember that mass media has been evolving rapidly since the 18th century, and each major change was met with similar resistance before people came to embrace and understand the capabilities of these advancements.

Although “written text was [the] pervasive source of knowledge” for “half a millennium or longer,” photography and illustration eventually added an entirely new form of representation to the practice of spreading information (Cope and Kalantzis 361). With illustration, consumers of media no longer had to rely on reading and writing alone to digest or circulate the news; a visual component allowed them to actually see the events being described. With the invention of “analogue telephones and radio,” audio added yet another dimension to the discourse of news, “as sound was made from the same bits and bytes as image and character” (Cope and Kalantzis 361). Individuals could receive reports from the comfort of their own homes without ever having to go out and pick up a newspaper or magazine. However, it wasn’t until the rise of television in the 1950s that “truly integrated multimodal media [emerged],” as this technology allowed communication to combine text, visuals, and audio into a dynamic and immersive experience (Cope and Kalantzis 361). With television, everyday citizens could read, see, and hear their news simultaneously. This was a shock to the system, to be sure, but the next step would transform the world of news consumption even further. When the Internet exploded onto the scene in the mid-1990s, it contributed to a “blurring of boundaries,” as new “spatial and architectonic metaphors associated with site navigation” came into play (Cope and Kalantzis 362). The Internet added the element of interaction to news media. We no longer had to sit back and passively consume information from our televisions; now we could actually work with our screens.

With all these technologies at hand, pedagogy has grown increasingly complicated for educators. When experimenting with new ways to present information and instruct a classroom, we have plentiful opportunities for multimodal interaction at our disposal, yet many teachers routinely stick to traditional lecture and PowerPoint methods. Their reluctance to diversify their instructional strategies comes not from disdain for novel technological capabilities, but from unfamiliarity with them. Because “traditional literacy does not by and large recognize or adequately use the meaning and learning potentials inherent in different modes and the synaesthesia involved in shifting between one mode or another,” many teachers refuse to stray from age-old habits, favoring what is known (“the monomodal formalities of the written language”) over what is foreign (Cope and Kalantzis 363). Nevertheless, if we are to be the best educators possible for our students, we must first seek to understand the ways this generation interprets and receives information and then adapt to these synaesthetic tendencies. After all, “in a very ordinary, material sense, our bodily sensations are holistically integrated” (Cope and Kalantzis 363).

In addition, it simply does not make sense to teach vastly different subjects with the same presentation method over and over again. The beauty of multimodality lies in the fact that “meaning expressed in one mode cannot be directly and completely translated into another” (Cope and Kalantzis 363). Different modes of representing information (written, oral, visual, aural, tactile, gestural, and spatial), or combinations of them, suit some subject matter better than others. While “each of these different modes has the capacity to express many of the same kinds of things, they also have representational potentials that are unique unto themselves” (Cope and Kalantzis 363).

[Image: “What is multimodality?” (“Classwork”)]

Though we may not realize it, we as teachers encounter this type of multimodal representation on a near-daily basis whenever we read an online newspaper or blog. It may seem frightening to incorporate multimodality into teaching, but we’re certainly no strangers to interacting with this type of display in our free time. John Branch’s highly acclaimed interactive article “Snow Fall: The Avalanche at Tunnel Creek,” published in The New York Times, was a six-part narrative that delivered an all-encompassing exploration of the “February 19, 2012 avalanche that occurred at Tunnel Creek” by incorporating video “interviews with every survivor, the families of the deceased, first responders at Tunnel Creek, officials at Stevens Pass and snow-science experts” and “a computer-generated simulation of the avalanche” alongside more traditional news reporting (Branch). Branch could simply have published an article composed of written explanations of the natural tragedy and interviews with the survivors and experts, but this interactive strategy sought to place readers in the event, presenting them with wholly immersive visual recreations and letting them hear from those who had suffered even as they read the article itself. Similarly, Andrew Beck Grace crafted an “interactive documentary” to coincide with the five-year anniversary of the tornado that devastated Tuscaloosa, Alabama in 2011; the piece, titled “After the Storm” and published online by PBS’s Independent Lens, eschews a written narrative altogether, instead seeking to capture the authentic experience of living through a tornado via an online virtual-reality experience and highly engaging audiovisual depictions of the sights and sounds one can expect from such a disaster (Grace). Countless articles have been written about tornadoes, but Grace’s piece stands apart by capturing the sheer terror and devastation of such an event through his clever and inventive use of multimodality. Finally, Jeff Himmelman’s “A Game of Shark and Minnow,” also published in The New York Times, seamlessly blends Himmelman’s in-depth coverage of “a geopolitical struggle that will shape the future of the South China Sea” with photography by Ashley Gilbertson, all unfolding in one unbroken narrative (Himmelman). Unlike Branch’s or Grace’s stories, Himmelman’s article is not interactive; rather, it incorporates visuals directly into the story for readers to encounter as they scroll, as opposed to asking them to stop and watch a video or explore a virtual-reality experience. Each of these approaches (Branch’s, Grace’s, and Himmelman’s) takes an extraordinary real-world event and uses multimodal capabilities to ensure that the story covering it is as unique and layered as the event itself.

Just as we consume material like this on a near-daily basis, so do our students. Rather than lamenting their shorter attention spans or fast-paced attitudes, teachers need to accept that students won’t alter these ingrained patterns; the ball is in our court to find a way to better appeal to them. Not every lesson requires an interactive component or a visual example, but we should seek to capitalize on the modes students are most comfortable learning from. Luckily, thanks to the parallelism of multimodality, “if words don’t make sense” at first, “[a] diagram might, and then the words start to make sense” in retrospect (Cope and Kalantzis 364). Just because we begin instructing on a topic in a certain mode doesn’t mean we have to stay in that mode for the duration of the lesson; as students come to understand information in one mode, interpreting other modes becomes easier. Though it may be harder for teachers of another generation to transition from the “compelling linearity [of a] traditional page of written text” to embracing the “navigational efforts” required when choosing your own “reading path” through more complex digital resources, the very activities that seem too advanced or complicated for some educators are commonplace for students (Cope and Kalantzis 364-365). Rather than forcing students to regress and learn the way we were taught, we must progress and tailor our lessons to their needs. In the end, “written language is not going away; it is just becoming more closely intertwined with the other modes,” and we can use this solid foundation to begin our expedition into multimodality (Cope and Kalantzis 365).

Embracing a multimodal grammar in class is a natural progression of education. One does not have to ditch written and typed instruction altogether and teach solely with visuals and flashy digital interactive activities. No mode is better or worse than another, and the modes shouldn’t be used singularly or interchangeably. Rather, “multimodal meaning [represents] the sum of linguistic, visual, spatial, gestural and audio modes of meaning,” and it is the process of integration that fully captures “the inherent ‘multiness’ of human expression and perception, or synaesthesia” (Cope and Kalantzis 423). We do not interpret the world in one dimension or via one mode, and it does not make sense to only teach one set way.

Furthermore, utilizing multimodality in instruction doesn’t mean embracing a whole new teaching style; it simply means achieving the same instructional goals via new means. Teachers who use multimodality in their presentations “still must analyze an audience, choose a purpose, craft rhetorical appeals, and negotiate many of the same decision-making processes required in print-based writing situations”; likewise, “students must invent, draft, revise, and edit when composing a multimodal text just like they do when composing a written essay” (DePalma and Alexander 183). One of the greatest complaints teachers raise about multimodality is that they don’t feel prepared to grade multimodal assignments. Luckily, current rubrics do not have to be upended or tossed out; in fact, many teachers assert that they “can use the same rubric during the grading process regardless of if students are being asked to complete a single-mode or multiple-mode assignment,” adding requirements here or there only if necessary (DePalma and Alexander 183).

Before requiring multimodal assignments in the classroom, we must better understand our students’ familiarity with the material. Though many of our students are “digital natives” who have “been immersed in technology since their early years,” they may not have been properly prepared to “produce rhetorically sophisticated texts” (DePalma and Alexander 184). Consumption and production are two vastly different activities, and this is a key distinction to understand before diving headfirst into multimodal classroom integration. Therefore, teachers must make an effort to evaluate which “experiences as alphabetic composers might apply to [students’] work as multimodal composers” in order to format assignments to students’ specific needs (DePalma and Alexander 184). And if multimodal material becomes too advanced to navigate or too distracting, one should never feel ashamed about scaling back its use. First and foremost, we should always make sure that “the methods utilized in composition teaching are those best apt to help students develop and transfer the kinds of literacies they will need to thrive in a range of twenty-first century contexts” (DePalma and Alexander 197).

Multimodality may not be the final frontier of technological integration in classrooms, but it is the future of educational instruction, and teachers would be wise to accept and acquaint themselves with this new literacy in order to run the most effective classroom possible.

Works Cited

Branch, John. “Snow Fall: The Avalanche at Tunnel Creek.” The New York Times, 20 December 2012, http://www.nytimes.com/projects/2012/snow-fall/index.html#/?part=tunnel-creek.

“Classwork 7/5: What is multimodality?” Inquiry and the Craft of Argument, WordPress, 2017, https://rampages.us/univ200szabo/2017/07/05/classwork-75-what-is-multimodality/.

Cope, Bill, and Mary Kalantzis. “A Grammar of Multimodality.” The International Journal of Learning, vol. 16, no. 2, 2009, pp. 361-425.

DePalma, Michael J., and Kara P. Alexander. “A Bag Full of Snakes: Negotiating the Challenges of Multimodal Composition.” Computers and Composition, vol. 37, 2015, pp. 182-200.

Grace, Andrew B. “After the Storm.” Independent Lens, PBS, 27 April 2015, http://www.pbs.org/independentlens/interactive/after-the-storm/#/dear-future-disaster-survivor.

Himmelman, Jeff. “A Game of Shark and Minnow.” The New York Times, 27 October 2013, http://www.nytimes.com/newsgraphics/2013/10/27/south-china-sea/index.html.

Published by Zach Gilbert

I am a junior at the University of Nebraska at Omaha majoring in Secondary English Language Arts Education while also pursuing a minor in Communication Studies!

4 thoughts on “New Literacies, Multiliteracies, and Multimodality: Pushing Ourselves and Our Students to Interpret and Present Information in Multiple Dimensions”

  1. “we must first seek to understand the ways this generation interprets and receives information and then adapt to these synaesthetic tendencies”

    Zach,

    What a wonderfully written and inspiring piece. I really appreciate your call to action for teachers to understand how the younger generations are acquiring their information and how to use those channels effectively in the classroom. I’ve constantly heard people talk about how the “new generation” is completely different, unable to learn, always busy with their phones or whatever. But if you go back, people were saying the same things about writing. And then libraries. And then the printing press. And then the radio, and TV, and the internet. Older people are always intimidated by new technologies that they didn’t have. We can’t fall into that trap. We need to realize that new technology makes life easier and people smarter, and we need to encourage it.
    Thanks again for your intriguing thoughts and excellent writing!

    John


    1. John,
      Thank you so much for this response! I wholeheartedly agree that the conversations we’re having today are eerily reminiscent of the talk that has plagued every new technology for centuries, and we’re constantly going through these ups and downs! I wish more people would view technology not as either “good” or “bad,” but simply as a tool, plain and simple! It’s not an inherently negative OR positive presence in the classroom; it all depends on how it’s utilized!


  2. Zach,

    This was a really great post! I loved how you said, “we do not interpret the world in one dimension or via one mode, and it does not make sense to only teach one set way.” I think it is imperative that we, as teachers, strive to help our students understand the world they are living in. Especially with all of the new forms of media, we should teach them how to analyze many different kinds of texts. It’s pointless for us to assume they will only come in contact with traditional written texts, so why should we teach them as if they will?

    Great job!
    Maria Mainelli


    1. Maria,

      Thank you so much for this comment! I’m so glad that we see eye-to-eye on this issue. I truly believe that teachers no longer have a choice about “if” they teach with multimodality; it’s ingrained in our culture, and it’s near impossible to remove its influence from education or news consumption. Better understanding how to employ it in the classroom is key to our success!


