Multimodality

Example of multimodality: A televised weather forecast (medium) involves understanding spoken language, written language, weather-specific language (such as temperature scales), geography, and symbols (clouds, sun, rain, etc.).

Multimodality is the application of multiple literacies within one medium. Multiple literacies or "modes" contribute to an audience's understanding of a composition. [1] Everything from the placement of images to the organization of the content to the method of delivery creates meaning. This reflects a shift away from isolated text as the primary source of communication and toward the more frequent use of images in the digital age. [2] Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources used to compose messages. [3]


While all communication, literacy, and composing practices are and always have been multimodal, [4] academic and scientific attention to the phenomenon only started gaining momentum in the 1960s. Work by Roland Barthes and others has led to a broad range of disciplinarily distinct approaches. More recently, rhetoric and composition instructors have included multimodality in their coursework. In its position statement Understanding and Teaching Writing: Guiding Principles, the National Council of Teachers of English states that "'writing' ranges broadly from written language (such as that used in this statement), to graphics, to mathematical notation." [5]

Definition

Although discussions of multimodality refer to both medium and mode, these terms are not synonymous. Their scopes may, however, overlap, depending on how precisely (or loosely) individual authors and traditions use the terms.

Gunther Kress's scholarship on multimodality is canonical in social semiotic approaches and has considerable influence on many other approaches, such as writing studies. Kress defines 'mode' in two ways. First, a mode is a socially and culturally shaped resource for making meaning; images, pieces of writing, and speech patterns are all examples of modes. [6] Second, modes are semiotic resources, shaped both by their intrinsic characteristics and potential within their medium and by what their culture or society requires of them. [7]

Thus, every mode has distinct historical and cultural potentials and limitations for making meaning. [8] For example, if we broke writing down into its modal resources, we would have grammar, vocabulary, and graphic "resources" as the acting modes. Graphic resources can be further broken down into font size, type, color, spacing within paragraphs, and so on. However, these resources are not deterministic. Instead, modes shape and are shaped by the systems in which they participate. Modes may aggregate into multimodal ensembles and be shaped over time into familiar cultural forms. A good example is film, which combines visual modes (in setting and attire), modes of dramatic action and speech, and modes of music or other sound. Studies of multimodal work in this field include van Leeuwen; [9] Bateman and Schmidt; [10] and Burn and Parker's theory of the Kineikonic Mode. [11]

In social semiotic accounts, a medium is the substance in which meaning is realized and through which it becomes available to others. Mediums include video, image, text, audio, and so on. Socially, a medium includes semiotic, sociocultural, and technological practices; examples include film, newspapers, billboards, radio, television, and the classroom. Multimodality also makes use of the electronic medium by creating digital modes through the interlacing of image, writing, layout, speech, and video. Mediums have thus become modes of delivery that take current and future contexts into account.

History

Multimodality (as a phenomenon) has received increasing theoretical attention throughout the history of communication. Indeed, the phenomenon has been studied at least since the 4th century BC, when classical rhetoricians alluded to it with their emphasis on voice, gesture, and expression in public speaking. [12] [13] However, the term was not defined with significance until the 20th century, when a rapid rise in technology created many new modes of presentation. Multimodality has since become standard in the 21st century, applying to various network-based forms such as art, literature, social media, and advertising. The monomodality, or singular mode, that used to define the presentation of text on a page has been replaced by more complex and integrated layouts. John A. Bateman says in his book Multimodality and Genre, "Nowadays… text is just one strand in a complex presentational form that seamlessly incorporates visual aspect 'around,' and sometimes even instead of, the text itself." [14] Multimodality has quickly become "the normal state of human communication." [4]

Expressionism

During the 1960s and 1970s, many writers looked to photography, film, and audiotape recordings in order to discover new ideas about composing. [15] This led to a renewed focus on sensory, self-expressive composing known as expressionism. Expressionist ways of thinking encouraged writers to find their voice outside of language by placing it in a visual, oral, spatial, or temporal medium. [16] Donald Murray, who is often linked to expressionist methods of teaching writing, once said, "As writers it is important that we move out from that which is within us to what we see, feel, hear, smell, and taste of the world around us. A writer is always making use of experience." Murray instructed his writing students to "see themselves as cameras" by writing down every single visual observation they made for one hour. [17] Expressionist thought emphasized personal growth and linked the art of writing with all visual art by calling both a type of composition. By making writing the result of a sensory experience, expressionists also defined writing as a multisensory experience and asked for it to have the freedom to be composed across all modes, tailored for all five senses.

Cognitive developments

During the 1970s and 1980s, multimodality was further developed through cognitive research about learning. Jason Palmeri cites researchers such as James Berlin and Joseph Harris as being important to this development; Berlin and Harris studied alphabetic writing and how its composition compared to art, music, and other forms of creativity. [18] Their research took a cognitive approach, studying how writers thought about and planned their writing process. James Berlin declared that the process of composing writing could be directly compared to that of designing images and sound. [19] Furthermore, Joseph Harris pointed out that alphabetic writing is the result of multimodal cognition: writers often conceptualize their work by non-alphabetic means, through visual imagery, music, and kinesthetic feelings. [20] This idea was reflected in the popular research of Neil D. Fleming, whose model is more commonly known as the neuro-linguistic learning styles. Fleming's three styles of auditory, kinesthetic, and visual learning helped to explain the modes in which people were best able to learn, create, and interpret meaning. Other researchers, such as Linda Flower and John R. Hayes, theorized that alphabetic writing, though a principal modality, sometimes could not convey the non-alphabetic ideas a writer wished to express. [21]

Audience

Every text is composed for a defined audience, and its author makes rhetorical decisions to improve that audience's reception of the text. In this same manner, multimodality has evolved into a sophisticated way to appeal to a text's audience. Relying on the canons of rhetoric in a different way than before, multimodal texts have the ability to address a larger, yet more focused, intended audience. Multimodality does more than solicit an audience; its effects are embedded in an audience's semiotic, generic, and technological understanding.

Psychological effects

The appearance of multimodality, at its most basic level, can change the way an audience perceives information. The most basic understanding of language comes via semiotics – the association between words and symbols. A multimodal text changes its semiotic effect by placing words with preconceived meanings in a new context, whether that context is audio, visual, or digital. This in turn creates a new, foundationally different meaning for an audience. Bezemer and Kress, two scholars on multimodality and semiotics, argue that students understand information differently when text is delivered in conjunction with a secondary medium, such as image or sound, than when it is presented in alphanumeric format only, because the combination draws a viewer's attention to "both the originating site and the site of recontextualization". [22] Meaning is moved from one medium to the next, which requires the audience to redefine their semiotic connections. Recontextualizing an original text within other mediums creates a different sense of understanding for the audience, and this new type of learning can be controlled by the types of media used.

Multimodality can also be used to associate a text with a specific argumentative purpose, e.g., to state facts, make a definition, cause a value judgment, or make a policy decision. Jeanne Fahnestock and Marie Secor, professors at the University of Maryland and the Pennsylvania State University, labeled the fulfillment of these purposes stases. [23] A text's stasis can be altered by multimodality, especially when several mediums are juxtaposed to create an individualized experience or meaning. For example, an argument that mainly defines a concept is understood as arguing in the stasis of definition; however, it can also be assigned a stasis of value if the way the definition is delivered equips writers to evaluate a concept, or judge whether something is good or bad. If the text is interactive, the audience is able to create its own meaning from the perspective the multimodal text provides. By emphasizing different stases through the use of different modes, writers are able to further engage their audience in creating comprehension.

Genre effects

Multimodality also obscures an audience's concept of genre by creating gray areas out of what was once black and white. Carolyn R. Miller, a distinguished professor of rhetoric and technical communication at North Carolina State University, observed in her genre analysis of the Weblog how genre shifted with the invention of blogs, stating that "there is strong agreement on the central features that make a blog a blog." Miller defines blogs on the basis of their reverse chronology, frequent updating, and combination of links with personal commentary. [24] However, the central features of blogs are obscured when considering multimodal texts: some features, such as the ability for posts to be independent of each other, are absent, while others are present. This creates a situation in which the genre of multimodal texts is impossible to pin down; rather, the genre is dynamic, evolutionary, and ever-changing.

The delivery of new texts has radically changed along with technological influence. Composition now consists of the anticipation of future remediation. Writers think about the type of audience a text will be written for, and anticipate how that text might be reformed in the future. Jim Ridolfo coined the term rhetorical velocity to explain a conscious concern for the distance, speed, time, and travel it will take for a third party to rewrite an original composition. [25] The use of recomposition allows for an audience to be involved in a public conversation, adding their own intentionality to the original product. This new method of editing and remediation is attributed to the evolution of digital text and publication, giving technology an important role in writing and composition.

Technological effects

Multimodality has evolved along with technology. This evolution has created a new concept of writing as a collaborative context that keeps reader and writer in relationship. The concept of reading has also changed under the influence of technology, given the desire for quick transmission of information. In reference to the influence of multimodality on genre and technology, Professor Anne Frances Wysocki expands on how reading as an action has changed in part because of technology reform: "These various technologies offer perspectives for considering and changing approaches we have inherited to composing and interpreting pages....". [26] Along with the interconnectedness of media, computer-based technologies are designed to make new texts possible, influencing rhetorical delivery and audience.

Education

Multimodality in the 21st century has caused educational institutions to consider changing traditional aspects of classroom education. With the rise of digital and Internet literacy, new modes of communication are needed in the classroom in addition to print, from visual texts to digital e-books. Rather than replacing traditional literacy values, multimodality augments and increases literacy for educational communities by introducing new forms. According to Miller and McVee, authors of Multimodal Composing in Classrooms, "These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated." [27] The learning outcomes of the classroom stay the same, including – but not limited to – reading, writing, and language skills. However, these learning outcomes are now presented in new forms, as multimodality in the classroom suggests a shift from traditional media, such as paper-based text, to more modern media, such as screen-based texts. The choice to integrate multimodal forms in the classroom is still controversial within educational communities. The idea of learning has changed over the years and now, some argue, must adapt to the personal and affective needs of new students. For classroom communities to be legitimately multimodal, all members of the community must share expectations about what can be done through integration, requiring a "shift in many educators' thinking about what constitutes literacy teaching and learning in a world no longer bound by print text." [28]

Multiliteracy

Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for our definition of literacy to change to better accommodate these new technologies, which include tools such as text messaging, social media, and blogs. [29] These modes of communication often employ multiple mediums simultaneously, such as audio, video, pictures, and animation, making their content multimodal.

The coming together of these different mediums is what is called content convergence, which has become a cornerstone of multimodal theory. [30] Within modern digital discourse, content has become accessible to many, remixable, and easily spreadable, allowing ideas and information to be consumed, edited, and improved by the general public. [30] Wikipedia is one example: the platform allows free consumption and authorship of its content, which in turn facilitates the spread of knowledge through the efforts of a large community. It creates a space in which authorship has become collaborative and the product of that authorship is improved by the collaboration. As the distribution of information has grown through this process of content convergence, it has become necessary for our understanding of literacy to evolve with it. [30]

The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve. [31] While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether academic, social, or for entertainment purposes, can now be accessed in a variety of ways and edited by several individuals on the Internet. In this way, texts that would typically be concrete become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only way to communicate and interpret messages. [31] Many mediums can be used separately and individually, and combining and repurposing one mode of communication for another has contributed to the evolution of different literacies.

Communication is spread across a medium through content convergence, as when a blog post is accompanied by images and an embedded video. This idea of combining mediums gives new meaning to the concept of translating a message. The combination of varying forms of media allows content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction. [31] As information changes from one mode to the next, our comprehension of its message is attributed to multiliteracy. Xiaolo Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, these can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and, lastly, the application of those textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout the length of the two courses, with the multimodal course concluding with a higher learning success rate and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios. [32]

Classroom literacy

Multimodality in classrooms has brought about the need for an evolving definition of literacy. According to Gunther Kress, a popular theorist of multimodality, literacy usually refers to the combination of letters and words to make messages and meaning and can often be attached to other words in order to express knowledge of the separate fields, such as visual- or computer-literacy. However, as multimodality becomes more common, not only in classrooms, but in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. It has become more than just reading and writing, and now includes visual, technological, and social uses among others. [31]

Georgia Tech's writing and communication program created a definition of multimodality based on the acronym WOVEN. [33] The acronym describes how communication can be written, oral, visual, electronic, and nonverbal. Communication has multiple modes that can work together to create meaning and understanding. The goal of the program is to ensure students are able to communicate effectively in their everyday lives using various modes and media. [33]

As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or posting on social media. [34] This rise in computer-mediated communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment. [34] In the classroom setting, however, multimodality is more than just combining multiple technologies; rather, it is creating meaning through the integration of multiple modes. Students learn through a combination of these modes, including sound, gestures, speech, images, and text. For example, the digital components of lessons often include pictures, videos, and sound bites as well as text to help students grasp a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with text alone, as the printed word is only one of many modes students must learn and use. [31] [34] [35]

The application of visual literacy in the English classroom can be traced back to 1946, when the instructor's edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15). [36] During the 1960s, a couple of reports issued by the National Council of Teachers of English suggested using television and other mass media, such as newspapers, magazines, radio, motion pictures, and comic books, in the English classroom. The situation is similar in postsecondary writing instruction. Since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks, such as James McCrimmon's Writing with a Purpose. [36]

Higher education

Colleges and universities around the world are beginning to use multimodal assignments to adapt to the technology currently available. Assigning multimodal work also requires professors to learn how to teach multimodal literacy. Implementing multimodality in higher education is being researched to find out the best way to teach and assign multimodal tasks. [35]

Multimodality in the college setting can be seen in an article by Teresa Morell, in which she discusses how teaching and learning elicit meaning through modes such as language, speaking, writing, gesturing, and space. The study observes an instructor who conducts a multimodal group activity with students. Previous studies had observed different classes using modes such as gestures, classroom space, and PowerPoint slides. The current study observes an instructor's combined use of multiple modes in teaching to see its effect on student participation and conceptual understanding. Morell explains the different spaces of the classroom, including the authoritative space, interactional space, and personal space. The analysis shows how an instructor's multimodal choices affect student participation and understanding. On average the instructor used three to four modes, most often gaze, gesture, and speech. He engaged students by having them formulate a group definition of cultural stereotypes. It was found that those who are learning a second language depend on more than just the spoken and written word for conceptual learning, meaning that multimodal education has benefits. [37] [35]

Multimodal assignments involve many aspects other than written words, which may be beyond an instructor's training. Educators have been taught how to grade traditional assignments, but not those that utilize links, photos, videos, or other modes. Dawn Lombardi is a college professor who admitted to her students that she was a bit "technologically challenged" when assigning a multimodal essay using graphics. The most difficult part of these assignments is the assessment. Educators struggle to grade them because the meaning conveyed may not be what the student intended. They must return to the basics of teaching to determine what they want their students to learn, achieve, and demonstrate in order to create criteria for multimodal tasks. Lombardi created grading criteria based on creativity, context, substance, process, and collaboration, which were presented to the students before they began the essay. [35]

Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. [36] This pedagogical practice of visual analysis did not focus on how visuals, including images, layout, or graphics, are combined or organized to make meaning. [36]

In the following years, the application of visuals in the composition classroom was continually explored, and the emphasis shifted to the visual features of composition (margins, page layout, font, and size) and its relationship to graphic design, web pages, and digital texts, which involve images, layout, color, font, and the arrangement of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs. [36]

Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal, task-based framework in which students are encouraged to use diverse modes and materials—print texts, digital media, videotaped performances, old photographs—and any combinations of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products. In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work. [38]

Multimodal communities

Multimodality has significance within varying communities, such as private, public, educational, and social communities. Because of multimodality, the private domain is evolving into a public domain in which certain communities function. Because social environments and multimodality mutually influence each other, each community is evolving in its own way. This evolution is evident in language use, as discussed by Grifoni, D'Ulizia, and Ferri in their work. [39]

Cultural multimodality

Based on shared representations, communities decide through social interaction how modes are commonly understood. In the same way, these assumptions and determinations of how multimodality functions can actually create new cultural and social identities. For example, Bezemer and Kress define modes as "socially and culturally shaped resource[s] for making meaning." According to Bezemer, "In order for something to 'be a mode,' there needs to be a shared cultural sense within a community of a set of resources and how these can be organized to realize meaning." [40] Cultures that pull from different or similar resources of knowledge, understanding, and representation will communicate through different or similar modes. [22] Signs, for instance, are visual modes of communication determined by our daily necessities.

In her dissertation, Elizabeth J. Fleitz, a PhD in English with a concentration in rhetoric and writing from Bowling Green State University, argues that the cookbook, which she describes as inherently multimodal, is an important feminist rhetorical text. [41] According to Fleitz, women were able to form relationships with other women by communicating through socially acceptable literature like cookbooks: "As long as the woman fulfills her gender role, little attention is paid to the increasing amount of power she gains in both the private and public spheres." Women who would otherwise have been committed to staying at home could become published authors, gaining a voice in a phallogocentric society without being viewed as threats. Women revised and adapted different modes of writing to fit their own needs. According to Cinthia Gannett, author of "Gender and the Journal," diary writing, which evolved from men's journal writing, has "integrate[d] and confirm[ed] women's perceptions of domestic, social, and spiritual life, and invoke a sense of self." [42] It is these methods of remediation that characterize women's literature as multimodal. The recipes inside the cookbooks also qualify as multimodal. Recipes delivered through any medium, whether a cookbook or a blog, can be considered multimodal because of the "interaction between body, experience, knowledge, and memory, multimodal literacies" that all relate to one another to create our understanding of the recipe. Recipe exchanging is an opportunity for networking and social interaction. According to Fleitz, "This interaction is undeniably multimodal, as this network 'makes do' with alternative forms of communication outside dominant discursive methods, in order to further and promote women's social and political goals." Cookbooks are only a single example of the capacity of multimodality to build community identities, but they aptly demonstrate its nuances. Multimodality does not just encompass tangible components, such as text, images, and sound; it also draws from experiences, prior knowledge, and cultural understanding.

Another change that has occurred due to the shift from the private environment to the public is audience construction. [43] In the privacy of the home, the family generally targets a specific audience: family members or friends. Once the photographs become public, an entirely new audience is addressed. As Pauwels notes, "the audience may be ignored, warned and offered apologies for the trivial content, directly addressed relating to personal stories, or greeted as highly appreciated publics that need to be entertained and invited to provide feedback." [43]

Multimodal academic writing practices

In everyday life, multimodal construction and communication of meaning is ubiquitous. However, academic writing has maintained an overwhelming dominance of the linguistic resource up to the present (Blanca, 2015). The case for opening academic writing to other possible forms lies in the conviction that the semiotic resources used in the processes of academic inquiry and communication have an impact on the findings (Sousanis, 2015), since both processes are linked in the epistemic potential of writing, understood here in multimodal terms. The idea, therefore, is not to "embellish" academic discourse with illustrative visual resources, but to enable other ways of thinking, new associations, and ultimately new knowledge arising from the interweaving of various verbal and nonverbal modes. The strategic use of page design, the juxtaposition of text in columns or of text and image, and the use of typography (in type, size, color, etc.) are just a few examples of how the semiotic potential of the genres of academic circulation can be exploited. This is linked to the possibility of enriching the forms of academic writing by appealing to non-linear as well as linear textual development, and by setting image and text in tension, with their infinite possibilities for creating meaning (Mussetta, Siragusa & Vottero, 2020; [44] Lamela Adó & Mussetta, 2020; [45] Mussetta, Lamela Adó & Peixoto, 2021 [46] ).

Multimodal fiction

There is now an increasing number of fictional narratives that explore and graphically exploit the text and the materiality of the book in its traditional format for the construction of meaning. These are what some critics call multimodal novels (Hallet 2009, p. 129; Gibbons 2012b, p. 421, among others), though they are also called visual or hybrid novels (Luke 2013, p. 21; Reynolds 1998, p. 169; Sadokierski 2010, p. 7). These narratives include a variety of semiotic resources and modes, ranging from the strategic use of different typographies and blank spaces to the inclusion of drawings, photos, maps, and diagrams that do not correspond to the usual notion of illustration but are an indissoluble part of the plot, with specific functions in their contribution of meaning to the work in its multiple combinations (Mussetta 2014; [47] Mussetta, 2017a; [48] Mussetta, 2017b; [49] Mussetta 2017c; [50] Mussetta, 2020 [51] ).

Communication in business

In the business sector, multimodality creates opportunities for both internal and external improvements in efficiency. Similar to shifts in education toward both textual and visual learning elements, multimodality allows businesses to communicate more effectively. According to Vala Afshar, this transition first started to occur in the 1980s as "technology had become an essential part of business." This level of communication has been amplified by the integration of digital media and tools during the 21st century. [52]

Internally, businesses use multimodal platforms for analytical and systemic purposes, among others. Through multimodality, a company enhances its productivity while also creating transparency for management. Improved employee performance from these practices can correlate with ongoing interactive training and intuitive digital tools. [53]

Multimodality is used externally to increase customer satisfaction by providing multiple platforms during one interaction. With the popularity of text, chat, and social media in the 21st century, most businesses attempt to promote cross-channel engagement. Businesses aim to improve the customer experience and solve any potential issue or inquiry quickly. A company's goal with external multimodality centers on better real-time communication that makes customer service more efficient. [54]

Social multimodality

One shift caused by multi-literate environments is that private-sphere texts are being made more public. The private sphere is described as an environment in which people have a sense of personal authority and are distanced from institutions, such as the government. The family and home are considered to be a part of the private sphere. Family photographs are an example of multimodality in this sphere. Families take pictures (sometimes captioning them) and compile them in albums that are generally meant to be displayed to other family members or audiences that the family allows. These once private albums are entering the public environment of the Internet more often due to the rapid development and adoption of technology. [43]

According to Luc Pauwels, a professor of communication studies at the University of Antwerp, Belgium, "the multimedia context of the Web provides private image makers and storytellers with an increasingly flexible medium for the construction and dissemination of fact and fiction about their lives." [43] These relatively new website platforms allow families to manipulate photographs and add text, sound, and other design elements. [43] By using these various modes, families can construct a story of their lives that is presented to a potentially universal audience. Pauwels states that "digitized (and possibly digitally 'adjusted') family snapshots...may reveal more about the immaterial side of family culture: the values, beliefs, and aspirations of a group of people." [43] This immaterial side of the family is better demonstrated through the use of multimodality on the Web because certain events and photographs can take precedence over others based on how they are organized on the site, [43] and other visual or audio components can aid in evoking a message.

Similar to the evolution of family photography into the digital family album is the evolution of the diary into the personal weblog. As North Carolina State University professors Carolyn Miller and Dawn Shepherd state, "the weblog phenomenon raises a number of rhetorical issues,… [such as] the peculiar intersection of the public and private that weblogs seem to invite." [24] Bloggers have the opportunity to communicate personal material in a public space, using words, images, sounds, etc. As described in the example above, people can create narratives of their lives in this expanding public community. Miller and Shepherd say that "validation increasingly comes through mediation, that is, from the access and attention and intensification that media provide." [24] Bloggers can create a "real" experience for their audience(s) because of the immediacy of the Internet. A "real" experience refers to "perspectival reality, anchored in the personality of the blogger." [24]

Digital applications

Information is presented through the design of digital media, engaging with multimedia to offer a multimodal principle of composition. Standard words and pictures can be presented as moving images and speech in order to enhance the meaning of words. Joddy Murray wrote in "Composing Multimodality" that both discursive rhetoric and non-discursive rhetoric should be examined in order to see the modes and media used to create such composition. Murray also notes the benefits of multimodality, which lends itself to "acknowledge and build into our writing processes the importance of emotions in textual production, consumption, and distribution; encourage digital literacy as well as nondigital literacy in textual practice." [2] Murray shows a new way of thinking about composition, allowing images to be "sensuous and emotional" symbols of what they represent, rather than focusing so much on the "conceptual and abstract."

Drawing on Richard Lanham's The Electronic Word: Democracy, Technology, and the Arts, Murray writes that "discursive text is in the center of everything we do," going on to say that students coexist in a world that "includes blogs, podcasts, modular community web spaces, cell phone messaging…", and urging that students be taught how to compose with rhetorical minds in these new, and not-so-new, texts. Cultural changes, Lanham suggests, refocus writing theory toward the image, a shift visible in the changing alphabet-to-icon ratio of electronic writing. A prime example can be seen in Apple's iPhone, on which emojis appear as icons on a separate keyboard to convey what words would once have delivered. [55] Another example is Prezi. Often likened to Microsoft PowerPoint, Prezi is a cloud-based presentation application that allows users to create text, embed video, and make visually aesthetic projects. Prezi's presentations zoom the eye in, out, up, and down to create a multi-dimensional appeal. Users also employ different media within this medium, which is itself unique.

Introduction of the Internet

In the 1990s, multimodality grew in scope with the spread of the Internet, personal computers, and other digital technologies. The literacy of the emerging generation changed, becoming accustomed to text circulated in pieces, informally, and across multiple mediums of image, color, and sound. The change represented a fundamental shift in how writing was presented: from print-based to screen-based. [56] Literacy evolved so that students arrived in classrooms knowledgeable about video, graphics, and computer skills, but not alphabetic writing. Educators had to change their teaching practices to include multimodal lessons in order to help students achieve success in writing for the new millennium.


Accessing the audience

In the public sphere, multimedia popularly refers to implementations of graphics in ads, animations and sounds in commercials, and areas of overlap between them. One thought process behind this use of multimedia is that, through technology, a larger audience can be reached through the consumption of different technological mediums, or, in some cases, as the Kaiser Family Foundation reported in 2010, multimedia can "help drive increased consumption".[citation needed] This is a drastic change from five years earlier: "8–18 year olds devote an average of 7 hours and 38 minutes to using media across a typical day (more than 53 hours a week)."[citation needed] With the possibility of multi-platform social media and digital advertising campaigns also come new regulations from the Federal Trade Commission (FTC) on how advertisers can communicate with their consumers via social networks. [58] Because multimodal tools are often tied to social networks, it is important to gauge the consumer when applying these fair practices. Companies like Burberry Group PLC and Lacoste S.A. (the fashion houses for Burberry and Lacoste, respectively) engage their consumers via the popular blogging site Tumblr; Publix Super Markets, Inc. and Jeep engage their consumers via Twitter; celebrities and athletic teams and athletes, such as Selena Gomez and the Miami Heat, engage their audiences via Facebook through fan pages. These examples do not limit the presence of these specific entities to a single medium, but offer a wide variety of what is found for each respective source.

Advertising

Multimedia advertising is the result of animation and graphic design used to sell products or services. It takes various forms, including videos, online advertising, DVDs, and CDs. These outlets give companies the ability to increase their customer base and are a necessary contribution to the marketing of products and services. Online advertising, for instance, is a new-wave use of multimedia that provides many benefits to online companies and traditional corporations alike. New technologies have brought about an evolution of multimedia in advertising and a shift away from traditional techniques. Multimedia advertising has become significantly more important to companies' effectiveness in marketing and selling products and services. Corporate advertising concerns itself with the idea that "Companies are likely to appeal to a broader audience and increase sales through search engine optimization, extensive keyword research, and strategic linking." [59] The concept behind an advertising platform can span multiple mediums yet, at its core, be centered on the same scheme.

Coca-Cola's advertising logo for their 2009 Open Happiness campaign

Coca-Cola ran an overarching "Open Happiness" campaign across multiple media platforms, including print ads, [60] web ads, and television commercials. [61] The purpose of this central campaign was to communicate a common message over multiple platforms to further encourage an audience to buy into a reiterated message. The strength of such multimedia campaigns is that they employ all available mediums, any of which could prove successful with a different audience member. [61]

Social media

Social media and digital platforms are ubiquitous in today's everyday life. [62] These platforms do not operate solely based on their original makeup; they utilize media from other technologies and tools to add multidimensionality to what will be created on their own platform. These added modal features create a more interactive experience for the user.

Prior to Web 2.0's emergence, most websites listed information with little to no communication with the reader. [63] Within Web 2.0, social media and digital platforms are used in everyday life by businesses, law offices, advertisers, and others. Digital platforms combine mediums with other technologies and tools to further enhance and improve what is created on the platform itself. [64]

Hashtags (#topic) and user tags (@username) make use of metadata to track "trending" topics and to alert users when their name is used within a post on a social media site. Used by various social media websites (most notably Twitter and Facebook), these features add internal linkage between users and themes. [65] [66] [67] The characteristics of a multimodal feature can be seen in the status update option on Facebook, which combines the affordances of personal blogs, Twitter, instant messaging, and texting in a single feature. As of 2013, the status update button prompts a user with "What's on your mind?", a change from 2007's "What are you doing right now?" This change was made by Facebook to give the user greater flexibility. [68] This multimodal feature allows a user to add text, video, images, and links, and to tag other users. Twitter's microblogging platform, which limits a single message to 140 characters, allows users to link to other users and websites and to attach pictures. This new medium affects the literacy practices of the current generation by condensing the conversational context of the Internet into fewer characters while encapsulating several media.
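How such tags work can be illustrated with a minimal, hypothetical sketch (not any platform's actual implementation; the post texts, function names, and patterns below are invented for illustration): hashtag and mention tokens are parsed out of a post's text as metadata, then aggregated across posts to approximate "trending" topics or to notify the users who were mentioned.

```python
import re
from collections import Counter

# Simple token patterns for hashtags (#topic) and user tags (@username).
HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")

def index_post(text):
    """Return the hashtag and mention tokens found in one post."""
    tags = [t.lower() for t in HASHTAG.findall(text)]
    mentions = MENTION.findall(text)
    return tags, mentions

# Aggregating tags across many posts approximates "trending" detection;
# mentions would trigger a notification to each named user.
posts = [
    "Great talk on #multimodality today!",
    "Reading Kress on #multimodality and #semiotics with @alex",
]
trending = Counter()
notify = []
for post in posts:
    tags, mentions = index_post(post)
    trending.update(tags)
    notify.extend(mentions)

print(trending.most_common(2))  # [('multimodality', 2), ('semiotics', 1)]
print(notify)                   # ['alex']
```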

Other examples include the blog, a term coined in 1999 as a contraction of "web log"; the foundation of blogging is often attributed to various people in the mid-to-late 1990s. Within the realm of blogging, videos, images, and other media are often added to otherwise text-only entries in order to generate a more multifaceted read. [69]

Gaming

One current digital application of multimodality in the field of education has been developed by James Gee through his approach to effective learning through video games. Gee contends that there is a great deal of knowledge about learning that schools, workplaces, families, and academic researchers should draw from good computer and video games, such as a 'whole set of fundamentally sound learning principles' that can be used in many other domains, for instance when it comes to teaching science in schools. [70]

Storytelling

Another application of multimodality is digital film-making, sometimes referred to as 'digital storytelling'. A digital story is defined as a short film that incorporates digital images, video, and audio in order to create a personally meaningful narrative. Through this practice, people act as film-makers, using multimodal forms of representation to design, create, and share their life stories or learning stories with a specific audience, commonly through online platforms. Digital storytelling, as a digital literacy practice, is commonly used in educational settings. It is also used in the media mainstream, considering the increasing number of projects that motivate members of the online community to create and share their digital stories. [71]

Multimodal methods in social science research

Multimodality is also a growing methodology in the social sciences. Beyond the area of multimodal anthropology, there is growing interest in multimodality as a methodology in sociology and management.

For example, management researchers have highlighted the "material and visual turn" in organization research. [72] Going beyond the multimodal character of ethnographic research, [73] this growing area of research moves past textual data as a single mode, for example using visual communication modes to understand issues such as the legitimacy of new ventures. [74] Multimodality might involve spatial, aural, visual, sensual, and other data, perhaps with multiple modes embedded in a material object. [75]

Multimodality can be used particularly for meaning construction; in institutional theory, for example, multimodal compositions can enhance the perceived validity of particular narratives. [76] Multimodal methods may also be used to deinstitutionalize unsustainable parts of an institution in order to sustain the institution itself. [77] Beyond institutional theory, we may find "multimodal historical cues" embedded in particular historical practices, highlighting the way organizations may use particular relationships to the past, [78] and multimodal discourses that allow organizations to claim legitimate yet distinctive identities, at least with visual and verbal discourses. [79] Sometimes work done under the banner of multimodality extends into experimental research, such as findings that investors' judgments can be strongly influenced by visual information even though those individuals are relatively unaware of how much visual factors shape their decisions. [80] This suggests that more research is needed on the power of memes and disinformation in visual modes driving social movements on social media.

One interesting point in this growing research area is that some researchers take the stance that multimodal research is not just about going beyond a focus on text as data; they argue that to be truly multimodal, research must engage more than one modality, that is, "with several modes of communication (e.g. visual and verbal, or visual and material)". [81] This is a further development from researchers who align themselves with the multimodal label but then focus on a single modality, such as images, which nonetheless shows interest in modalities beyond textual data. Another interesting point for future research can be seen in contrasts, for example between multimodal and specifically "cross-modal" patterns. [82]

See also

Related Research Articles

Visual rhetoric

Visual rhetoric is the art of effective communication through visual elements such as images, typography, and texts. Visual rhetoric encompasses the skill of visual literacy and the ability to analyze images for their form and meaning. Drawing on techniques from semiotics and rhetorical analysis, visual rhetoric expands on visual literacy as it examines the structure of an image with the focus on its persuasive effects on an audience.

Visual communication

Visual communication is the use of visual elements to convey ideas and information, including signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, and electronic resources. Visual communication is considered distinct from verbal or written languages because of its more abstract structure: the interpretation of signs varies with the viewer's field of experience. The interpretation of imagery is often compared to the set alphabets and words used in oral or written languages. Another point of difference found by scholars is that, although written or verbal languages are taught, sight does not have to be learned, and sighted people may therefore lack awareness of visual communication and its influence in their everyday lives. Many of the visual elements listed above are forms of visual communication that humans have used since prehistoric times. Within modern culture, visual elements include objects, models, graphs, diagrams, maps, and photographs. Beyond these types of elements, there are seven components of visual communication: color, shape, tone, texture, figure-ground, balance, and hierarchy.

Computers and writing is a sub-field of college English studies concerned with how computers and digital technologies affect literacy and the writing process. The range of inquiry in this field is broad, including discussions of ethics when using computers in writing programs, how discourse can be produced through technologies, software development, and computer-aided literacy instruction. Some topics include hypertext theory, visual rhetoric, multimedia authoring, distance learning, digital rhetoric, usability studies, the patterns of online communities, how various media change reading and writing practices, textual conventions, and genres. Other topics examine social or critical issues in computer technology and literacy, such as the "digital divide", equitable access to computer-writing resources, and critical technological literacies.

Visual literacy

Visual literacy is the ability to interpret, negotiate, and make meaning from information presented in the form of an image, extending the meaning of literacy, which commonly signifies interpretation of a written or printed text. Visual literacy is based on the idea that pictures can be "read" and that meaning can be discovered through a process of reading.

Transmediation is the process of translating a work into a different medium. The definition of what constitutes transmediation would depend on how medium is defined or interpreted. In Understanding media, Marshall McLuhan offered a quite broad definition of a medium as "an extension of ourselves":

"In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message. This is merely to say that the personal and social consequences of any medium — that is, of any extension of ourselves — result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology."

Composition (language)

The term composition, as it refers to writing, can describe authors' decisions about, processes for designing, and sometimes the final product of, a composed linguistic work. In its original use, it tended to describe practices concerning the development of oratorical performances, and eventually essays, narratives, or genres of imaginative literature, but since the mid-20th-century emergence of the field of composition studies, its use has broadened to apply to any composed work: print or digital, alphanumeric or multimodal. As such, the composition of linguistic works goes beyond the exclusivity of written and oral documents to visual and digital arenas.

A reading path is a term used by Gunther Kress in Literacy in the New Media Age (2003). According to Kress, a professor of English Education at the University of London, a reading path is the way that the text, or text plus other features, can determine or order the way that we read it. In a linear, written text, the reader makes sense of the text according to the arrangement of the words, both grammatically and syntactically. In such a reading path, there is a sequential time to the text. In contrast, with non-linear text, such as the text found when reading a computer screen, where text is often combined with visual elements, the reading path is non-linear and non-sequential. Kress suggests that reading paths that contain visual images are more open to interpretation and the reader's construction of meaning. This is part of the "semiotic work" that we do as a reader.

Digital rhetoric

Digital rhetoric can be generally defined as communication that exists in the digital sphere. As such, digital rhetoric can be expressed in many different forms, including text, images, videos, and software. Due to the increasingly mediated nature of our contemporary society, there are no longer clear distinctions between digital and non-digital environments. This has expanded the scope of digital rhetoric to account for the increased fluidity with which humans interact with technology.

Social semiotics is a branch of the field of semiotics which investigates human signifying practices in specific social and cultural circumstances, and which tries to explain meaning-making as a social practice. Semiotics, as originally defined by Ferdinand de Saussure, is "the science of the life of signs in society". Social semiotics expands on Saussure's founding insights by exploring the implications of the fact that the "codes" of language and communication are formed by social processes. The crucial implication here is that meanings and semiotic systems are shaped by relations of power, and that as power shifts in society, our languages and other systems of socially accepted meanings can and do change.

What is commonly called new media theory, or the media-centered theory of composition, stems from the rise of computers as word-processing tools. Media theorists now also examine the rhetorical strengths and weaknesses of different media, and the implications these have for literacy, author, and reader.

Gunther Kress

Gunther Rolf Kress MBE was a linguist and semiotician. He is considered one of the leading theorists in critical discourse analysis, social semiotics and multimodality, particularly in relation to their educational implications. Kress has been described as "one of the leading academics of the early 21st century".

Multiliteracy is an approach to literacy theory and pedagogy coined in the mid-1990s by the New London Group. The approach is characterized by two key aspects of literacy: linguistic diversity, and multimodal forms of linguistic expression and representation. It was coined in response to two major changes in the globalized environment. One was the growing linguistic and cultural diversity resulting from increased transnational migration. The second was the proliferation of new mediums of communication due to advances in communication technologies, e.g., the internet, multimedia, and digital media. As a scholarly approach, multiliteracy focuses on the new "literacy" developing in response to changes in the way people communicate globally, driven by technological shifts and the interplay between different cultures and languages.

<span class="mw-page-title-main">James Paul Gee</span> American linguist

James Gee is a retired American researcher who has worked in psycholinguistics, discourse analysis, sociolinguistics, bilingual education, and literacy. Gee most recently held the position of Mary Lou Fulton Presidential Professor of Literacy Studies at Arizona State University, where he was originally appointed in the Mary Lou Fulton Institute and Graduate School of Education. Gee has previously been a faculty affiliate of the Games, Learning, and Society group at the University of Wisconsin–Madison and is a member of the National Academy of Education.

<span class="mw-page-title-main">Visual rhetoric and composition</span>

The study and practice of visual rhetoric took a more prominent role in the field of composition studies towards the end of the twentieth century and onward. Proponents of its inclusion in composition typically point to the increasingly visual nature of society, and the increasing presence of visual texts. Literacy, they argue, can no longer be limited only to written text and must also include an understanding of the visual.

<span class="mw-page-title-main">Digital studio</span>

A digital studio provides both a technology-equipped space and technological/rhetorical support to students working individually or in groups on a variety of digital projects, such as designing a website, developing an electronic portfolio for a class, creating a blog, making edits, selecting images for a visual essay, or writing a script for a podcast.

The kineikonic mode is a term for the moving image as a multimodal form. It indicates an approach to the analysis of film, video, television and any instance of moving image media that examines how systems of signification such as image, speech, dramatic action, music and other communicative processes work together to create meaning within the spatial and temporal frames of filming and editing.

<span class="mw-page-title-main">Andrew Burn (professor)</span> English professor and media theorist

Andrew Burn is an English professor and media theorist. He is best known for his work in the fields of media arts education, multimodality and play, and for the development of the theory of the Kineikonic Mode. He is Emeritus professor of Media at the UCL Institute of Education.

<span class="mw-page-title-main">Digital media in education</span> Overview of ICT in education

Digital media in education is measured by a person's ability to access, analyze, evaluate, and produce media content and communication in a variety of forms. It may involve incorporating multiple digital software tools, devices, and platforms into learning. The use of digital media in education is growing rapidly, competing with books as the leading form of communication and gradually challenging traditional forms of education that have long been dominant. With the introduction of virtual education, there has been a growing need to incorporate new digital platforms into online classrooms.

<span class="mw-page-title-main">Literacy in the New Media Age</span> 2003 book by Gunther Kress

Written in 2003 and published by the Taylor & Francis Group, Gunther Kress's book Literacy in the New Media Age explores how the introduction of modern technology has changed the way individuals interact with their culture through written and oral communication. Expanding upon the idea of the evolution of media and writing in a digital medium, Kress examines the impacts of media communications on societies and cultures, and vice versa.

<span class="mw-page-title-main">Multimodal pedagogy</span> Teaching approach with different modes

Multimodal pedagogy is an approach to the teaching of writing that implements different modes of communication. Multimodality refers to the use of visual, aural, linguistic, spatial, and gestural modes in differing pieces of media, each necessary to properly convey the information it presents.

References

  1. "What is Multimodal? | University of Illinois Springfield". www.uis.edu. Retrieved 2023-02-28.
  2. Lutkewitte, Claire (2013). Multimodal Composition: A Critical Sourcebook. Boston: Bedford/St. Martin's. ISBN 978-1457615498.
  3. Murray, Joddy (2013). Lutkewitte, Claire (ed.). "Composing Multimodality". Multimodal Composition: A Critical Sourcebook. Boston: Bedford/St. Martin's.
  4. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. ISBN 978-0415320603.
  5. "Understanding and Teaching Writing: Guiding Principles". NCTE. 14 November 2018. Retrieved 2020-02-16.
  6. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. p. 79. ISBN 978-0415320603.
  7. Kress, Gunther; van Leeuwen, Theo (1996). Reading Images: The Grammar of Visual Design. London: Routledge. p. 35. ISBN 978-0415105996.
  8. Kress, Gunther (2010). Multimodality: A Social Semiotic Approach to Contemporary Communication. New York: Routledge. p. 79. ISBN 978-0415320603.
  9. van Leeuwen, Theo (1999). Speech, Music, Sound. London: Palgrave Macmillan.
  10. Bateman, John; Schmidt, Karl-Heinrich (2011). Multimodal Film Analysis: How Films Mean. London: Routledge.
  11. Burn, Andrew; Parker, David (2003). Analysing Media Texts. London: Continuum.
  12. Wysocki, Anne Frances (2002). Teaching Writing with Computers: An Introduction (3rd ed.). Boston: Houghton-Mifflin. pp. 182–201. ISBN 9780618115266.
  13. Welch, Kathleen E. (1999). Electric Rhetoric: Classical Rhetoric, Oralism, and a New Literacy. Cambridge, MA: MIT Press. ISBN 978-0262232029.
  14. Bateman, John A. (2008). Multimodality and Genre: A Foundation for the Systematic Analysis of Multimodal Documents. New York: Palgrave Macmillan. ISBN   978-0230302341.
  15. Williamson, Richard (1971). "The Case for Filmmaking as English Composition". College Composition and Communication. 22 (2): 131–136. doi:10.2307/356828. JSTOR   356828.
  16. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present". p. 45.
  17. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present". p. 31.
  18. Palmeri, Jason (2007). "Multimodality and Composition Studies, 1960–Present". p. 90.
  19. Berlin, James A. (1982). "Contemporary Composition: The Major Pedagogical Theories". College English. 44 (8): 765–777. doi:10.2307/377329. JSTOR   377329.
  20. Harris, Joseph (1997). A Teaching Subject: Composition Since 1966. Upper Saddle River, NJ: Prentice Hall. ISBN 978-0135158005.
  21. Flower, Linda; John R. Hayes (1984). "Images, Plans, and Prose: The Representation of Meaning in Writing". Written Communication. 1 (1): 120–160. doi:10.1177/0741088384001001006. S2CID   145300268.
  22. Bezemer, Jeff; Gunther Kress (April 2008). "Writing in Multimodal Texts: A Social Semiotic Account of Designs for Learning". Written Communication. 25 (2): 166–195. doi:10.1177/0741088307313177. S2CID 143272176.
  23. Fahnestock, Jeanne; Marie Secor (October 1988). "The Stases in Scientific and Literary Argument". Written Communication. 5 (4): 427–443. doi:10.1177/0741088388005004002. S2CID   144604666.
  24. Miller, Carolyn R.; Dawn Shepherd (2004). "Blogging as Social Action: A Genre Analysis of the Weblog". In Laura J. Gurak; Smiljana Antonijevic; Laurie Johnston; Clancy Ratliff; Jessica Reyman (eds.). Into the Blogosphere: Rhetoric, Community, and Culture of Weblogs.
  25. Ridolfo, Jim; Danielle Nicole DeVoss (15 January 2009). "Composing for Recomposition: Rhetorical Velocity and Delivery". Kairos 13.2. Retrieved 25 April 2013.
  26. Wysocki, Anne Frances (2002). Teaching Writing with Computers: An Introduction (3rd ed.). Boston: Houghton-Mifflin. ISBN 9780618115266.
  27. Miller, Suzanne M. (2013-06-19). Miller, Suzanne M; McVee, Mary B (eds.). Multimodal Composing in Classrooms. doi:10.4324/9780203804032. ISBN   9780203804032.
  28. April, Kurt (2012-06-25). Performance Through Learning. doi:10.4324/9780080479927. ISBN   9780080479927.
  29. Selfe, Richard J.; Selfe, Cynthia L. (2008-04-23). ""Convince me!" Valuing Multimodal Literacies and Composing Public Service Announcements". Theory into Practice. 47 (2): 83–92. doi:10.1080/00405840801992223. ISSN   0040-5841. S2CID   145743847.
  30. Jenkins, Henry (2012-05-24). How Content Gains Meaning and Value in a Networked Society. Institute of International and European Affairs.
  31. Kress, Gunther (2003-09-02). Literacy in the New Media Age. doi:10.4324/9780203299234. ISBN 9780203299234.
  32. Bao, Xiaoli (2017-08-29). "Application of Multimodality to Teaching Reading". English Language and Literature Studies. 7 (3): 78. doi: 10.5539/ells.v7n3p78 . ISSN   1925-4776.
  33. 1 2 "Guiding Principles | Writing and Communication Program". wcprogram.lmc.gatech.edu. Retrieved 2019-04-15.
  34. 1 2 3 Vaish, Viniti; Towndrow, Phillip A. (2010-12-31), "12. Multimodal Literacy in Language Classrooms", Sociolinguistics and Language Education, Multilingual Matters, pp. 317–346, doi:10.21832/9781847692849-014, ISBN   9781847692849
  35. 1 2 3 4 Lombardi, Dawn (2018-01-19), "Braving Multimodality in the College Composition Classroom", Designing and Implementing Multimodal Curricula and Programs, Routledge, pp. 15–34, doi:10.4324/9781315159508-2, ISBN   9781315159508
  36. 1 2 3 4 5 George, Diana (2002). "From Analysis to Design: Visual Communication in the Teaching of Writing". College Composition and Communication. 54 (1): 11–39. doi:10.2307/1512100. ISSN   0010-096X. JSTOR   1512100.
  37. Morell, Teresa (2018). "Multimodal competence and effective interactive lecturing". System. 77: 70–79. doi:10.1016/j.system.2017.12.006. ISSN   0346-251X. S2CID   67154163.
  38. Shipka, Jody (2013), "Including, but Not Limited to, the Digital", Multimodal Literacies and Emerging Genres, University of Pittsburgh Press, pp. 73–89, doi:10.2307/j.ctt6wrbkn.7, ISBN   9780822978046
  39. Grifoni, P.; D'Ulizia, A.; Ferri, F. (2021). "When Language Evolution Meets Multimodality: Current Status and Challenges Toward Multimodal Computational Models". IEEE Access.
  40. "What is multimodality?". 2012-02-16.
  41. Fleitz, Elizabeth J. (2009). The Multimodal Kitchen: Cookbooks as Women's Rhetorical Practice. Bowling Green State University. ISBN   9781109173444.
  42. Gannett, Cinthia (1992). Gender and the Journal: Diaries and Academic Discourse. Albany: State University of New York Press. ISBN   978-0791406847.
  43. Pauwels, Luc (2008). "A private visual practice going public? Social functions and sociological research opportunities of Web-based family photography". Visual Studies. 23 (1): 38–48. doi:10.1080/14725860801908528. S2CID 144533933.
  44. Mussetta, Mariana; Siragusa, Cristina; Vottero, Beatriz (2020). Escrituras en artes: registros y reflexividades. Villa María, Argentina: Universidad Nacional de Villa María. pp. 37–63. ISBN   978-987-4993-38-0.
  45. Adó, Máximo Daniel Lamela; Mussetta, Mariana (2020-12-14). "Apropiación transgresiva y multimodalidad en la investigación académica: propuestas de escrilectura". Revista Teias (in Spanish). 21 (63): 265–281. doi:10.12957/teias.2020.53737. hdl: 10183/217951 . ISSN   1982-0305. S2CID   230561559.
  46. Mussetta, Mariana; Lamela Adó, Máximo Daniel; Peixoto, Bruna. "La escritura académica fuera de sí: la multimodalidad como potencia expansiva". Revista Educação e Cultura Contemporânea. 18: 382–400.
  47. Mussetta, Mariana (2014-11-08). "Semiotic Resources in The Curious Incident of the Dog in the Night-Time: The Narrative Power of the Visual in Multimodal Fiction". Matlit Revista do Programa de Doutoramento em Materialidades da Literatura. 2 (1): 99–117. doi: 10.14195/2182-8830_2-1_5 . ISSN   2182-8830.
  48. Mussetta, Mariana (2017-12-01). "Cuando la novela se ve como otro género: El álbum de recortes como género estructurante en The Scrapbook of Frankie Pratt". Revista de Culturas y Literaturas Comparadas (in Spanish). 7. ISSN   2591-3883.
  49. Mussetta, Mariana (2017-12-29). "Important artifacts de Leanne Shapton". Revista de Literaturas Modernas. 47 (2). ISSN   0556-6134.
  50. Mussetta, Mariana (2017). "Materialidad y multimodalidad en nuevas formas de ficción novelesca contemporánea Introducción y glosario". Revista Luthor. 31: 16–27.
  51. Mussetta, Mariana (2020). "EN BUSCA DE LO REAL Y LO AUTÉNTICO: EXPERIMENTACIÓN GRÁFICA EN NUEVAS NARRATIVAS DEL SIGLO XXI". Hyperborea. 3: 53–70.
  52. Vala Afshar (2015-01-28). "The Multimodal CIO for the Digital Business Era". HuffPost .
  53. Oana Culachea; Daniel Rareș Obadă (2014). "Multimodality as a Premise for Inducing Online Flow on a Brand Website: a Social Semiotic Approach". Procedia - Social and Behavioral Sciences. 149: 261–268. doi: 10.1016/j.sbspro.2014.08.227 .
  54. Tom Huston. "CXplained: What's a Multimodal Customer Experience?".
  55. Lanham, Richard (1995). The Electronic Word: Democracy, Technology, and the Arts. Chicago: University Of Chicago Press. ISBN   978-0226468853.
  56. Kress, Gunther (2003). Literacy in the New Media Age. London: Routledge. ISBN   978-0415253567.
  57. Shen, Rico (2008-08-13), 2008 Digital Cities Convention Taoyuan: M-Application Display, retrieved 2023-02-27.
  58. Kyle-Beth Hilfer (2013-04-10). "How the FTC Wants Advertisers to Talk to Consumers on Social Media". Windmillnetworking.com. Retrieved 2013-05-14.
  59. "Multimedia Advertising". Dynamic Digital Advertising.
  60. http://theinspirationroom.com/daily/print/2009/1/coca_cola_fizzz.jpg
  61. "Coca Cola's New Open Happiness Ad (HQ Verson)". YouTube. 2009-04-06. Retrieved 2013-05-14.
  62. Shepherd, Clive. "Social Networking is Fast Becoming Ubiquitous". Onlighnment. Retrieved 18 April 2013.
  63. O'Reilly, Tim (October 2005). "Web 2.0: Compact Definition?". O'Reilly Radar.
  64. Curtis, Anthony. "The Brief History of Social Media". University of North Carolina. Archived from the original on 16 March 2012. Retrieved 22 April 2013.
  65. Messina, Chris; et al. "Hashtags". Twitter Fan Wiki.
  66. "Origin of the @reply – Digging through twitter's history". Anarchogeek. Archived from the original on 2012-07-14.
  67. Cooper, Steve. "5 Reasons Businesses Should Care About Hashtags". Forbes.
  68. Thurlow, Crispin (2011). Digital Discourse: Language in the New Media. New York: Oxford University Press. ISBN   9780199795437.
  69. Chapman, Cameron (2011-03-14). "A Brief History of Blogging". Webdesigner Depot.
  70. Gee, James P. (2003). "What Video Games Have to Teach Us about Learning and Literacy". New Learning: Transformational Designs for Pedagogy and Assessment.
  71. Jones, Rodney H.; Hafner, Christoph A. (2012). Understanding Digital Literacies . London & New York: Routledge. pp.  58. ISBN   978-0-415-67315-0.
  72. Boxenbaum, Eva; Jones, Candace; Meyer, Renate E.; Svejenova, Silviya (June 2018). "Towards an Articulation of the Material and Visual Turn in Organization Studies". Organization Studies. 39 (5–6): 597–616. doi: 10.1177/0170840618772611 . hdl: 20.500.11820/6f6086ff-b759-411a-84fe-2296626d3de1 . ISSN   0170-8406. S2CID   54685426.
  73. Hammersley, Martyn; Paul Atkinson (2019). Ethnography: Principles in Practice (Fourth ed.). Abingdon, Oxon. ISBN 978-1-138-50445-5. OCLC 1084629397.
  74. Santos, Fernando Pinto (January 2023). "Showing Legitimacy: The Strategic Employment of Visuals in the Legitimation of New Organizations". Journal of Management Inquiry. 32 (1): 50–75. doi: 10.1177/10564926211050785 . ISSN   1056-4926. S2CID   244423296.
  75. Giovannoni, Elena; Napier, Christopher J. (March 2023). "Multimodality and the Messy Object: Exploring how rhetoric and materiality engage". Organization Studies. 44 (3): 401–425. doi:10.1177/01708406221089598. hdl: 11365/1194705 . ISSN   0170-8406. S2CID   247536895.
  76. Höllerer, Markus A.; Jancsary, Dennis; Grafström, Maria (June 2018). "'A Picture is Worth a Thousand Words': Multimodal Sensemaking of the Global Financial Crisis". Organization Studies. 39 (5–6): 617–644. doi:10.1177/0170840618765019. ISSN   0170-8406. S2CID   149720066.
  77. Crawford, B.; Toubiana, M.; Coslor, E. "From Catch-and-Harvest to Catch-and-Release: Trout Unlimited and Repair-Focused Deinstitutionalization". https://www.researchgate.net/profile/Brett-Crawford-2/publication/367298278_From_Catch-and-Harvest_to_Catch-and-Release_Trout_Unlimited_and_Repair-Focused_Deinstitutionalization/links/63cadb1c6fe15d6a57343e68/From-Catch-and-Harvest-to-Catch-and-Release-Trout-Unlimited-and-Repair-Focused-Deinstitutionalization.pdf
  78. Sadeghi, Yasaman; Islam, Gazi (2021-09-03). "Modes of exhibition: Uses of the past in Tehran art galleries". Organization. 30 (5): 911–941. doi:10.1177/13505084211041713. ISSN   1350-5084. S2CID   239696760.
  79. Zamparini, Alessandra; Lurati, Francesco (February 2017). "Being different and being the same: Multimodal image projection strategies for a legitimate distinctive identity". Strategic Organization. 15 (1): 6–39. doi:10.1177/1476127016638811. ISSN   1476-1270. S2CID   146934283.
  80. Tsay, Chia-Jung (September 2021). "Visuals Dominate Investor Decisions about Entrepreneurial Pitches". Academy of Management Discoveries. 7 (3): 343–366. doi:10.5465/amd.2019.0234. ISSN   2168-1007. S2CID   225577499.
  81. Boxenbaum, Eva; Jones, Candace; Meyer, Renate E.; Svejenova, Silviya (June 2018). "Towards an Articulation of the Material and Visual Turn in Organization Studies". Organization Studies. 39 (5–6): 597–616. doi: 10.1177/0170840618772611 . hdl: 20.500.11820/6f6086ff-b759-411a-84fe-2296626d3de1 . ISSN   0170-8406. S2CID   54685426.
  82. Stigliani, Ileana; Ravasi, Davide (June 2018). "The Shaping of Form: Exploring Designers' Use of Aesthetic Knowledge". Organization Studies. 39 (5–6): 747–784. doi:10.1177/0170840618759813. hdl: 10044/1/58694 . ISSN   0170-8406. S2CID   56271322.