Visual learning

Visual learning is one of the learning styles in Neil Fleming's VARK model, in which information is presented to a learner in a visual format. Visual learners can use graphs, charts, maps, diagrams, and other forms of visual stimulation to interpret information effectively. The VARK model also includes kinesthetic, auditory, and reading/writing learning styles. [1] There is no evidence that providing visual materials to students identified as having a visual style improves learning.

Techniques

A review study concluded that using graphic organizers improves student performance in the following areas: [2]

Retention
Students retain information better and recall it more easily when it is presented and learned both visually and verbally. [2]
Reading comprehension
The use of graphic organizers improves students' reading comprehension. [2]
Student achievement
Students with and without learning disabilities improve performance across content areas and grade levels. [2]
Thinking and learning skills; critical thinking
When students develop and use a graphic organizer, their higher-order thinking and critical-thinking skills are enhanced. [2]
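
As an illustration of what a graphic organizer is, the sketch below generates a simple concept map as Graphviz DOT text. This is a minimal, hypothetical example: the topic, node names, and the to_dot helper are invented for illustration and are not drawn from the cited review.

```python
# A minimal, hypothetical sketch of one kind of graphic organizer
# (a concept map), emitted as Graphviz DOT text. The topic and node
# names are invented examples, not taken from the cited review.

concept_map = {
    "Water cycle": ["Evaporation", "Condensation", "Precipitation"],
    "Evaporation": ["Sun heats surface water"],
    "Condensation": ["Clouds form"],
    "Precipitation": ["Rain", "Snow"],
}

def to_dot(edges):
    """Render a concept -> sub-concept mapping as Graphviz DOT text."""
    lines = ["digraph ConceptMap {", "  rankdir=LR;"]
    for parent, children in edges.items():
        for child in children:
            lines.append(f'  "{parent}" -> "{child}";')
    lines.append("}")
    return "\n".join(lines)

# Paste the printed output into any Graphviz viewer to see the map.
print(to_dot(concept_map))
```

Rendering the printed DOT with any Graphviz tool yields a left-to-right map of the concept hierarchy, the kind of visual-plus-verbal representation the review associates with better retention.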

Areas of the brain affected

Various areas of the brain work together in a multitude of ways to produce the images that we see with our eyes and that are encoded by our brains. The basis of this work takes place in the visual cortex, which is located in the occipital lobe and contains many structures that aid in visual recognition, categorization, and learning. One of the first things the brain must do when acquiring new visual information is to recognize the incoming material. Brain areas involved in recognition are the inferior temporal cortex, the superior parietal cortex, and the cerebellum. During recognition tasks, there is increased activation in the left inferior temporal cortex and decreased activation in the right superior parietal cortex. Recognition is aided by neural plasticity, the brain's ability to reshape itself based on new information. [3] Next, the brain must categorize the material. Three main areas are used when categorizing new visual information: the orbitofrontal cortex and two dorsolateral prefrontal regions, which begin sorting new information into groups and assimilating it into knowledge already held. [4]

After new material in the visual field has been recognized and categorized, the brain is ready to begin the encoding process, the process which leads to learning. Multiple brain areas are involved in this process, such as the frontal lobe, the right extrastriate cortex, the neocortex, and the neostriatum. One area in particular, the limbic-diencephalic region, is essential for transforming perceptions into memories. [5] As recognition, categorization, and learning come together, schemas make it much easier to encode new information and relate it to what is already known. Visual images are remembered much better when they can be tied to an already-known schema; schemas enhance visual memory and learning. [6]

Infancy

Where it starts

Between the fetal stage and 18 months, a baby experiences rapid growth of a substance called gray matter. Gray matter is the darker tissue of the brain and spinal cord, consisting mainly of nerve cell bodies and branching dendrites.[citation needed] It is responsible for processing sensory information in areas such as the primary visual cortex. The primary visual cortex is located within the occipital lobe at the back of the infant's brain and is responsible for processing visual information such as static or moving objects and for pattern recognition.

The four pathways

Within the primary visual cortex, there are four pathways: the superior colliculus pathway (SC pathway), the middle temporal area pathway (MT pathway), the frontal eye fields pathway (FEF pathway), and the inhibitory pathway. Each pathway is crucial to the development of visual attention in the first few months of life.

The SC pathway is responsible for the generation of eye movements toward simple stimuli. It receives information from the retina and the visual cortex and can direct behavior toward an object. The MT pathway is involved in the smooth tracking of objects and travels between the SC pathway and the primary visual cortex. In conjunction with the SC pathway and the MT pathway, the FEF pathway allows the infant to control eye movements as well as visual attention. It also plays a part in sensory processing in the infant.

Lastly, the inhibitory pathway regulates the activity in the superior colliculus and is later responsible for obligatory attention in the infant. The maturation and functionality of these pathways depends on how well the infant can make distinctions as well as focus on stimuli.

Supporting studies

A study by Haith, Hazan, and Goodman in 1988 showed that babies as young as 3.5 months are able to create short-term expectations of situations they confront. Expectations in this study refer to the cognitive and perceptual means by which an infant can forecast a future event. This was tested by showing the infant either a predictable pattern of slides or an irregular pattern of slides and tracking the infant's eye movements. [7]
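
As a toy illustration of how anticipatory looking can be quantified (a simplified sketch, not the study's actual procedure, stimuli, or data), the simulation below counts a gaze shift as anticipatory when a simulated observer's guess about the next picture's location matches where it actually appears. An observer who assumes a simple alternating pattern anticipates nearly every event in a predictable sequence but only about half in an irregular one.

```python
# Toy simulation: anticipatory looks are guesses that land on the next
# picture's location before it appears. All numbers are illustrative,
# not data from the 1988 study.
import random

random.seed(0)  # reproducible toy numbers

def anticipation_rate(predictable, trials=100):
    """Fraction of events the simulated observer 'looks at' in advance."""
    sides = ["left", "right"]
    hits = 0
    last = "left"
    for _ in range(trials):
        # Next picture location: alternating if predictable, random otherwise.
        if predictable:
            nxt = "right" if last == "left" else "left"
        else:
            nxt = random.choice(sides)
        # The observer always guesses the alternating pattern; a guess that
        # matches the upcoming side counts as an anticipatory look.
        guess = "right" if last == "left" else "left"
        if guess == nxt:
            hits += 1
        last = nxt
    return hits / trials

print(f"predictable sequence: {anticipation_rate(True):.2f}")   # ~1.00
print(f"irregular sequence:   {anticipation_rate(False):.2f}")  # ~0.50
```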

A later study by Johnson, Posner, and Rothbart in 1991 showed that by 4 months, infants can develop expectations. This was tested through anticipatory looks and disengagement from stimuli. For example, anticipatory looks show that an infant can predict the next part of a pattern, which can be applied to the real-world scenario of breast-feeding: infants are able to predict a mother's movements and anticipate feeding, latching onto the nipple in readiness. Expectations, anticipatory looks, and disengagement all show that infants can learn visually, even if only in the short term. [8]

David Roberts (2016) tested propositions of multimedia learning theory and found that certain images displace pedagogically harmful excesses of text, reducing cognitive overload and exploiting under-used visual processing capacity. [9]

In early childhood

From ages 3–8, visual learning improves and begins to take many different forms. At the toddler ages of 3–5, children's bodily actions structure the visual learning environment. At this age, toddlers frequently use their newly developed sensory-motor skills, fusing them with their improved vision to understand the world around them. This can be seen when toddlers use their arms to bring objects of interest close to their sensory organs, such as their eyes, to explore the objects further. Bringing an object close to the face shapes their immediate view, placing their mental and visual attention on that object while blocking the view of other objects around them.

Emphasis is placed on objects directly in front of them, and thus proximal vision is the primary perspective of visual learning. This differs from how adults utilize visual learning. The difference between toddler vision and adult vision is attributable to body size and body movement: a toddler's visual experiences are created by its own movements. An adult's view is broader, owing to larger body size and the greater distance between the adult and most objects in view. Adults tend to scan a room and see everything, rather than focusing on a single object. [10]

The way a child integrates visual learning with motor experiences enhances their perceptual and cognitive development. [11] For elementary school children aged 4–11, intellect is positively related to their level of auditory-visual integrative proficiency. The most significant period for the development of auditory-visual integration occurs between ages 5–7. During this time, the child has mastered visual-kinesthetic integration, and the child's visual learning can be applied to formal learning focused on books and reading rather than physical objects, thus influencing their intellect. As reading scores increase, children are able to learn more; their visual learning has developed not only to focus on physical objects in close proximity but also to interpret words, and thus to acquire knowledge by reading. [12]

In middle childhood

Middle childhood here refers to ages 9 to 14. By this stage in typical development, vision is sharp and learning processes are well underway. Most studies of visual learning have found that visual learning styles, as opposed to traditional learning styles, greatly improve the totality of a student's learning experience. First, visual learning engages students, and student engagement is one of the most important factors motivating students to learn. Visuals increase student interest through the use of graphics, animation, and video. Consequently, students have been found to pay greater attention to lecture material when visuals are used. With increased attention to lesson material, many positive outcomes have been observed when visual tactics are used in classrooms of children in this age group.

Students organize and process information more thoroughly when they learn visually, which helps them understand the information better, and they are more likely to remember information that is learned with a visual aid. [13] Research shows that when teachers used visual tactics to teach students in middle childhood, the students had more positive attitudes about the material they were learning. [14] Students also showed higher test performance, higher standardized achievement scores, more higher-order thinking, and greater engagement. One study also found that learning about emotional events, such as the Holocaust, with visual aids increases the empathy of children in this age group. [14]

In adolescence

Brain maturation into young adulthood

Gray matter is responsible for generating the nerve impulses that process information in the brain, while white matter transmits that information between lobes and down through the spinal cord. Myelin, a fatty material that forms a sheath around nerve fibers, speeds the transmission of nerve impulses; white matter is wrapped in a myelin sheath while gray matter is not, which allows neural impulses to move swiftly along white-matter fibers. The myelin sheath is not fully formed until around ages 24–26. [15] This means that adolescents and young adults typically learn differently, and subsequently often utilize visual aids in order to help them better comprehend difficult subjects.[citation needed]

Learning preferences can vary across a wide spectrum. Within the realm of visual learning specifically, they can vary between people who prefer instruction delivered as text and those who prefer graphics. College students were tested on general factors such as learning preference and spatial ability (the capacity to create, hold, and manipulate spatial representations). [16] The study determined that college-age individuals can report their own learning styles and learning preferences. These personal assessments proved accurate, meaning that self-ratings of factors such as spatial ability and learning preference can be effective measures of how well one learns visually.[citation needed]

Gender differences

Studies have indicated that adolescents learn best through ten styles: reading, manipulative activity, teacher explanation, auditory stimulation, visual demonstration, visual stimulation (electronic), visual stimulation (pictures only), games, social interaction, and personal experience. [17] According to the study, young adult males demonstrate a preference for learning through activities they can manipulate, while young adult females show a greater preference for learning visually, through teacher notes or graphs, and through reading. This suggests that the young women studied were more visually stimulated and more oriented toward written and graphical information, while the young men learned best through direct physical manipulation and auditory explanation.

Lack of evidence

Although learning styles have "enormous popularity", and both children and adults express personal preferences, there is no evidence that identifying a student's learning style produces better outcomes, and there is significant evidence that the widely touted "meshing hypothesis" (that a student will learn best if taught in a method deemed appropriate for that student's learning style) is invalid. [18] Well-designed studies "flatly contradict the popular meshing hypothesis". [18] Rather than targeting instruction to the "right" learning style, students appear to benefit most from mixed modality presentations, for instance using both auditory and visual techniques for all students. [19]

See also

Illusion
Lip reading
Learning styles
Kinesthetic learning
Visual memory
Subvocalization
Auditory imagery
Language processing in the brain
Critical period
Sensory processing
Two-streams hypothesis
Pattern recognition (psychology)
Dyslexia
Educational neuroscience
Methods used to study memory
Cross modal plasticity
Troland Research Awards
Melodic learning
Form perception
Multisensory learning

References

  1. Leite, Walter L.; Svinicki, Marilla; and Shi, Yuying: Attempted Validation of the Scores of the VARK: Learning Styles Inventory With Multitrait–Multimethod Confirmatory Factor Analysis Models, p. 2. Sage Publications, 2009.
  2. "Graphic Organizers: A Review of Scientifically Based Research" (PDF). The Institute for the Advancement of Research in Education at AEL.
  3. Poldrack, R., Desmond, J., Glover, G., & Gabrieli, J. "The Neural Basis of Visual Skill Learning: An fMRI Study of Mirror Reading". Cerebral Cortex. Jan/Feb 1998.
  4. Vogel, R., Sary, G., Dupont, P., Orban, G. Human Brain Regions Involved in Visual Categorization. Elsevier Science (US) 2002.
  5. Squire, L. "Declarative and Nondeclarative Memory: Multiple Brain Systems Supporting Learning and Memory". 1992 Massachusetts Institute of Technology. Journal of Cognitive Neuroscience 4.3.
  6. Lord, C. "Schemas and Images as Memory Aids: Two Modes of Processing Social Information". Stanford University. 1980. American Psychological Association.
  7. Haith, M. M., Hazan, C., & Goodman, G. S. (1988). "Expectation and Anticipation of Dynamic Visual Events by 3.5 Month Old Babies". Child Development, 59, 467–479.
  8. Johnson, M. H., Posner, M. I., & Rothbart, M. K. (1991). "Components of Visual Orienting in Early Infancy: Contingency Learning, Anticipatory Looking, and Disengaging". Journal of Cognitive Neuroscience, 335–344.
  9. "David Roberts Academic Consulting". vl.catalystitsolutions.co.uk. Retrieved 2017-01-04.
  10. Smith, L.B., Yu, C., & Pereira, A. F. (2011). "Not your mother's view: The dynamics of toddler visual experience". Developmental science, 14(1), 9–17.
  11. Bertenthal, B. I., Campos, J. J., & Kermoian, R. (1994). "An epigenetic perspective on the development of self-produced locomotion and its consequences". Current Directions in Psychological Science, 3(5), 140–145.
  12. Birch, H. G., & Belmont, L. (1965). "Auditory-visual integration, intelligence and reading ability in school children". Perceptual and Motor Skills, 20(1), 295–305.
  13. Beeland, W. "Student Engagement, Visual Learning, and Technology: Can Interactive Whiteboards Help?" (2001). Theses and Dissertations from Valdosta State University Graduate School.
  14. Farkas, R. "Effects of Traditional Versus Learning-Styles Instructional Methods on Middle School Students". The Journal of Educational Research. Vol. 97, No. 1 (Sep.–Oct. 2003), pp. 42–51.
  15. Wolfe, Pat. (2001). "Brain Matters: Translating the Research to Classroom Practice". ASCD: 1–207
  16. Mayer, R. E., & Massa, L. J. (2003). "Three Facets of Visual and Verbal Learners: Cognitive Ability, Cognitive Style, and Learning Preference". Journal of Educational Psychology, 95(4), 833.
  17. Eiszler, C. F. (1982). "Perceptual Preferences as an Aspect of Adolescent Learning Styles".
  18. Pashler, Harold; McDaniel, Mark; Rohrer, Doug; Bjork, Robert (2009). "Learning Styles: Concepts and Evidence". Psychological Science in the Public Interest. 9 (3): 105–119. doi:10.1111/j.1539-6053.2009.01038.x. ISSN 1539-6053. PMID 26162104.
  19. Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Archived 2008-12-05 at the Wayback Machine. London: Learning and Skills Research Centre.