Rote learning is a memorization technique based on repetition. The method rests on the premise that the recall of repeated material becomes faster the more one repeats it. Some of the alternatives to rote learning include meaningful learning, associative learning, spaced repetition and active learning.
Rote learning is widely used in the mastery of foundational knowledge. Examples of school topics where rote learning is frequently used include phonics in reading, the periodic table in chemistry, multiplication tables in mathematics, anatomy in medicine, cases or statutes in law, and basic formulae in any science. By definition, rote learning eschews comprehension, so by itself it is an ineffective tool for mastering any complex subject at an advanced level. [1] A familiar illustration of rote learning is preparing quickly for exams, a technique colloquially referred to as "cramming". [2]
Rote learning is sometimes disparaged with the derogatory terms parrot fashion, regurgitation, cramming, or mugging, because one who engages in rote learning may give the false impression of having understood what they have written or said. It is strongly discouraged by many new curriculum standards. For example, science and mathematics standards in the United States specifically emphasize deep understanding over the mere recall of facts. The National Council of Teachers of Mathematics stated:
More than ever, mathematics must include the mastery of concepts instead of mere memorization and the following of procedures. More than ever, school mathematics must include an understanding of how to use technology to arrive meaningfully at solutions to problems instead of endless attention to increasingly outdated computational tedium. [3]
However, advocates of traditional education have criticized the new American standards as slighting the learning of basic facts and elementary arithmetic, and replacing content with process-based skills. In math and science, rote methods are often used, for example, to memorize formulas. There is greater understanding if students commit a formula to memory through exercises that use it rather than through rote repetition. Newer standards often recommend that students derive formulas themselves to achieve the best understanding. [4] Nothing is faster than rote learning if a formula must be learned quickly for an imminent test, and rote methods can be helpful for committing an understood fact to memory. However, students who learn with understanding are able to transfer their knowledge to tasks requiring problem-solving with greater success than those who learn only by rote. [5]
On the other side, those who disagree with the inquiry-based philosophy maintain that students must first develop computational skills before they can understand concepts of mathematics. In this view, time is better spent practicing skills than on investigations, on inventing alternative methods, or on justifying more than one correct answer or method; estimating answers is insufficient and, in fact, depends on strong foundational skills. Learning the abstract concepts of mathematics is perceived to depend on a solid base of knowledge of the tools of the subject. Proponents of this position therefore regard rote learning as an important part of the learning process. [6]
Rote learning is also used to describe a simple learning pattern used in machine learning, although it does not involve repetition, unlike the usual meaning of rote learning. The machine is programmed to keep a history of calculations and compare new input against its history of inputs and outputs, retrieving the stored output if present. This pattern requires that the machine can be modeled as a pure function (always producing the same output for the same input) and can be formally described as follows: f(x) → (x, f(x)).
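This pattern amounts to what programmers call memoization. The following is a minimal Python sketch of the idea described above; the names rote_learn and learned are illustrative, not a standard API.

```python
# A minimal sketch of rote learning in machine learning: keep a history of
# computed input-output pairs and retrieve the stored output when an input
# has been seen before. Assumes f is a pure function with hashable inputs.
def rote_learn(f):
    history = {}  # memorized (input -> output) pairs

    def learned(x):
        if x in history:       # input seen before: retrieve, do not recompute
            return history[x]
        result = f(x)          # new input: compute...
        history[x] = result    # ...and store the pair (x, f(x))
        return result

    return learned

# Usage: the second call with the same argument is a lookup, not a computation.
square = rote_learn(lambda x: x * x)
print(square(12))  # computed and stored
print(square(12))  # retrieved from history
```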
Rote learning was used by Arthur Samuel's checkers-playing program, run on an IBM 701, a milestone in the use of artificial intelligence. [8]
The flashcard, outline, and mnemonic device are traditional tools for memorizing course material and are examples of rote learning. [9] [10] [11] [12]
Supervised learning (SL) is a paradigm in machine learning in which a model is trained on input objects paired with desired output values. The training data is processed to build a function that maps new inputs to expected output values. In an optimal scenario, the algorithm correctly determines the output values for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way. This statistical quality of an algorithm is measured through the so-called generalization error.
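A minimal sketch of the paradigm in NumPy, fitting a line to labeled data and estimating the generalization error on held-out instances; the synthetic data and the 80/20 split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: input objects X with desired output values y (here y = 3x + noise).
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

# Hold out unseen instances to estimate the generalization error.
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

# Build a function mapping inputs to expected outputs (least-squares fit).
A = np.c_[X_train, np.ones(len(X_train))]        # append an intercept column
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Mean squared error on the unseen test instances approximates the
# generalization error of the fitted function.
pred = np.c_[X_test, np.ones(len(X_test))] @ w
print("test MSE:", np.mean((pred - y_test) ** 2))
```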
In mathematics, a multiplication table is a mathematical table used to define a multiplication operation for an algebraic system.
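For the familiar algebraic system of the integers, such a table is easy to generate; the 10 by 10 size below is an arbitrary illustrative choice.

```python
# Print the 1-10 multiplication table for ordinary integer multiplication,
# one concrete algebraic system.
n = 10
print("    " + "".join(f"{j:4d}" for j in range(1, n + 1)))
for i in range(1, n + 1):
    print(f"{i:4d}" + "".join(f"{i * j:4d}" for j in range(1, n + 1)))
```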
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
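A minimal sketch of the perceptron learning rule on a linearly separable toy dataset; the learning rate eta, the epoch count, and the data are illustrative assumptions.

```python
import numpy as np

def perceptron_train(X, y, eta=1.0, epochs=20):
    """X: feature vectors, y: labels in {-1, +1}. Returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Linear predictor: sign(w . x + b)
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified example
                w += eta * yi * xi              # move the boundary toward xi
                b += eta * yi
    return w, b

# Usage on a trivially separable dataset (class = sign of first coordinate).
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
print(np.sign(X @ w + b))  # should reproduce y
```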
Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, non-human animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event, but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be "lost" from that which cannot be retrieved.
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of artificial neural networks. The artificial neuron is a function that receives one or more inputs, applies weights to these inputs, and sums them to produce an output.
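A minimal sketch of a single artificial neuron; the bias term and the logistic sigmoid activation are common additions shown here as illustrative choices, since the summary above mentions only the weighted sum.

```python
import math

# One artificial neuron: weighted sum of inputs plus a bias, passed through
# an activation function (here the logistic sigmoid, an illustrative choice).
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1))
```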
Memorization is the process of committing something to memory. It is a mental process undertaken in order to store visual, auditory, or tactile information in memory for later recall.
Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis. Statistical learning theory deals with the statistical inference problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
In machine learning, backpropagation is a gradient estimation method used to train neural network models. The gradient estimate is used by the optimization algorithm to compute the network parameter updates.
A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow. Modern feedforward networks are trained using the backpropagation method and are colloquially referred to as the "vanilla" neural networks.
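A minimal sketch combining the two ideas above: a one-hidden-layer feedforward network trained by backpropagation with plain gradient descent. The XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: a task a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: information flows one way, input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # The optimizer (here plain gradient descent) applies the updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```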
Elementary mathematics, also known as primary or secondary school mathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, including number sense, algebra, geometry, measurement, and data analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and in everyday life, making elementary mathematics a crucial part of a student's education.
Predictive learning is a machine learning technique where an artificial intelligence model is fed new data to develop an understanding of its environment, capabilities, and limitations. Neuroscience, business, robotics, computer vision, and other fields employ this technique extensively. The concept was developed and expanded by French computer scientist Yann LeCun starting in 1988 during his career at Bell Labs, where he trained models to detect handwriting so that financial companies could automate check processing.
Study skills or study strategies are approaches applied to learning. Study skills are an array of skills which tackle the process of organizing and taking in new information, retaining information, or dealing with assessments. They are discrete techniques that can be learned, usually in a short time, and applied to all or most fields of study. More broadly, any skill which boosts a person's ability to study, retain, and recall information, and which assists in passing exams, can be termed a study skill; this could include time management and motivational techniques.
Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
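A minimal sketch: the finite program below contains no explicit loop, yet it defines a computation for every non-negative integer.

```python
# Recursion: the solution to factorial(n) depends on the solution to the
# smaller instance factorial(n - 1), with a base case to stop the self-calls.
def factorial(n):
    if n == 0:
        return 1                     # base case
    return n * factorial(n - 1)      # smaller instance of the same problem

print(factorial(5))  # 120
```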
Traditional education, also known as back-to-basics, conventional education or customary education, refers to long-established customs that society has traditionally used in schools. Some forms of education reform promote the adoption of progressive education practices and a more holistic approach which focuses on individual students' needs: academics, mental health, and social-emotional learning. In the eyes of reformers, traditional teacher-centered methods focused on rote learning and memorization must be abandoned in favor of student-centered and task-based approaches to learning.
The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at all other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors.
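In symbols, one standard form of the expansion (notation varies by author) writes the output as a sum of convolution-like integrals, so that the kernels h_n weight the input at times other than t:

```latex
y(t) = h_0 + \sum_{n=1}^{N} \int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty}
       h_n(\tau_1, \ldots, \tau_n) \prod_{j=1}^{n} x(t - \tau_j) \, d\tau_j
```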
ADALINE is an early single-layer artificial neural network and the name of the physical device that implemented this network. The network uses memistors. It was developed by professor Bernard Widrow and his doctoral student Ted Hoff at Stanford University in 1960. It is based on the perceptron. It consists of a weight, a bias and a summation function.
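A minimal sketch of ADALINE's least-mean-squares (delta) rule; unlike the perceptron, the update uses the raw summation output rather than the thresholded label. The learning rate, epoch count, and data are illustrative assumptions.

```python
import numpy as np

def adaline_train(X, y, eta=0.01, epochs=50):
    """X: feature vectors, y: targets in {-1.0, +1.0}. Returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = yi - (np.dot(w, xi) + b)  # error of the summation output
            w += eta * error * xi             # least-mean-squares update
            b += eta * error
    return w, b

X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = adaline_train(X, y)
print(np.sign(X @ w + b))  # thresholded predictions
```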
"Teaching to the test" is a colloquial term for any method of education whose curriculum is heavily focused on preparing students for a standardized test.
Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center.
Meaningful learning refers to the act of higher order thinking and development through intellectual engagement that uses pattern recognition and concept association. It can include—but is not limited to—critical and creative thinking, inquiry, problem solving, critical discourse, and metacognitive skills. The theory of meaningful learning holds that learned information is completely understood and can be used to make connections with previously known knowledge, aiding further understanding. Since information is stored in a network of connections, it can be accessed from multiple starting points depending on the context of recall. Meaningful learning is often contrasted with rote learning, a method in which information is memorized sometimes without elements of understanding or relation to other objects or situations. Recognizing a real-world example of a concept one has learned is an instance of meaningful learning. Meaningful learning may trigger further learning, as relating a concept to a real-world situation may encourage the learner to understand the information presented, and it supports active learning techniques that aid understanding. Although meaningful learning takes longer than rote memorization, the information is typically retained for a longer period of time.