Artificial grammar learning

Artificial grammar learning (AGL) is a paradigm of study within cognitive psychology and linguistics. Its goal is to investigate the processes that underlie human language learning by testing subjects' ability to learn a made-up grammar in a laboratory setting. Although developed to evaluate the processes of human language learning, the paradigm has also been used to study implicit learning more generally. The area of interest is typically the subjects' ability to detect patterns and statistical regularities during a training phase and then to use their new knowledge of those patterns in a testing phase. The testing phase can either use the symbols or sounds of the training phase or transfer those patterns to a different surface structure, that is, another set of symbols or sounds.

Many researchers propose that the rules of the artificial grammar are learned on an implicit level, since the rules of the grammar are never explicitly presented to the participants. The paradigm has also recently been applied to other areas of research, such as language learning aptitude and structural priming, [1] and to investigating which brain structures are involved in syntax acquisition and implicit learning.

Apart from humans, the paradigm has also been used to investigate pattern learning in other species, e.g. cotton-top tamarins and starlings.

History

More than half a century ago, George A. Miller [2] established the paradigm of artificial grammar learning in order to investigate the influence of explicit grammar structures on human learning. He designed a grammar model of letters with different sequences. His research demonstrated that it was easier to remember a structured grammar sequence than a random sequence of letters. His explanation was that learners could identify the common characteristics between learned sequences and encode them into memory sets accordingly. He predicted that subjects could identify which letters were most likely to appear together repeatedly as a sequence and which were not, and that subjects would use this information to form memory sets. Those memory sets later served participants as a strategy during their memory tests.

Reber [3] doubted Miller's explanation. He claimed that if participants could encode the grammar rules as productive memory sets, then they should be able to verbalize their strategy in detail. He conducted research that led to the development of the modern AGL paradigm. This research used a synthetic grammar learning model to test implicit learning, and AGL became the most used and tested model in the field. As in the original paradigm developed by Miller, participants were asked to memorize a list of letter strings created from an artificial grammar rule model. Only during the test phase were participants told that there was a set of rules behind the letter sequences they had memorized. They were then instructed to categorize new letter strings, to which they had not previously been exposed, on the basis of that same set of rules, classifying them as "grammatical" (constructed according to the grammar) or "randomly constructed". If subjects sorted the new strings correctly above chance level, it could be inferred that they had acquired the grammatical rule structure without any explicit instruction in the rules. Reber [3] found that participants did sort new strings above chance level. While they reported using strategies during the sorting task, they could not actually verbalize those strategies: subjects could identify which strings were grammatically correct but could not identify the rules by which grammatical strings were composed.

This research was replicated and expanded upon by many others. [4] [5] [6] [7] The conclusions of most of these studies were congruent with Reber's hypothesis: implicit learning took place without any intentional learning strategies. These studies also identified common characteristics of implicitly acquired knowledge:

  1. An abstract representation of the set of rules.
  2. Unconscious strategies that are revealed through performance.

The modern paradigm

The modern AGL paradigm can be used to investigate explicit and implicit learning, although it is most often used to test implicit learning. In a typical AGL experiment, participants are required to memorize strings of letters generated by a specific grammar. The length of the strings usually ranges from 2 to 9 letters per string. An example of such a grammar is shown in Figure 1.

[Figure 1: Example of an artificial grammar rule]

In order to compose a grammatically "ruleful" string of letters according to the predetermined grammar, a subject must follow the rules for the pairing of letters as represented in the model (Figure 1). A string that violates the grammatical rule system is considered an "unruleful", or randomly constructed, string.

In the case of a standard AGL implicit learning task, [3] subjects are not told that the strings are based on a specific grammar. Instead, they are simply given the task of memorizing the letter strings for a later memory test. After the learning phase, subjects are told that the letter strings presented during the learning phase were based on specific rules, but are not explicitly told what the rules are. During a test phase, the subjects are instructed to categorize new letter strings as "ruleful" or "unruleful". The dependent variable usually measured is the percentage of correctly categorized strings. Implicit learning is considered to be successful when the percentage of correctly sorted strings is significantly higher than chance level. If this significant difference is found, it indicates the existence of a learning process that goes beyond memorizing the presented letter strings. [8]
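As an illustration of how such stimuli can be constructed and scored, the sketch below generates "ruleful" strings by walking a small finite-state grammar and then tests classification accuracy against chance. The transition table is invented for illustration and is not the grammar of Figure 1; the SciPy binomial test at the end is one standard way to check for above-chance sorting.

```python
import random

# Illustrative finite-state grammar (invented; NOT the grammar in Figure 1).
# Each state maps to (letter, next_state) transitions; None marks a legal exit.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def generate_string(min_len=2, max_len=9):
    """Emit a 'ruleful' string by randomly walking the grammar."""
    while True:
        state, letters = 0, []
        while state is not None:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if min_len <= len(letters) <= max_len:
            return "".join(letters)

training_strings = [generate_string() for _ in range(20)]
print(training_strings)

# Above-chance classification: e.g., 32 of 50 test strings sorted
# correctly against the 50% chance level (requires SciPy).
from scipy.stats import binomtest
print(binomtest(32, n=50, p=0.5).pvalue)
```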

Bayesian learning

The mechanism behind the implicit learning hypothesized to occur during artificial grammar learning is statistical learning or, more specifically, Bayesian learning. Bayesian learning takes into account the biases or "prior probability distributions" individuals bring with them, which contribute to the outcome of implicit learning tasks. These biases can be thought of as a probability distribution specifying how likely each possible hypothesis is to be correct. Because of the structure of the Bayesian model, the inferences output by the model take the form of a probability distribution rather than a single most probable event. This output distribution is a "posterior probability distribution": the posterior probability of each hypothesis is proportional to the prior probability of that hypothesis multiplied by the probability of the data given that hypothesis. [9] This Bayesian model of learning is fundamental for understanding the pattern detection process involved in implicit learning and, therefore, the mechanisms that underlie the acquisition of artificial grammar rules. It is hypothesized that the implicit learning of grammar involves predicting co-occurrences of certain words in a certain order. For example, "the dog chased the ball" is a sentence that can be learned as grammatically correct on an implicit level because "chased" frequently co-occurs as one of the words that follow "dog". A sentence like "the dog cat the ball" is implicitly recognized as grammatically incorrect because of the lack of utterances in which those words are paired in that specific order. This process is important for teasing apart thematic roles and parts of speech in grammatical processing (see grammar): while the labeling of thematic roles and parts of speech is explicit, the identification of words and parts of speech is implicit.
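As a toy numerical illustration of this updating scheme (the priors and likelihoods below are invented, not values from any actual study), the following sketch computes a posterior distribution over three hypothetical grammar rules after a single observation:

```python
# Invented prior beliefs about three candidate grammar rules.
priors = {"rule_A": 0.5, "rule_B": 0.3, "rule_C": 0.2}

# Invented likelihoods: P(observed utterance | rule) under each hypothesis.
likelihoods = {"rule_A": 0.10, "rule_B": 0.02, "rule_C": 0.01}

# Bayes' rule: posterior is proportional to likelihood * prior,
# normalized so the distribution sums to 1.
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posterior = {h: likelihoods[h] * priors[h] / evidence for h in priors}

print(posterior)  # a full distribution, not a single "best" hypothesis
```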

Explanatory models

Traditional approaches to AGL claim that the stored knowledge obtained during the learning phase is abstract. [3] Other approaches [5] [10] argue that this stored knowledge is concrete and consists of exemplars of strings encountered during the learning phase or "chunks" of these exemplars. [6] [11] In any case, it is assumed that the information stored in memory is retrieved in the test phase and is used to aid decisions about letter strings. [12] [13] Three main approaches attempt to explain the AGL phenomena:

  1. Abstract Approach: According to this traditional approach, participants acquire an abstract representation of the artificial grammar rule in the learning stage. That abstract structure helps them to decide if the new string presented during the test phase is grammatical or randomly constructed. [14]
  2. Concrete knowledge approach: This approach proposes that during the learning stage participants learn specific examples of strings and store them in memory. During the testing stage, participants do not sort the new strings according to an abstract rule; instead, they sort them according to their similarity to the examples stored in memory from the learning stage. Opinions differ concerning how concrete the learned knowledge really is. Brooks & Vokey [5] [10] argue that all of the knowledge stored in memory is represented as whole exemplars of the strings studied during the learning stage; test strings are then sorted according to their similarity to full representations of those exemplars. Perruchet & Pacteau, [6] on the other hand, claimed that knowledge of the strings from the learning stage is stored as "memory chunks", in which sequences of 2-3 letters are learned together with knowledge of their permitted locations in the full string [6] [11] (a sketch of such chunk-based sorting appears after this list).
  3. Dual factor approach: This dual-process learning model combines the two approaches described above. It proposes that a person relies on concrete knowledge whenever possible; when concrete knowledge is unavailable (for example, in a transfer-of-learning task), the person falls back on abstract knowledge of the rules. [4] [15] [16] [17]
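The following is a minimal sketch of the chunk-based sorting referenced in approach 2: a test string is scored by the proportion of its two-letter chunks that appeared anywhere in the training strings. The training strings and the use of bigrams are assumptions made for illustration.

```python
def chunks(s, n=2):
    """All overlapping n-letter fragments ("chunks") of a string."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def chunk_familiarity(test_string, training_strings, n=2):
    """Proportion of the test string's chunks seen during training."""
    known = set().union(*(chunks(t, n) for t in training_strings))
    test_chunks = chunks(test_string, n)
    return len(test_chunks & known) / len(test_chunks)

training = ["TSXS", "PTVV", "TXXVV"]          # invented study strings
print(chunk_familiarity("TSXXVV", training))  # high overlap -> "ruleful"
print(chunk_familiarity("QQQQQQ", training))  # no overlap   -> "unruleful"
```

This sketch ignores the positional information that Perruchet & Pacteau also attribute to chunks; a fuller model would additionally track where in a string each chunk may legally occur.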

Research with amnesic patients suggests that the dual factor approach may be the most accurate model. [18] A series of experiments with amnesic patients supports the idea that AGL involves both abstract concepts and concrete exemplars. Amnesics were able to classify stimuli as "grammatical" vs. "randomly constructed" just as well as participants in the control group. However, while they could complete the task successfully, amnesics were not able to explicitly recall grammatical "chunks" of the letter sequences, whereas the control group could. When performing the task with the same grammar rules but a different set of letters from those used in training, both amnesics and the control group were able to complete the task (although performance was better when the task used the same letters as training). The results support the dual factor approach to artificial grammar learning: people use abstract information to learn rules for grammars and use concrete, exemplar-specific memory for chunks. Since the amnesics were unable to store specific "chunks" in memory, they completed the task using an abstract set of rules. The control group was able to store these specific chunks in memory and, as evidenced by recall, did store these examples for later reference.

Automaticity debate

AGL research has been criticized over the question of automaticity: is AGL an automatic process? During encoding (see encoding (memory)), performance can be automatic in the sense of occurring without conscious monitoring (without conscious guidance by the performer's intentions). In the case of AGL, it was claimed that implicit learning is an automatic process because it occurs with no intention of learning a specific grammar rule. [3] This complies with the classic definition of an "automatic process" as a fast, unconscious, effortless process that may start unintentionally and, once triggered, continues until completion, without the ability to stop it or to ignore its consequences. [19] [20] [21] This definition has been challenged many times, and alternative definitions of automatic processes have been proposed. [22] [23] [24] Reber's presumption that AGL is automatic is problematic insofar as it implies that an unintentional process is, in its essence, an automatic process. When focusing on AGL tests, a few issues need to be addressed. The process is complex and contains both encoding and recall or retrieval. Both encoding and retrieval could be interpreted as automatic, since what was encoded during the learning stage is not necessary for the task intentionally performed during the test stage. [25] Researchers need to differentiate between implicitness as it refers to the learning process, or knowledge encoding, and as it refers to performance during the test phase, or knowledge retrieval. Knowledge encoded during training may include many aspects of the presented stimuli (whole strings, relations among elements, etc.). The contribution of the various components to performance depends on both the specific instructions in the acquisition phase and the requirements of the retrieval task. [13] Therefore, the instructions in each phase are important for determining whether that stage requires automatic processing, and each phase should be evaluated for automaticity separately.

One hypothesis that contradicts the automaticity of AGL concerns the "mere exposure effect", an increase in affect towards a stimulus that results from nonreinforced, repeated exposure to it. [26] Results from over 200 experiments on this effect indicate a positive relationship between mean "goodness" rating and frequency of stimulus exposure. The stimuli in these experiments included line drawings, polygons, and nonsense words (the kinds of stimuli used in AGL research), and participants were exposed to each stimulus up to 25 times. Following each exposure, participants rated the degree to which each stimulus suggested "good" vs. "bad" affect on a 7-point scale. In addition to the main pattern of results, several experiments found that participants rated higher positive affect for previously exposed items than for novel items. Since implicit cognition should not reference previous study episodes, the effects on affect ratings should not have been observed if the processing of these stimuli were truly implicit. The results of these experiments suggest that different categorization of the strings may occur because of differences in affect associated with the strings, not because of implicitly learned grammar rules.

Artificial intelligence

Since the advent of computers and artificial intelligence, computer programs have been developed that attempt to simulate the implicit learning process observed in the AGL paradigm. The first AI programs adapted to simulate both natural and artificial grammar learning used the following basic structure:

Given
A set of grammatical sentences from some language.
Find
A procedure for recognizing and/or generating all grammatical sentences in that language.

An early AI model of grammar learning is Wolff's SNPR system. [27] [28] The program acquires a series of letters with no pauses or punctuation between words and sentences. The program then examines the string in subsets, looks for common sequences of symbols, and defines "chunks" in terms of these sequences (these chunks are akin to the exemplar-specific chunks described for AGL). As the model acquires these chunks through exposure, the chunks begin to replace the sequences of unbroken letters. When a chunk precedes or follows a common chunk, the model determines disjunctive classes in terms of the first set. [28] For example, when the model encounters "the-dog-chased" and "the-cat-chased", it classifies "dog" and "cat" as members of the same class, since both precede "chased". While the model sorts chunks into classes, it does not explicitly label these groups (e.g., noun, verb). Early AI models of grammar learning such as these ignored the importance of negative instances for grammar acquisition and lacked the ability to connect grammatical rules to pragmatics and semantics.

Newer models have attempted to factor in these details. The Unified Model [29] attempts to take both of these factors into account. The model breaks grammar down according to "cues". Languages mark case roles using five possible cue types: word order, case marking, agreement, intonation, and verb-based expectation (see grammar). The influence that each cue has on a language's grammar is determined by its "cue strength" and "cue validity". Both of these values are determined using the same formula, except that cue strength is defined through experimental results and cue validity is defined through corpus counts from language databases. The formula for cue strength/validity is as follows:

Cue strength/cue validity = cue availability × cue reliability

Cue availability is the proportion of times that the cue is available out of the times that it is needed. Cue reliability is the proportion of times that the cue is correct out of the total occurrences of the cue. By incorporating cue reliability along with cue availability, the Unified Model is able to account for the effects of negative instances of grammar, since it takes accuracy, not just frequency, into account. As a result, it also accounts for semantic and pragmatic information, since cues that do not produce grammatical utterances in the appropriate context will have low cue strength and cue validity. While MacWhinney's model [29] also simulates natural grammar learning, it attempts to model the implicit learning processes observed in the AGL paradigm.
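As a worked example of the formula, the sketch below computes cue validity from invented corpus-style counts:

```python
def cue_validity(times_available, times_needed, times_correct):
    """Cue validity (or strength) = availability * reliability."""
    availability = times_available / times_needed   # cue present when needed
    reliability = times_correct / times_available   # cue correct when present
    return availability * reliability

# Invented counts for, say, a word-order cue marking the agent role:
print(cue_validity(times_available=900, times_needed=1000, times_correct=810))
# 0.9 * 0.9 = 0.81
```

The chunk-formation step attributed to Wolff's SNPR earlier in this section can be sketched in the same spirit. The greedy pair-merging below is a simplification (closer to byte-pair encoding than to Wolff's actual algorithm) but captures how frequent adjacent sequences come to replace runs of unbroken letters:

```python
from collections import Counter

def discover_chunks(symbols, passes=10, min_count=2):
    """Repeatedly merge the most frequent adjacent pair into one chunk."""
    for _ in range(passes):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                merged.append(a + b)   # the new chunk replaces the pair
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols

# Unsegmented letter stream, as SNPR receives it (invented example):
print(discover_chunks(list("thedogchasedthecatthedogchased")))
```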

Cognitive neuroscience and the AGL paradigm

Contemporary studies with AGL have attempted to identify which brain structures are involved in the acquisition of grammar and implicit learning. Agrammatic aphasic patients (see agrammatism) have been tested with the AGL paradigm. The results show that the breakdown of language in agrammatic aphasia is associated with an impairment in artificial grammar learning, indicating damage to domain-general neural mechanisms subserving both language and sequential learning. [30] De Vries, Barth, Maiworm, Knecht, Zwitserlood & Flöel [31] found that electrical stimulation of Broca's area enhances implicit learning of an artificial grammar; such direct current stimulation may facilitate the acquisition of grammatical knowledge, a finding of potential interest for the rehabilitation of aphasia. Petersson, Folia & Hagoort [32] examined the neurobiological correlates of syntax, the processing of structured sequences, by comparing fMRI results on artificial and natural language syntax. Based on AGL testing, they argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.

See also

  Atkinson–Shiffrin memory model
  Chunking (psychology)
  CLARION (cognitive architecture)
  Declarative learning
  Dual process theory
  Grammaticality
  Implicit cognition
  Implicit learning
  Implicit memory
  Indirect memory tests
  Levels of Processing model
  Memory
  Memory storage
  Metamemory
  Procedural memory
  Retrieval-induced forgetting
  Serial reaction time
  Statistical learning
  Unconscious cognition
  Arthur S. Reber

References

  1. Peter, M.; Chang, F.; Pine, J.M.; Blything, R.; Rowland, C.F. (2015). "When and how do children develop knowledge of verb argument structure? Evidence from verb bias effects in a structural priming task". Journal of Memory and Language. 81: 1–15. doi:10.1016/j.jml.2014.12.002.
  2. Miller, G.A. (1958). "Free recall of redundant strings of letters". Journal of Experimental Psychology. 56 (6): 485–491. doi:10.1037/h0044933. PMID 13611173.
  3. Reber, A.S. (1967). "Implicit learning of artificial grammars". Journal of Verbal Learning and Verbal Behavior. 5 (6): 855–863. doi:10.1016/s0022-5371(67)80149-x.
  4. Mathews, R.C.; Buss, R.R.; Stanley, W.B.; Blanchard-Fields, F.; Cho, J.R.; Druhan, B. (1989). "Role of implicit and explicit processes in learning from examples: A synergistic effect". Journal of Experimental Psychology: Learning, Memory, and Cognition. 15 (6): 1083–1100. doi:10.1037/0278-7393.15.6.1083.
  5. Brooks, L.R.; Vokey, J.R. (1991). "Abstract analogies and abstracted grammars: Comments on Reber (1989) and Mathews et al. (1989)". Journal of Experimental Psychology: General. 120 (3): 316–323. doi:10.1037/0096-3445.120.3.316.
  6. Perruchet, P.; Pacteau, C. (1990). "Synthetic grammar learning: Implicit rule abstraction or explicit fragmentary knowledge?". Journal of Experimental Psychology: General. 119 (3): 264–275. doi:10.1037/0096-3445.119.3.264.
  7. Altmann, G.T.M.; Dienes, Z.; Goode, A. (1995). "Modality independence of implicitly learned grammatical knowledge". Journal of Experimental Psychology: Learning, Memory, and Cognition. 21 (4): 899–912. doi:10.1037/0278-7393.21.4.899.
  8. Seger, C.A. (1994). "Implicit learning". Psychological Bulletin. 115 (2): 163–196. doi:10.1037/0033-2909.115.2.163. PMID 8165269.
  9. Kapatsinski, V. (2009). "The architecture of grammar in artificial grammar learning: Formal biases in the acquisition of morphophonology and the nature of the learning task" (doctoral dissertation). Indiana University: 1–260.
  10. Vokey, J.R.; Brooks, L.R. (1992). "Salience of item knowledge in learning artificial grammars". Journal of Experimental Psychology: Learning, Memory, and Cognition. 18 (2): 328–344. doi:10.1037/0278-7393.18.2.328.
  11. Servan-Schreiber, E.; Anderson, J.R. (1990). "Chunking as a mechanism of implicit learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 16 (4): 592–608. doi:10.1037/0278-7393.16.4.592.
  12. Pothos, E.M. (2007). "Theories of artificial grammar learning". Psychological Bulletin. 133 (2): 227–244. doi:10.1037/0033-2909.133.2.227. PMID 17338598.
  13. Poznanski, Y.; Tzelgov, J. (2010). "What is implicit in implicit artificial grammar learning?". Quarterly Journal of Experimental Psychology. 63 (8): 1495–1515. doi:10.1080/17470210903398121. PMID 20063258.
  14. Reber, A.S. (1969). "Transfer of syntactic structure in synthetic languages". Journal of Experimental Psychology. 81: 115–119. doi:10.1037/h0027454.
  15. McAndrews, M.P.; Moscovitch, M. (1985). "Rule-based and exemplar-based classification in artificial grammar learning". Memory & Cognition. 13 (5): 469–475. doi:10.3758/bf03198460. PMID 4088057.
  16. Reber, A.S. (1989). "Implicit learning and tacit knowledge". Journal of Experimental Psychology: General. 118 (3): 219–235. doi:10.1037/0096-3445.118.3.219.
  17. Reber, A.S.; Allen, R. (1978). "Analogic and abstraction strategies in synthetic grammar learning: A functionalist interpretation". Cognition. 6 (3): 189–221. doi:10.1016/0010-0277(78)90013-6.
  18. Knowlton, B.J.; Squire, L.R. (1996). "Artificial grammar learning depends on implicit acquisition of both abstract and exemplar-specific information". Journal of Experimental Psychology: Learning, Memory, and Cognition. 22 (1): 169–181. doi:10.1037/0278-7393.22.1.169. PMID 8648284.
  19. Hasher, L.; Zacks, R. (1979). "Automatic and effortful processes in memory". Journal of Experimental Psychology: General. 108 (3): 356–388. doi:10.1037/0096-3445.108.3.356.
  20. Schneider, W.; Dumais, S.T.; Shiffrin, R.M. (1984). "Automatic and controlled processing and attention". In R. Parasuraman & D. Davies (Eds.), Varieties of Attention (pp. 1–17). New York: Academic Press.
  21. Logan, G.D. (1988). "Automaticity, resources and memory: Theoretical controversies and practical implications". Human Factors. 30 (5): 583–598. doi:10.1177/001872088803000504. PMID 3065212.
  22. Tzelgov, J. (1999). "Automaticity and processing without awareness". Psyche. 5.
  23. Logan, G.D. (1980). "Attention and automaticity in Stroop and priming tasks: Theory and data". Cognitive Psychology. 12 (4): 523–553. doi:10.1016/0010-0285(80)90019-5. PMID 7418368.
  24. Logan, G.D. (1985). "Executive control of thought and action". Acta Psychologica. 60 (2–3): 193–210. doi:10.1016/0001-6918(85)90055-1.
  25. Perlman, A.; Tzelgov, J. (2006). "Interaction between encoding and retrieval in the domain of sequence learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 32 (1): 118–130. doi:10.1037/0278-7393.32.1.118. PMID 16478345.
  26. Manza, L.; Zizak, D.; Reber, A.S. (1998). "Artificial grammar learning and the mere exposure effect: Emotional preference tasks and the implicit learning process". In M.A. Stadler & P.A. Frensch (Eds.), Handbook of Implicit Learning (pp. 201–222). Thousand Oaks, CA: Sage Publications.
  27. Wolff, J.G. (1982). "Language acquisition, data compression and generalization". Language & Communication. 2: 57–89. doi:10.1016/0271-5309(82)90035-0.
  28. MacWhinney, B. (1987). Mechanisms of Language Acquisition. Hillsdale, NJ: Lawrence Erlbaum Associates. ISBN 9781317757405.
  29. MacWhinney, B. (2008). "A Unified Model". In P. Robinson & N. Ellis (Eds.), Handbook of Cognitive Linguistics and Second Language Acquisition. Mahwah, NJ: Lawrence Erlbaum Associates.
  30. Christiansen, M.H.; Kelly, M.L.; Shillcock, R.C.; Greenfield, K. (2010). "Impaired artificial grammar learning in agrammatism". Cognition. 116 (3): 383–393. doi:10.1016/j.cognition.2010.05.015. PMID 20605017.
  31. De Vries, M.H.; Barth, A.C.R.; Maiworm, S.; Knecht, S.; Zwitserlood, P.; Flöel, A. (2010). "Electrical stimulation of Broca's area enhances implicit learning of an artificial grammar". Journal of Cognitive Neuroscience. 22 (11): 2427–2436. doi:10.1162/jocn.2009.21385. PMID 19925194.
  32. Petersson, K.M.; Folia, V.; Hagoort, P. (2010). "What artificial grammar learning reveals about the neurobiology of syntax". Brain & Language: 340–353.