Negative evidence in language acquisition

In language acquisition, negative evidence is information concerning what is not possible in a language. Importantly, negative evidence does not show what is grammatical; that is positive evidence. In theory, negative evidence would help eliminate ungrammatical constructions by revealing what is not grammatical.

Direct negative evidence refers to comments made by an adult language-user in response to a learner's ungrammatical utterance. Indirect negative evidence refers to the absence of ungrammatical sentences in the language that the child is exposed to.

There is debate among linguists and psychologists about whether negative evidence can help children determine the grammar of their language. Negative evidence, if it is used, could help children rule out ungrammatical constructions in their language.

Direct negative evidence

Direct negative evidence in language acquisition consists of utterances that indicate that a construction in a language is ungrammatical. [1] Direct negative evidence differs from indirect negative evidence because it is explicitly presented to a language learner (e.g. a child might be corrected by a parent). Direct negative evidence can be further divided into explicit and implicit forms.

On the other hand, indirect negative evidence is used to determine ungrammatical constructions in a language by noticing the absence of such constructions. [1]

Explicit direct negative evidence

A corpus study found that explicit negative evidence was "very rare", and concluded that because parents do not reliably correct their children's grammatical errors, explicit negative evidence does not facilitate language learning. [2] Psychologist David McNeill argues that when parents correct children explicitly, the correction is unlikely to be helpful in learning grammar because it is a single correction that will most likely not be repeated, and therefore a child might not remember or even notice the correction. [3] This is demonstrated in the following exchange between a parent and child, which McNeill recorded:

Child: Nobody don't like me.
Mother: No, say, "nobody likes me."
Child: Nobody don't like me.
[This exchange is repeated several times]
Mother: No, now listen carefully. Say, "Nobody likes me."
Child: Oh! Nobody don't likes me. [3]

As this conversation reveals, children are seemingly unable to detect differences between their ungrammatical sentences and the grammatical sentences that their parents produce. Therefore, children typically cannot use explicit negative evidence to learn that an aspect of grammar, such as using double negatives in English, is ungrammatical. This example also shows that children can make incorrect generalizations about which grammatical principle a parent corrects, suggesting that there must be something other than explicit feedback which drives children to arrive at a correct grammar.

Implicit direct negative evidence

In the input

Implicit direct negative evidence occurs when a parent responds to a child's ungrammatical utterance in a way that indicates that the utterance was not grammatical. This differs from explicit direct negative evidence because the parent merely implies that the child's utterance is ungrammatical, rather than unambiguously telling the child that a sentence they produced is ungrammatical. Parents use several types of implicit direct negative evidence in response to children's ungrammatical utterances: repetitions, recasts, expansions, and requests for clarification. [2] Repetitions occur when a parent repeats a child's utterance word for word, whereas recasts occur when a parent repeats a child's utterance while correcting the ungrammatical part of the sentence. Expansions are similar to recasts in that they are potentially corrective, but in an expansion the parent also lengthens the child's original utterance. Requests for clarification occur when a parent asks a question that can prompt a child to fix an ungrammatical sentence they have just produced. There is general consensus that implicit direct negative evidence exists in the input; for example, it has been argued that parents frequently reformulate children's ungrammatical utterances. [4] The debate concerns whether children can use this evidence to learn the grammar of their language.

Utility

Some types of implicit direct negative evidence, such as reformulations, occur regularly in the input, making them potentially usable forms of evidence for language acquisition. [5] [6] Some studies have demonstrated that parents respond differently to children's grammatical and ungrammatical utterances, which suggests that children can use this parental feedback to learn grammar. [5] [7] [4] Some evidence also supports the hypothesis that children actually use implicit direct negative evidence in practice (see the section on usage below). [4]

Though a number of studies support the hypothesis that children can use the implicit direct negative evidence that exists in the input, other studies contradict it. [8] Linguists who doubt that implicit direct negative evidence is helpful to a language learner argue that studies supporting its utility do not properly specify which types of utterances qualify as recasts. They also point out that some types of implicit direct negative evidence are not necessarily corrective (i.e. parental responses that may qualify as implicit direct negative evidence can occur after either grammatical or ungrammatical utterances). Additionally, some of these linguists question how children would know to pay attention only to certain kinds of recasts and not others. [5]

Furthermore, Gary Marcus argues that implicit direct negative evidence in the input is insufficient for children to learn the correct grammar of their language. [9] He asserts that negative evidence does not explain why sentences are ungrammatical, making it difficult for children to learn why these sentences should be excluded from their grammar. He also argues that for children to be able to use implicit direct negative evidence at all, they would need to receive negative feedback on a sentence 85 times in order to eliminate it from their grammar, but children do not repeat ungrammatical sentences nearly that often. Marcus also contends that implicit evidence is largely unavailable because the feedback differs from parent to parent and is inconsistent in both the frequency with which it is offered and the kinds of errors it corrects. Other studies demonstrate that implicit negative evidence decreases over time, so that as children get older there is less feedback, making it less available and, consequently, less likely to account for children's unlearning of grammatical errors. [10]

Usage

Assuming that implicit direct negative evidence is usable, some studies demonstrate that children do use it to correct their grammatical mistakes. For example, experiments show that children produce more grammatical sentences when parents provide them with any type of immediate implicit direct negative evidence, including recasts, which supports the claim that direct negative evidence assists a child's language learning. [4] Chouinard and Clark also found that children are highly attentive to parental responses and respond to implicit correction in predictable ways. [5] Children tend to respond directly to these reformulations by either affirming the reformulation or disagreeing with their parent if the parent misunderstood the child's intended meaning, revealing that children can discern when parental feedback is meant to correct their grammatical errors. Additionally, children have been shown to correct their initial errors when a parent recasts a morphological error. [11]

However, other researchers have conducted studies suggesting that children do not need negative feedback in order to learn language. [12] One case study tested whether a mute child could comprehend grammar despite having received no corrective feedback (which is only given in response to the ungrammatical sentences a child produces). Though the child did not produce any speech and therefore received no negative feedback, researchers found that he was able to learn grammatical rules. Although this study does not answer whether negative evidence can be helpful for learning language, it does suggest that direct negative evidence is not needed to learn grammar. Another study found that implicit negative evidence was a negative predictor of the rate at which children eliminated ungrammatical utterances from their speech. [13]

Direct negative evidence in language learning

Though there is no consensus regarding whether there is sufficient and usable implicit negative evidence in the input, if children are exposed to direct negative evidence, they could use it to test hypotheses they have made about their grammar. On the other hand, if there is not sufficient usable negative evidence in the input, then there is a "no negative evidence" problem: how can a learner acquire a language without negative evidence? This is a problem because if a child only hears grammatical sentences, and those sentences are consistent with multiple grammars, it would be impossible to determine which grammar is correct unless some other factor influenced which grammar the child ultimately hypothesizes to be correct.

Proponents of linguistic nativism suggest that the answer to the "no negative evidence" problem is that linguistic knowledge which cannot be learned from the input is innate. They argue that the input is not rich enough for children to develop a complete grammar from it alone, a view referred to as the poverty of the stimulus argument. Its central idea is that children could entertain multiple hypotheses about aspects of their grammar which are distinguishable only by negative evidence (that is, by hearing ungrammatical sentences and recognizing them as ungrammatical). Supporters of the argument assert that because the negative evidence needed to learn language from the input alone does not exist, children cannot learn certain aspects of grammar from the input alone, and therefore some aspects of grammar must involve innate mechanisms. [14] [15]

Indirect negative evidence in language acquisition

Indirect negative evidence refers to using what is absent from the input to make an inference about what is not possible. For example, when we see a dog bark, we are likely to conclude that dogs bark, not that every kind of animal barks: because we have never seen horses or fish or any other animal bark, our hypothesis becomes that only dogs bark. [16] The same kind of inference leads us to assume that the sun will rise tomorrow, having seen it rise every other day so far. Nothing in the evidence rules out the possibility that the sun fails to rise once every two thousand years, or that it rises only in years other than 2086, but since all the evidence seen so far is consistent with the universal generalization, we infer that the sun does indeed rise every day. [16] In language acquisition, indirect negative evidence may be used to constrain a child's grammar: if a child never hears a certain construction, the child may conclude that it is ungrammatical. [1]

Utility of indirect negative evidence

Indirect negative evidence and word learning

Child and adult speakers rely on 'suspicious coincidences' when learning a new word meaning. [17] Children and adults use only the first few instances of hearing a new word to decide what it means, rapidly narrowing their hypothesis if the word is used only in a narrow context. In an experiment conducted by Fei Xu and Joshua Tenenbaum, 4-year-old participants learning a novel word 'fep' readily decided that it referred only to Dalmatians after hearing it only while being shown pictures of Dalmatians; although they received no information that 'fep' could not refer to other kinds of dogs, the suspicious coincidence that they had never heard it in other contexts led them to restrict its meaning to just one breed. [17]
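
This "suspicious coincidence" reasoning can be illustrated with a small numerical sketch. The Python fragment below is not Xu and Tenenbaum's actual model; the hypothesis space, the extension sizes, and the uniform prior are invented for illustration. It shows how a size-principle likelihood makes the narrow meaning overtake broader meanings as Dalmatian-only examples accumulate.

```python
# Minimal sketch of the size principle (illustrative values, not the
# published model): each hypothesis is a candidate meaning for 'fep',
# paired with how many distinct objects it could refer to.
hypotheses = {
    "dalmatian": 10,    # narrow meaning: 10 possible referents (assumed)
    "dog": 100,         # broader meaning (assumed)
    "animal": 1000,     # broadest meaning (assumed)
}

def posterior(n_examples, hypotheses):
    """Posterior over meanings after n_examples, all of them Dalmatians.

    Size principle: drawing each example at random from a hypothesis's
    extension has probability 1/size, so n examples contribute
    (1/size)**n.  A uniform prior is assumed for simplicity.
    """
    scores = {h: (1.0 / size) ** n_examples for h, size in hypotheses.items()}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

for n in (1, 3):
    print(n, posterior(n, hypotheses))
# Even after one Dalmatian example the narrow meaning is favoured; after
# three Dalmatian-only examples it dominates the posterior almost entirely,
# even though no example ever ruled out the broader meanings.
```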

Indirect negative evidence and syntax

Child language learners can use this same type of probabilistic inference to decide when and how verbs can be used. A word children hear often, like 'disappear', therefore provides more evidence about its possible uses than a less common word with a similar meaning, like 'vanish'. Children in one study judged the sentence "*We want to disappear our heads" to be ungrammatical, but when given the same sentence with vanish, they were less sure of its grammaticality. [18] Given the frequency of 'disappear' in intransitive clauses, learners could infer that if it were possible in transitive clauses, they would have heard it in those contexts. Thus, the high frequency of the intransitive use leads to the inference that the transitive use is impossible. [19] This inference is less reliable with a less frequent verb like "vanish" because children have not heard enough instances of the verb to infer that it is never transitive.
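
The role of verb frequency in this inference can be put in rough Bayesian terms. The sketch below is purely illustrative; the prior probability and the assumed rate of transitive use are invented values, not estimates from any corpus or published model. It shows why many intransitive-only encounters with a verb license a stronger conclusion than a handful of such encounters.

```python
def p_never_transitive(n_intransitive_uses, prior=0.5, transitive_rate=0.2):
    """Posterior probability that a verb is strictly intransitive.

    Assumptions (all hypothetical): a prior of `prior` that the verb is
    strictly intransitive, and, if it does allow transitive uses, that a
    fraction `transitive_rate` of its uses would be transitive.
    """
    # Likelihood of hearing n intransitive uses in a row under each hypothesis.
    like_strict = 1.0                                   # always intransitive
    like_alternating = (1 - transitive_rate) ** n_intransitive_uses
    evidence = prior * like_strict + (1 - prior) * like_alternating
    return prior * like_strict / evidence

print(p_never_transitive(5))    # few encounters ('vanish'-like): ~0.75
print(p_never_transitive(100))  # many encounters ('disappear'-like): ~1.0
```

On these toy assumptions, five intransitive-only encounters leave real uncertainty, while a hundred make the "never transitive" hypothesis near certain, mirroring the children's graded judgements.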

It has been argued that children use indirect negative evidence to make probabilistic inferences about the syntax of the language they are acquiring. A 2004 study by Regier and Gahl produced a computational model that supports this argument. [16] They assert that children can use the absence of particular patterns in the input to conclude that such patterns are illicit. According to Regier and Gahl, young language learners form hypotheses about what is and is not correct based on probabilistic inferences: as children are exposed to more and more examples of a certain phenomenon, their hypothesis space narrows. Notably, Regier and Gahl assert that this capacity for probabilistic inference can be used in all sorts of general learning tasks, not just linguistic ones. They also present their model as evidence against the argument from the poverty of the stimulus, because it illustrates that syntactic learning is possible from the input alone and does not necessarily require innate linguistic knowledge of syntax.

However, probabilistic inferences based on indirect negative evidence can lead children to make an incorrect hypothesis about their language, which leads to errors in early language production. In his variational model, Charles Yang notes that, based on indirect negative evidence, English-acquiring children could conclude that English is a topic-drop language like Chinese. A topic-drop language allows the subject and object to be dropped from a sentence as long as they are the topic. English does not allow topic-drop, as evidenced by the insertion of expletive subjects in sentences such as "There is rain". Yang notes that in English child-directed speech, children very rarely hear expletive subjects, which leads English-acquiring children to conclude, temporarily, that English is a topic-drop language. His assertion is supported by the parallels between the topic-drop errors made by English-acquiring children and the licit topic-drop sentences produced by Chinese-speaking adults. [20]
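
The logic of this account can be illustrated with a toy simulation. The sketch below is not Yang's implementation; the learning rate, the rate of expletive subjects in the input, and the linear reward-penalty update are assumptions made for illustration only. It shows why, when the only unambiguous evidence against topic-drop is rare, the weight of the topic-drop grammar declines only gradually.

```python
import random

# Toy competition between two grammars for English input: a topic-drop
# grammar and an obligatory-subject grammar.  Only expletive-subject
# sentences ("There is rain") are taken to be unanalysable by the
# topic-drop grammar.  Both parameter values below are assumptions.
GAMMA = 0.02             # learning rate (hypothetical)
EXPLETIVE_RATE = 0.012   # assumed share of input with expletive subjects

def run(n_sentences, rng, p=0.5):
    """Weight of the topic-drop grammar after n_sentences of input."""
    for _ in range(n_sentences):
        expletive = rng.random() < EXPLETIVE_RATE
        if rng.random() < p:                  # learner tries topic-drop
            if expletive:
                p *= 1 - GAMMA                # analysis fails: punished
            else:
                p += GAMMA * (1 - p)          # analysis succeeds: rewarded
        else:                                 # tries obligatory-subject
            p *= 1 - GAMMA                    # it succeeds on any English
                                              # input, so topic-drop loses weight
    return p

def average_weight(n_sentences, n_runs=200):
    rng = random.Random(42)
    return sum(run(n_sentences, rng) for _ in range(n_runs)) / n_runs

# In expectation the topic-drop weight shrinks only by a factor of
# (1 - GAMMA * EXPLETIVE_RATE) per sentence, so it is still substantial
# after a couple of thousand sentences and approaches zero only after
# tens of thousands, mirroring a transient topic-drop stage.
print(average_weight(2_000))
print(average_weight(50_000))
```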

Some researchers argue that indirect negative evidence is unnecessary for language acquisition. For example, Abend et al. built a Bayesian inference model that mimics a child's acquisition of English, using only data from a single child in the CHILDES corpus. They found that the model successfully learned English word order and the mappings between word labels and their semantic meanings (i.e. word learning), and that it used the surrounding syntax to infer the meaning of novel words. [21] They conclude that the model's success shows that it is possible for children to acquire language from positive evidence alone, as the model did not make use of what was absent from the input.

References

  1. Lust, Barbara (2007). Child Language: Acquisition and Growth. Cambridge University Press. pp. 30–31. ISBN 978-0521449229.
  2. Brown, R.; Hanlon, C. (1970). "Derivational complexity and order of acquisition in child speech". In Hayes, J. (ed.), Cognition and the Development of Language (pp. 11–53). New York: Wiley.
  3. McNeill, D. (1966). "Developmental psycholinguistics". In Smith, F.; Miller, G. (eds.), The Genesis of Language. MIT Press.
  4. Saxton, Matthew (2000). "Negative evidence and negative feedback: immediate effects on the grammaticality of child speech". First Language. 20 (60): 221–252. doi:10.1177/014272370002006001.
  5. Chouinard, Michelle M.; Clark, Eve V. (2003). "Adult reformulations of child errors as negative evidence". Journal of Child Language. 30 (3): 637–669. doi:10.1017/s0305000903005701.
  6. Snow, C. E.; Ferguson, C. A., eds. (1977). Talking to Children: Language Input and Acquisition. Cambridge University Press.
  7. Penner, Sharon (April 1987). "Parental Responses to Grammatical and Ungrammatical Child Utterances". Child Development. 58 (2): 376–385.
  8. Rohde, Douglas L. T.; Plaut, David C. (1999). "Language acquisition in the absence of explicit negative evidence: how important is starting small?" (PDF). Cognition. 72: 67–109.
  9. Marcus, G. F. (January 1993). "Negative evidence in language acquisition". Cognition. 46 (1): 53–85. PMID 8432090.
  10. Hirsh-Pasek, K.; Treiman, R.; Schneiderman, M. (1984). "Brown and Hanlon revisited: Mothers' sensitivity to ungrammatical forms". Journal of Child Language. 11: 81–88.
  11. Farrar, Michael J. (1992). "Negative evidence and grammatical morpheme acquisition". Developmental Psychology. 28 (1): 90–98. doi:10.1037/0012-1649.28.1.90.
  12. Bohannon, John N.; MacWhinney, Brian; Snow, Catherine (1990). "No negative evidence revisited: Beyond learnability or who has to prove what to whom". Developmental Psychology. 26 (2): 221–226. doi:10.1037/0012-1649.26.2.221.
  13. Morgan, J. L.; Bonamo, K. M.; Travis, L. L. (1995). "Negative evidence on negative evidence". Developmental Psychology. 31 (2): 180–197. doi:10.1037/0012-1649.31.2.180.
  14. Lasnik, Howard; Lidz, Jeffrey L. (2016). "The Argument from the Poverty of the Stimulus". doi:10.1093/oxfordhb/9780199573776.013.10.
  15. Baker, C. L. (1979). "Syntactic Theory and the Projection Problem". Linguistic Inquiry. 10 (4): 533–581. doi:10.2307/4178133.
  16. Regier, Terry; Gahl, Susanne (2004). "Learning the unlearnable: the role of missing evidence". Cognition. 93 (2): 147–155. doi:10.1016/j.cognition.2003.12.003. PMID 15147936. S2CID 17024868.
  17. Xu, Fei; Tenenbaum, Joshua B. (2007). "Word Learning as Bayesian Inference". Psychological Review. 114 (2): 245–272. CiteSeerX 10.1.1.57.9649. doi:10.1037/0033-295X.114.2.245. PMID 17500627.
  18. Ambridge, Ben; Pine, J. M.; Rowland, C. F.; Young, C. R. (2008). "The effect of verb semantic class and verb frequency (entrenchment) on children's and adults' graded judgements of argument-structure overgeneralization errors". Cognition. 106 (1): 87–129. doi:10.1016/j.cognition.2006.12.015. hdl:11858/00-001M-0000-002B-4C4F-7. PMID 17316595. S2CID 2425407.
  19. Bowerman, Melissa (1988). Explaining Language Universals (PDF). Oxford: Basil Blackwell. pp. 73–101.
  20. Yang, Charles D. (2004). "Universal Grammar, statistics or both?". Trends in Cognitive Sciences. 8 (10): 451–456. doi:10.1016/j.tics.2004.08.006. ISSN 1364-6613. PMID 15450509. S2CID 14945080.
  21. Abend, Omri; Kwiatkowski, Tom; Smith, Nathaniel J.; Goldwater, Sharon; Steedman, Mark (2017). "Bootstrapping language acquisition" (PDF). Cognition. 164: 116–143. doi:10.1016/j.cognition.2017.02.009. PMID 28412593. S2CID 206866667.