Working memory is the ability to actively hold in mind the information needed to perform complex tasks such as reasoning, comprehension, and learning. Working memory tasks are those that require the goal-oriented active monitoring or manipulation of information or behaviors in the face of interfering processes and distractions. The cognitive processes involved include the executive and attentional control of short-term memory, which provide for the interim integration, processing, disposal, and retrieval of information. Working memory is a theoretical concept central to both cognitive psychology and neuroscience.
Theories exist regarding both the theoretical structure of working memory and the role of specific brain regions involved in it. Research identifies the frontal cortex, parietal cortex, anterior cingulate, and parts of the basal ganglia as crucial. The neural basis of working memory has been derived from lesion experiments in animals and functional imaging in humans.
The term “working memory” was coined by Miller, Galanter, and Pribram, and was used in the 1960s in the context of theories that likened the mind to a computer. Atkinson and Shiffrin (1968) also used the term “working memory” (p. 92) to describe their “short-term store.” What we now call working memory had previously been referred to as a “short-term store,” short-term memory, primary memory, immediate memory, operant memory, or provisional memory. Short-term memory is the ability to remember information over a brief period of time (on the order of seconds). Most theorists today use the concept of working memory to replace or include the older concept of short-term memory, marking a stronger emphasis on the manipulation of information rather than passive maintenance.
The earliest experiments on the neural basis of working memory can be traced back over 100 years, to when Hitzig and Ferrier described ablation experiments on the prefrontal cortex (PFC); they concluded that the frontal cortex was important for cognitive rather than sensory processes. In 1935 and 1936, Carlyle Jacobsen and colleagues were the first to show the deleterious effect of prefrontal ablation on delayed response.
There have been numerous models proposed regarding how working memory functions, both anatomically and cognitively. Of those, three that are well known are summarized below.
Baddeley and Hitch
Baddeley and Hitch (1974) introduced and made popular the multicomponent model of working memory. This theory proposes that two “slave systems” are responsible for short-term maintenance of information, and a “central executive” is responsible for the supervision of information integration and for coordinating the slave systems. One slave system, the phonological loop (PL), stores phonological information (i.e., the sound of language) and prevents its decay by continuously articulating its contents, thereby refreshing the information in a rehearsal loop. It can, for example, maintain a seven-digit telephone number for as long as one repeats the number to oneself again and again. The other slave system, the visuo-spatial sketch pad (VSSP), stores visual and spatial information. It can be used, for example, for constructing and manipulating visual images, and for the representation of mental maps. The sketch pad can be further broken down into a visual subsystem (dealing with, for instance, shape, colour, and texture), and a spatial subsystem (dealing with location). The central executive (see executive system) is, among other things, responsible for directing attention to relevant information, suppressing irrelevant information and inappropriate actions, and for coordinating cognitive processes when more than one task must be done at the same time.
Baddeley (2000) extended the model by adding a fourth component, the episodic buffer, which holds representations that integrate phonological, visual, and spatial information, and possibly information not covered by the slave systems (e.g., semantic information, musical information). The component is episodic because it is assumed to bind information into a unitary episodic representation. The episodic buffer resembles Tulving’s concept of episodic memory, but it differs in that the episodic buffer is a temporary store.
Cowan
Cowan regards working memory not as a separate system, but as a part of long-term memory. Representations in working memory are a subset of the representations in long-term memory. Working memory is organized into two embedded levels. The first level consists of long-term memory representations that are activated. There can be many of these: there is no limit to the number of representations in long-term memory that can be activated. The second level is called the focus of attention. The focus is regarded as capacity-limited and holds up to four of the activated representations.
Oberauer has extended the Cowan model by adding a third component, a more narrow focus of attention that holds only one chunk at a time. The one-element focus is embedded in the four-element focus and serves to select a single chunk for processing. For example, you can hold four digits in mind at the same time in Cowan’s “focus of attention”. Now imagine that you wish to perform some process on each of these digits, for example, adding the number two to each digit. Separate processing is required for each digit, as most individuals cannot perform several mathematical processes in parallel. Oberauer’s attentional component selects one of the digits for processing, and then shifts the attentional focus to the next digit, continuing until all of the digits have been processed.
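Oberauer’s serial selection process can be sketched as a loop. This is only an illustrative analogy; the function name and structure below are hypothetical, not part of the model’s formal specification.

```python
def process_chunks(chunks, operation):
    """Sketch of Oberauer's one-element focus of attention.

    The broad focus (Cowan's focus of attention) holds up to four chunks;
    a narrower one-element focus selects a single chunk for processing,
    then shifts to the next, since processing is serial, not parallel.
    """
    assert len(chunks) <= 4, "the broad focus holds at most ~4 chunks"
    results = []
    for chunk in chunks:                  # the one-element focus shifts serially
        results.append(operation(chunk))  # only the selected chunk is processed
    return results

# Add two to each of four digits held in mind, one digit at a time:
print(process_chunks([3, 1, 4, 2], lambda digit: digit + 2))  # [5, 3, 6, 4]
```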
Ericsson and Kintsch
Ericsson and Kintsch (1995) have argued that we use skilled memory in most everyday tasks. Tasks such as reading, for instance, require us to maintain in memory much more than seven chunks: with a capacity of only seven chunks our working memory would be full after a few sentences, and we would never be able to understand the complex relations between thoughts expressed in a novel or a scientific text. We accomplish this by storing most of what we read in long-term memory, linking the ideas together through retrieval structures. We need to hold only a few concepts in working memory, which serve as cues to retrieve everything associated with them by the retrieval structures. Anders Ericsson and Walter Kintsch refer to this set of processes as “long-term working memory”. Retrieval structures vary according to the domain of expertise, yet, as suggested by Gobet, they can be categorized into three types: generic retrieval structures, domain knowledge retrieval structures, and episodic text structures. The first corresponds to Ericsson and Kintsch’s ‘classic’ retrieval structure and the second to the elaborated memory structure. The first kind of structure is developed deliberately and is arbitrary (e.g., the method of loci), the second is similar to patterns and schemas, and the last arises exclusively during text comprehension. Concerning this last type, Kintsch, Patel, and Ericsson consider that every reader is able to form an episodic text structure during text comprehension, provided the text is well written and its content familiar.
Working memory is generally considered to have limited capacity. The earliest quantification of the capacity limit associated with short-term memory was the “magical number seven” introduced by Miller (1956). He noticed that the memory span of young adults was around seven elements, called chunks, regardless of whether the elements were digits, letters, words, or other units. Later research revealed that span does depend on the category of chunks used (e.g., span is around seven for digits, around six for letters, and around five for words), and even on features of the chunks within a category. For instance, span is lower for long words than for short words. In general, memory span for verbal contents (digits, letters, words, etc.) strongly depends on the time it takes to speak the contents aloud and on the lexical status of the contents (i.e., whether the contents are words known to the person or not). Several other factors also affect a person’s measured span, and therefore it is difficult to pin down the capacity of short-term or working memory to a number of chunks. Nonetheless, Cowan (2001) has proposed that working memory has a capacity of about four chunks in young adults (and fewer in children and old adults).
Whereas most adults can repeat about seven digits in correct order, some individuals have shown impressive enlargements of their digit span – up to 80 digits. This feat is made possible by extensive training on an encoding strategy by which the digits in a list are grouped (usually in groups of three to five) and these groups are encoded as a single unit (a chunk). To do so, one must be able to recognize the groups as some known string of digits. One person studied by K. Anders Ericsson and his colleagues, for example, used his extensive knowledge of racing times from the history of sports. Several such chunks can then be combined into a higher-order chunk, forming a hierarchy of chunks. In this way, only a small number of chunks at the highest level of the hierarchy must be retained in working memory. At retrieval, the chunks are unpacked again: the chunks in working memory act as retrieval cues that point to the digits they contain. It is important to note that practicing memory skills such as these does not expand working memory capacity proper. This can be shown by using different materials: the person who could recall 80 digits was not exceptional at recalling words.
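The hierarchical chunking strategy described above can be sketched in code. This is an illustrative toy with hypothetical names, not a model of the actual mnemonist: digits are grouped into chunks, chunks into higher-order chunks, and at retrieval each chunk acts as a cue that unpacks into its contents.

```python
def chunk(seq, size):
    """Group a sequence into tuples ("chunks") of the given size."""
    return [tuple(seq[i:i + size]) for i in range(0, len(seq), size)]

digits = list(range(80))      # an 80-digit list, as in the trained subject
level1 = chunk(digits, 4)     # 80 digits  -> 20 chunks of 4 digits
level2 = chunk(level1, 5)     # 20 chunks  -> 4 higher-order chunks
# Only the 4 top-level chunks need to be held in working memory.

def unpack(node):
    """At retrieval, recursively unpack a chunk into the digits it contains."""
    if isinstance(node, tuple):
        return [d for part in node for d in unpack(part)]
    return [node]

recalled = [d for top in level2 for d in unpack(top)]
print(recalled == digits, len(level2))  # True 4
```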
Measures and correlates
Working memory capacity can be tested with a variety of tasks. A commonly used measure is a dual-task paradigm combining a memory span measure with a concurrent processing task, sometimes referred to as “complex span”. Daneman and Carpenter invented the first version of this kind of task, the “reading span”, in 1980. Subjects read a number of sentences (usually between two and six) and try to remember the last word of each sentence. At the end of the list of sentences, they repeat back the words in their correct order. Other tasks that do not have this dual-task nature have also been shown to be good measures of working memory capacity. The question of what features a task must have to qualify as a good measure of working memory capacity is a topic of ongoing research.
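As a rough illustration, a single reading-span trial might be scored as follows. This is a sketch under the common strict serial-recall criterion; the function name is hypothetical, not a standard scoring procedure.

```python
def score_trial(final_words, recalled):
    """Credit a reading-span trial only if all sentence-final words
    are recalled in their correct order (strict serial scoring)."""
    return recalled == final_words

# Final words of three sentences read by the subject:
trial = ["house", "river", "candle"]
print(score_trial(trial, ["house", "river", "candle"]))  # True
print(score_trial(trial, ["river", "house", "candle"]))  # False: wrong order
```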
Measures of working-memory capacity are strongly related to performance on other complex cognitive tasks, such as reading comprehension and problem solving, and to measures of intelligence. Some researchers have argued that working memory capacity reflects the efficiency of executive functions, most notably the ability to maintain a few task-relevant representations in the face of distracting irrelevant information. The tasks seem to reflect individual differences in the ability to focus and maintain attention, particularly when other events are serving to capture attention. These effects seem to be a function of frontal brain areas.
Others have argued that the capacity of working memory is better characterized as the ability to mentally form relations between elements, or to grasp relations in given information. This idea has been advanced, among others, by Graeme Halford, who illustrated it by our limited ability to understand statistical interactions between variables. These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, as in the following sentence: “If the cake is from France, then it has more sugar if it is made with chocolate than if it is made with cream, but if the cake is from Italy, then it has more sugar if it is made with cream than if it is made of chocolate.” This statement describes a relation between three variables (country, ingredient, and amount of sugar), which is the maximum most individuals can understand. The capacity limit apparent here is obviously not a memory limit (all relevant information can be seen continuously) but a limit on how many relationships are discerned simultaneously.
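The three-way relation in the cake example can be written out explicitly, which shows why it taxes relational capacity: the effect of the ingredient reverses depending on the country, so neither variable can be interpreted alone. The function below is, of course, only an illustrative encoding of the quoted statement.

```python
def has_more_sugar(country, ingredient):
    """Encode the quoted three-variable relation: which ingredient makes
    the cake sweeter depends on the country of origin."""
    if country == "France":
        return ingredient == "chocolate"  # French: chocolate > cream
    if country == "Italy":
        return ingredient == "cream"      # Italian: cream > chocolate
    raise ValueError("unknown country")

print(has_more_sugar("France", "chocolate"))  # True
print(has_more_sugar("Italy", "chocolate"))   # False
```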
There are several hypotheses about the nature of the capacity limit. One is that a limited pool of cognitive resources is needed to keep representations active, and thereby available for processing, and to carry out processes. Another hypothesis is that memory traces in working memory decay within a few seconds unless refreshed through rehearsal, and because the speed of rehearsal is limited, we can maintain only a limited amount of information. Yet another idea is that representations held in working memory interfere with each other.
There are several forms of interference discussed by theorists. One of the oldest ideas is that new items simply replace older ones in working memory. Another form of interference is retrieval competition. For example, when the task is to remember a list of 7 words in their order, we need to start recall with the first word. While trying to retrieve the first word, the second word, which is represented in close proximity, is accidentally retrieved as well, and the two compete for being recalled. Errors in serial recall tasks are often confusions of neighboring items on a memory list (so-called transpositions), showing that retrieval competition plays a role in limiting our ability to recall lists in order, and probably also in other working memory tasks. A third form of interference assumed by some authors is feature overwriting. The idea is that each word, digit, or other item in working memory is represented as a bundle of features, and when two items share some features, one of them steals the features from the other. The more items are held in working memory, and the more their features overlap, the more each of them will be degraded by the loss of some features.
Time-based resource sharing model
The theory most successful so far in explaining experimental data on the interaction of maintenance and processing in working memory is the “time-based resource sharing model”. This theory assumes that representations in working memory decay unless they are refreshed. Refreshing them requires an attentional mechanism that is also needed for any concurrent processing task. When there are small time intervals in which the processing task does not require attention, this time can be used to refresh memory traces. The theory therefore predicts that the amount of forgetting depends on the temporal density of attentional demands of the processing task – this density is called “cognitive load”. The cognitive load depends on two variables, the rate at which the processing task requires individual steps to be carried out, and the duration of each step. For example, if the processing task consists of adding digits, then having to add another digit every half second places a higher cognitive load on the system than having to add another digit every two seconds. Adding larger digits takes more time than adding smaller digits, and therefore cognitive load is higher when larger digits must be added. In a series of experiments, Barrouillet and colleagues have shown that memory for lists of letters depends on cognitive load, but not on the number of processing steps (a finding that is difficult to explain by an interference hypothesis) and not on the total time of processing (a finding difficult to explain by a simple decay hypothesis). One difficulty for the time-based resource-sharing model, however, is that the similarity between memory materials and materials processed also affects memory accuracy.
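The notion of cognitive load in this model can be expressed as a simple proportion, sketched below. The formula is a plain reading of the two variables named above (rate of steps and duration of each step); the exact formalization in the literature may differ.

```python
def cognitive_load(n_steps, step_duration, total_time):
    """Fraction of available time during which the processing task
    occupies attention and blocks refreshing of memory traces."""
    return (n_steps * step_duration) / total_time

# Assume each addition occupies attention for 0.3 s, over a 4 s interval:
print(cognitive_load(n_steps=8, step_duration=0.3, total_time=4.0))  # one addition every 0.5 s
print(cognitive_load(n_steps=2, step_duration=0.3, total_time=4.0))  # one addition every 2 s
```

The first call yields a load of 0.6, the second 0.15: the faster pace leaves far less free time for refreshing memory traces, so more forgetting is predicted.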
None of these hypotheses can explain the experimental data entirely. The resource hypothesis, for example, was meant to explain the trade-off between maintenance and processing: the more information must be maintained in working memory, the slower and more error-prone concurrent processes become, and with a higher demand on concurrent processing, memory suffers. This trade-off has been investigated with tasks like the reading-span task described above. It has been found that the amount of trade-off depends on the similarity of the information to be remembered and the information to be processed. For example, remembering numbers while processing spatial information, or remembering spatial information while processing numbers, impair each other much less than when material of the same kind must be remembered and processed. Also, remembering words and processing digits, or remembering digits and processing words, is easier than remembering and processing materials of the same category. These findings are also difficult to explain for the decay hypothesis, because decay of memory representations should depend only on how long the processing task delays rehearsal or recall, not on the content of the processing task. A further problem for the decay hypothesis comes from experiments in which the recall of a list of letters was delayed, either by instructing participants to recall at a slower pace or by instructing them to say an irrelevant word once or three times between recall of each letter. Delaying recall had virtually no effect on recall accuracy. Interference theory seems to fare best at explaining why the similarity between memory contents and the contents of concurrent processing tasks affects how much they impair each other: more similar materials are more likely to be confused, leading to retrieval competition, and they have more overlapping features, leading to more feature overwriting.
One experiment directly manipulated the amount of overlap of phonological features between words to be remembered and other words to be processed. Those to-be-remembered words that had a high degree of overlap with the processed words were recalled worse, lending some support to the idea of interference through feature overwriting.
Measures of performance on tests of working memory increase continuously between early childhood and adolescence, while the structure of correlations between different tests remains largely constant. Thus, the development of working memory can be described as quantitative growth rather than qualitative change. Starting with work in the Neo-Piagetian tradition, theorists have argued that the growth of working-memory capacity is a major driving force of cognitive development. This hypothesis has received substantial empirical support from studies showing that the capacity of working memory is a strong predictor of cognitive abilities in childhood. Particularly strong evidence for a role of working memory in development comes from a longitudinal study showing that working-memory capacity at one age predicts reasoning ability at a later age. Studies in the Neo-Piagetian tradition have added to this picture by analyzing the complexity of cognitive tasks in terms of the number of items or relations that have to be considered simultaneously for a solution. Across a broad range of tasks, children manage task versions of the same level of complexity at about the same age, consistent with the view that working memory capacity limits the complexity they can handle at a given age.
Working memory is among the cognitive functions most sensitive to decline in old age. Several explanations for this decline have been offered. One is the processing speed theory of cognitive aging by Tim Salthouse. Based on the finding of general slowing of cognitive processes with age, Salthouse argues that slower processing leaves more time for working-memory contents to decay, thus reducing effective capacity. However, the decline of working-memory capacity cannot be entirely attributed to slowing, because capacity declines more in old age than speed does. Another proposal is the inhibition hypothesis advanced by Lynn Hasher and Rose Zacks. This theory assumes a general deficit in old age in the ability to inhibit irrelevant, or no-longer-relevant, information. Working memory therefore tends to be cluttered with irrelevant contents that reduce the effective capacity for relevant content. The assumption of an inhibition deficit in old age has received much empirical support, but so far it is not clear whether the decline in inhibitory ability fully explains the decline of working-memory capacity. An explanation at the neural level of the decline of working memory and other cognitive functions in old age has been proposed by West. He argued that working memory depends to a large degree on the PFC, which deteriorates more than other brain regions as we grow old.
One theory of attention-deficit hyperactivity disorder (ADHD) states that the disorder entails deficits in working memory. Studies suggest that working memory can be improved in ADHD patients through training with computerized programs. One randomized controlled study found that a period of working memory training increases a range of cognitive abilities and raises IQ test scores, supporting previous findings that working memory underlies general intelligence. Another study by the same group showed that, after training, measured brain activity related to working memory increased in the prefrontal cortex, an area that many researchers have associated with working memory functions. Working memory training has also been shown to lead to measurable density changes in cortical dopamine neuroreceptors in test persons.
A controversial study has shown that training with a working memory task (the dual n-back task) improves performance on a very specific fluid intelligence test in healthy young adults. The study’s conclusion that improving or augmenting the brain’s working memory ability increases fluid intelligence is backed by some and questioned by others. The study was replicated in 2010.
In his 2009 book The Overflowing Brain, Torkel Klingberg proposes that working memory is enhanced through exposure to excess neural activation. An individual’s brain map, he argues, can be altered by this activation to create a larger area of the brain activated by a particular type of sensory experience. For example, in learning to play guitar, the area activated by sensory impressions of the instrument is larger in the brain of a player than in that of a nonplayer.
There is evidence that optimal working memory performance links to the neural ability to focus attention on task-relevant information and ignore distractions, and that practice-related improvement in working memory is due to increasing these abilities.
Working memory performance may also be increased by high-intensity exercise. A study was conducted with both sedentary and active females 18–25 years old in which the effects of short-term exercise to exhaustion on working memory were measured. While the working memory of the subjects decreased during and immediately after the exercise bouts, it improved following recovery.
However, a recent review paper has called into question much of the “success” of working memory training studies. Shipstead et al. (2010) point out that working memory training studies are plagued by poor experimental design. The majority of training studies utilize a no-contact control group, making it impossible to determine whether any benefit of training is due to actual improvement or a Hawthorne effect.
Physiology and psychopharmacology
The first insights into the neuronal and neurotransmitter basis of working memory came from animal research. The work of Jacobsen and Fulton in the 1930s first showed that lesions to the PFC impaired spatial working memory performance in monkeys. The later work of Fuster recorded the electrical activity of neurons in the PFC of monkeys while they were doing a delayed matching task. In that task, the monkey watches the experimenter place a bit of food under one of two identical-looking cups. A shutter is then lowered for a variable delay period, screening off the cups from the monkey’s view. After the delay, the shutter opens and the monkey is allowed to retrieve the food from under the cups. Successful retrieval on the first attempt – something the animal can achieve after some training on the task – requires holding the location of the food in memory over the delay period. Fuster found neurons in the PFC that fired mostly during the delay period, suggesting that they were involved in representing the food location while it was invisible. Later research has shown similar delay-active neurons also in the posterior parietal cortex, the thalamus, the caudate, and the globus pallidus. The work of Goldman-Rakic and others showed that principal sulcal, dorsolateral PFC interconnects with all of these brain regions, and that neuronal microcircuits within the PFC are able to maintain information in working memory through recurrent excitatory glutamate networks of pyramidal cells that continue to fire throughout the delay period. These circuits are tuned by lateral inhibition from GABAergic interneurons. The neuromodulatory arousal systems markedly alter PFC working memory function; for example, either too little or too much dopamine or norepinephrine impairs PFC network firing and working memory performance.
Localization of brain functions in humans has become much easier with the advent of brain imaging methods (PET and fMRI). This research has confirmed that areas in the PFC are involved in working memory functions. During the 1990s, much debate centered on the different functions of the ventrolateral (i.e., lower) and dorsolateral (higher) areas of the PFC. One view was that the dorsolateral areas are responsible for spatial working memory and the ventrolateral areas for non-spatial working memory. Another view proposed a functional distinction, arguing that ventrolateral areas are mostly involved in pure maintenance of information, whereas dorsolateral areas are more involved in tasks requiring some processing of the memorized material. The debate is not entirely resolved, but most of the evidence supports the functional distinction.
Brain imaging has also revealed that working memory functions are not limited to the PFC. A review of numerous studies shows areas of activation during working memory tasks scattered over a large part of the cortex. There is a tendency for spatial tasks to recruit more right-hemisphere areas, and for verbal and object working memory to recruit more left-hemisphere areas. The activation during verbal working memory tasks can be broken down into one component reflecting maintenance, in the left posterior parietal cortex, and a component reflecting subvocal rehearsal, in the left frontal cortex (Broca’s area, known to be involved in speech production).
There is an emerging consensus that most working memory tasks recruit a network of PFC and parietal areas. A study has shown that during a working memory task the connectivity between these areas increases. Another study has demonstrated that these areas are necessary for working memory, and not simply activated accidentally during working memory tasks, by temporarily blocking them through transcranial magnetic stimulation (TMS), thereby producing an impairment in task performance.
A current debate concerns the function of these brain areas. The PFC has been found to be active in a variety of tasks that require executive functions. This has led some researchers to argue that the role of PFC in working memory is in controlling attention, selecting strategies, and manipulating information in working memory, but not in maintenance of information. The maintenance function is attributed to more posterior areas of the brain, including the parietal cortex. Other authors interpret the activity in parietal cortex as reflecting executive functions, because the same area is also activated in other tasks requiring executive attention but no memory.
Working memory has been suggested to involve two processes with different neuroanatomical locations in the frontal and parietal lobes: first, a selection operation that retrieves the most relevant item, and second, an updating operation that shifts the focus of attention to it. Updating the attentional focus has been found to involve transient activation in the caudal superior frontal sulcus and posterior parietal cortex, while increasing demands on selection selectively changes activation in the rostral superior frontal sulcus and posterior cingulate/precuneus.
Articulating the differential functions of brain regions involved in working memory depends on tasks able to distinguish these functions. Most brain imaging studies of working memory have used recognition tasks such as delayed recognition of one or several stimuli, or the n-back task, in which each new stimulus in a long series must be compared to the one presented n steps back in the series. The advantage of recognition tasks is that they require minimal movement (just pressing one of two keys), making fixation of the head in the scanner easier. Experimental research and research on individual differences in working memory, however, have largely used recall tasks (e.g., the reading span task described above). It is not clear to what degree recognition and recall tasks reflect the same processes and the same capacity limitations.
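The n-back comparison described above is straightforward to sketch (a minimal illustration; the function name is hypothetical):

```python
def n_back_matches(stimuli, n):
    """For each position from n onward, report whether the stimulus
    matches the one presented n steps back in the series."""
    return [stimuli[i] == stimuli[i - n] for i in range(n, len(stimuli))]

# 2-back: compare each letter with the one shown two steps earlier.
print(n_back_matches(["A", "B", "A", "C", "A", "A"], n=2))
# [True, False, True, False]
```

In the actual task, of course, the subject must make each comparison from memory, holding the last n stimuli in mind while the series continues.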
A few brain imaging studies have been conducted with the reading span task or related tasks. Increased activation during these tasks was found in the PFC and, in several studies, also in the anterior cingulate cortex (ACC). People performing better on the task showed larger increase of activation in these areas, and their activation was correlated more over time, suggesting that their neural activity in these two areas was better coordinated, possibly due to stronger connectivity.
Effects of stress
Working memory is impaired by acute and chronic psychological stress. This phenomenon was first discovered in animal studies by Arnsten and colleagues, who showed that stress-induced catecholamine release in the PFC rapidly decreases PFC neuronal firing and impairs working memory performance through feedforward, intracellular signaling pathways. Exposure to chronic stress leads to more profound working memory deficits and additional architectural changes in the PFC, including dendritic atrophy and spine loss, which can be prevented by inhibition of protein kinase C signaling. fMRI research has extended this work to humans, confirming that the working memory reduction caused by acute stress is linked to reduced activation of the PFC and to increased catecholamine levels. Imaging studies of medical students undergoing stressful exams have also shown weakened PFC functional connectivity, consistent with the animal studies. The marked effects of stress on PFC structure and function may help to explain how stress can cause or exacerbate mental illness.
Much has been learned over the last two decades about where in the brain working memory functions are carried out. Much less is known about how the brain accomplishes short-term maintenance and goal-directed manipulation of information. The persistent firing of certain neurons in the delay period of working memory tasks shows that the brain has a mechanism for keeping representations active without external input.
Keeping representations active, however, is not enough if the task demands maintaining more than one chunk of information. In addition, the components and features of each chunk must be bound together to prevent them from being mixed up. For example, if a red triangle and a green square must be remembered at the same time, one must make sure that “red” is bound to “triangle” and “green” is bound to “square”. One way of establishing such bindings is by having the neurons that represent features of the same chunk fire in synchrony, and those that represent features belonging to different chunks fire out of sync. In the example, neurons representing redness would fire in synchrony with neurons representing the triangular shape, but out of sync with those representing the square shape. So far, there is no direct evidence that working memory uses this binding mechanism, and other mechanisms have been proposed as well. It has been speculated that the synchronous firing of neurons involved in working memory oscillates at frequencies in the theta band (4 to 8 Hz). Indeed, the power of theta frequency in the EEG increases with working memory load, and oscillations in the theta band measured over different parts of the skull become more coordinated when the person tries to remember the binding between two components of information.
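The binding-by-synchrony idea can be illustrated with a toy sketch. This is purely conceptual, not a neural simulation, and all names are hypothetical: features belonging to the same chunk are tagged with a common firing phase, features of different chunks with different phases.

```python
import math

# Hypothetical sketch: assign each chunk's features a common firing phase.
# Same phase = "fires in synchrony" = bound together.
chunks = {
    "chunk1": ["red", "triangle"],
    "chunk2": ["green", "square"],
}

phase = {}
for i, features in enumerate(chunks.values()):
    for feature in features:
        phase[feature] = i * math.pi  # features of one chunk share a phase

print(phase["red"] == phase["triangle"])    # True: bound into one chunk
print(phase["green"] == phase["triangle"])  # False: different chunks, out of sync
```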
There is now extensive evidence that working memory is linked to key learning outcomes in literacy and numeracy. A longitudinal study confirmed that a child’s working memory at 5 years old is a better predictor of academic success than IQ.
In a large-scale screening study, one in ten children in mainstream classrooms was identified with working memory deficits. The majority of them performed very poorly academically, independent of their IQ. Without appropriate intervention, these children lag behind their peers. A recent study of 37 school-age children with significant learning disabilities has shown that working memory capacity at baseline measurement, but not IQ, predicts learning outcomes two years later. This suggests that working memory impairments are associated with low learning outcomes and constitute a high risk factor for educational underachievement in children. A similar pattern is evident in children with learning disabilities such as dyslexia, ADHD, and developmental coordination disorder.
In a classroom, common characteristics of working memory impairment include a failure to remember instructions and an inability to complete learning activities. Without early diagnosis, working memory impairment negatively impacts a child’s performance throughout their scholastic career.
However, strategies that target the specific strengths and weaknesses of the student’s working memory profile are available for educators.
Research suggests a close link between a person’s working memory capacity and their ability to control which information from the environment is selectively enhanced or ignored. Such attentional control allows, for example, the voluntary, goal-directed shifting of information processing to particular spatial locations or objects, rather than to stimuli that capture attention merely through their sensory saliency (such as an ambulance siren). Goal-directed attention is driven by “top-down” signals from the PFC that bias processing in posterior cortical areas, whereas saliency capture is driven by “bottom-up” control from subcortical structures and the primary sensory cortices. The ability to override sensory capture of attention differs greatly between individuals, and this difference is closely linked to working memory capacity: the greater a person’s working memory capacity, the greater their ability to resist sensory capture. A limited ability to override attentional capture is likely to result in the unnecessary storage of information in working memory, suggesting not only that having a poor working memory affects attention but that it can also limit the capacity of working memory even further.
Today there are hundreds of research laboratories around the world studying various aspects of working memory. Applications are numerous: using working memory capacity to explain intelligence, success at emotion regulation, and other cognitive abilities; furthering the understanding of autism spectrum disorders, ADHD, and motor dyspraxia; improving teaching methods and educational attainment; and creating artificial intelligence based on the human brain.