The science of emotion has been using folk psychology categories derived from philosophy to search for the brain basis of emotion. The last two decades of neuroscience research, however, have brought us to the brink of a paradigm shift in understanding the workings of the brain, setting the stage to revolutionize our understanding of what emotions are and how they work. In this article, we begin with the structure and function of the brain, and from there deduce what the biological basis of emotions might be. The answer is a brain-based, computational account called the theory of constructed emotion.
Ancient philosophers and physicians believed a human mind to be a collection of mental faculties. They divided the mind, not with an understanding of biology or the brain, but to capture the essence of human nature according to their concerns about truth, beauty and ethics. The faculties in question have morphed over the millennia, but generally speaking, they encompass mental categories for thinking (cognitions), feeling (emotions) and volition (actions, and in more modern versions, perceptions). These mental categories symbolize a cherished narrative about human nature in Western civilization: that emotions (our inner beast) and cognitions (evolution’s crowning achievement) battle or cooperate to control behavior.
The classical view of emotion (Figure 1) was forged in these ancient ideas. Affective neuroscience takes its inspiration from this faculty-based approach. Scientists begin with emotion concepts that are most recognizably English, such as anger, sadness, fear, and disgust, and search for their elusive biological essences (i.e. their neural signatures or fingerprints), usually in subcortical regions. This inductive approach assumes that the emotion categories we experience and perceive as distinct must also be distinct in nature.
Figure 1: The classical view of emotion. The classical view of emotion includes basic emotion theories, causal appraisal theories, and theories of emotion that rely on black-box functionalism. Each emotion faculty is assumed to have its own innate ‘essence’ that distinguishes it from all other emotions. This might be a Lockean essence (an underlying causal mechanism that all instances of an emotion category share, making them that kind of emotion and not some other kind of emotion, depicted by the circles in the figure). Lockean essences might be biological, such as a set of dedicated neurons, or psychological, such as a set of evaluative mechanisms called ‘appraisals’. An emotion category is usually assumed to have a Platonic essence [a physical fingerprint that instances of that emotion share, but that other emotions do not, such as a set of facial movements (an ‘expression’), a pattern of autonomic nervous system activity, and/or a pattern of appraisals]. Of course, no one is expecting complete invariance, but it is assumed that instances of a category are similar enough to be easily diagnosed as the same emotion using objective (perceiver-independent) measures alone. (A) is adapted from Davis (1992). (B) is adapted from Anderson and Adolphs (2014). (C) is adapted from Barrett (2006a), which reviews the growing evidence that contradicts the classical view of emotion.
A brain is a network of billions of communicating neurons, bathed in chemicals called neurotransmitters, which permit neurons to pass information to one another. The firing of a single neuron (or a small population of neurons) represents the presence or absence of some feature at a moment in time. However, a given neuron (or group of neurons) represents different features from moment to moment because many neurons synapse onto one (many-to-one connectivity), and a neuron’s receptive field depends on the information it receives.
Conversely, one neuron also synapses on many other neurons [one-to-many connectivity] to help implement instances of different psychological categories. As a consequence, neurons are multipurpose, even in subcortical regions like the amygdala.
When the brain is viewed as a massive network, rather than a single organ or a collection of ‘mental modules’, it becomes apparent that this one anatomic structure of neurons can create an astounding number of spatiotemporal patterns, making the brain a network of high complexity.
Natural selection favors high-complexity systems because they can reconfigure themselves into a multitude of different states.
The brain achieves complexity through ‘degeneracy’, the capacity for dissimilar representations (e.g. different sets of neurons) to give rise to instances of the same category (e.g. anger) in the same context (i.e. many-to-one mappings of structure to function).
Degeneracy is ubiquitous in biology, from the workings inside a single cell to distributed brain networks. Natural selection favors systems with degeneracy because they are high in complexity and robust to damage.
Degeneracy explains why Roger, the patient who lost his limbic circuitry to herpes simplex type I encephalitis, still experiences emotions and why monozygotic twins with fully calcified basolateral sectors of the amygdala [due to Urbach-Wiethe disease (UWD)] have markedly different emotional capacity, despite genetic and environmental similarity.
Degeneracy also explains how a characteristic can be highly heritable even without a single set of necessary and sufficient genes.
In emotion research, degeneracy means that instances of an emotion (e.g. fear) are created by multiple spatiotemporal patterns in varying populations of neurons. Therefore, it is unlikely that all instances of an emotion category share a set of core features (e.g. a single facial expression, autonomic pattern or set of neurons).
This observation is an example of population thinking, pioneered in Darwin’s On the Origin of Species. By observing the natural world, Darwin realized that biological categories, such as a species, are conceptual categories (highly variable instances, grouped together by ‘a goal’ rather than by similar features or a single, shared underlying cause).
My hypothesis, following Darwin’s insight, is that fear (or any other emotion) is a ‘category’ that is populated with highly variable instances.
The summary representation of any emotion category is an abstraction that need not exist in nature; this holds for emotion concepts and categories in particular, and for concepts and categories more generally.
The fact that human brains effortlessly and automatically construct such representations helps to explain why scientists continue to believe in the classical view and even propose it as an innovation.
A brain did not evolve for rationality, happiness or accurate perception. All brains accomplish the same core task: to efficiently ensure resources for physiological systems within an animal’s body (i.e. its internal milieu) so that an animal can grow, survive and reproduce. This balancing act is called ‘allostasis’.
Growth, survival and reproduction (and therefore gene transmission) require a continual intake of metabolic and other biological resources. Metabolic and other expenditures are required to plan and execute the physical movements necessary to acquire those resources in the first place (and to protect against threats and dangers).
Allostasis is not a condition of the body, but a process for how the brain regulates the body according to costs and benefits; ‘efficiency’ requires the ability to anticipate the body’s needs and satisfy them before they arise.
An animal thrives when it has sufficient resources to explore the world, and to consolidate the details of experience within the brain’s synaptic connections, making those experiences available to guide later decisions about future expenditures and deposits. Too much of a resource (e.g. obesity in mammals) or not enough is suboptimal.
Prolonged imbalances can lead to illness that remodels the brain and the sympathetic nervous system, with corresponding behavior changes.
Whatever else your brain is doing—thinking, feeling, perceiving, emoting—it is also regulating your autonomic nervous system, your immune system and your endocrine system as resources are spent in seeking and securing more resources.
All animal brains operate in the same manner (i.e. even insect brains coordinate visceral, immune and motor changes).
This regulation helps explain why, in mammals, the regions that are responsible for implementing allostasis (the amygdala, ventral striatum, insula, orbitofrontal cortex, anterior cingulate cortex, medial prefrontal cortex (mPFC), collectively called ‘visceromotor regions’) are usually assumed to contain the circuits for emotion.
In fact, many of these visceromotor regions are some of the most highly connected regions in the brain, and they exchange information with midbrain, brainstem, and spinal cord nuclei that coordinate autonomic, immune, and endocrine systems with one another, as well as with the systems that control skeletomotor movements and that process sensory inputs.
Therefore these regions are clearly multipurpose when it comes to constructing the mental events that we group into mental categories (see Figure 2).
Figure 2: Hubs in the human brain. (A) Hubs of the rich club, adapted from van den Heuvel and Sporns (2013). These regions are strongly interconnected with one another and it is proposed that they integrate information across the brain to create large-scale patterns of information flow (i.e. synchronized activity; van den Heuvel and Sporns, 2013). They are sometimes referred to as convergence or confluence zones (e.g. Damasio, 1989; Meyer and Damasio, 2009). (B) Results of a forward inference analysis, revealing ‘hot spots’ in the brain that show a better than chance increase in BOLD signal across 5633 studies from the Neurosynth database. Activations are thresholded at FWE P < 0.05. Limbic regions (i.e. agranular/dysgranular with descending projections to visceromotor control nuclei) include the cingulate cortex [midcingulate cortex (MCC), pregenual anterior cingulate cortex (pgACC)], ventromedial prefrontal cortex (vmPFC), supplementary motor and premotor areas (SMA and PMC), medial temporal lobe, the anterior insula (aINS) and ventrolateral prefrontal cortex (vlPFC) (e.g. Carrive and Morgan, 2012; Bar et al., 2016); for a discussion and additional references, see (Kleckner et al., in press). AG, angular gyrus; MC, motor cortex.
For a brain to effectively regulate its body in the world, it runs an internal model of that body in the world.
In psychology, we refer to this modeling as ‘embodied simulation’ (e.g. see Figure 3).
An internal model is a metabolic investment, implemented by intrinsic activity that, in humans, accounts for roughly 20% of total energy consumed. Given these considerations, modeling the world ‘accurately’ in some detached, disembodied manner would be metabolically reckless. Instead, the brain models the world from the perspective of its body’s physiological needs.
As a consequence, a brain’s internal model includes not only the relevant statistical regularities in the extrapersonal world, but also the statistical regularities of the internal milieu. Collectively, the representation and utilization of these internal sensations is called ‘interoception’.
Recent research suggests that interoception is at the core of the brain’s internal model and arises from the process of allostasis. Interoceptive sensations are usually experienced as lower dimensional feelings of affect. As such, the properties of affect—valence and arousal—are basic features of consciousness that, importantly, are not unique to instances of emotion.
All animals run an internal model of their world for the purpose of allostasis (i.e. the notion of an internal model is species-general). Even single-celled organisms that lack a brain learn, remember, make predictions, and forage in service to allostasis.
The content of any internal model is species-specific, however, including only the parts of the animal’s physical surroundings that its brain has judged relevant for growth, survival and reproduction (i.e. a brain creates its affective niche in the present based on what has been relevant for allostasis in the past). Everything else is an extravagance that puts energy regulation at risk.
As an animal’s integrated physiological state changes constantly throughout the day, its immediate past determines the aspects of the sensory world that concern the animal in the present, which in turn influences what its niche will contain in the immediate future.
This observation prompts an important insight: neurons do not lie dormant until stimulated by the outside world, denoted as stimulus→response.
Ample evidence shows that ongoing brain activity influences how the brain processes incoming sensory information, and that neurons fire intrinsically within large networks without any need for external stimuli. The implications of these insights are profound: namely, it is very unlikely that perception, cognition, and emotion are localized in dedicated brain systems, with perception triggering emotions that battle with cognition to control behavior. This means classical accounts of emotion, which rely on this S→R narrative, are highly doubtful.
An increasingly popular hypothesis is that the brain’s simulations function as Bayesian filters for incoming sensory input, driving action and constructing perception and other psychological phenomena, including emotion.
Simulations are thought to function as prediction signals (also known as ‘top-down’ or ‘feedback’ signals, and more recently as ‘forward’ models) that continuously anticipate events in the sensory environment.
This hypothesis is variously called predictive coding, active inference, or belief propagation.
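The core computational loop these terms name can be sketched very simply. The following is a minimal, hypothetical illustration (not the brain's actual algorithm, and all names are invented for exposition): a one-dimensional internal model issues a prediction, compares it against incoming input, and uses the resulting prediction error to update itself, so that subsequent predictions anticipate the input more closely.

```python
# Minimal, hypothetical sketch of predictive coding in one dimension.
# The internal model predicts each sensory value; the prediction error
# (input minus prediction) is the only signal used to update the model.
def predictive_coding(inputs, learning_rate=0.5):
    estimate = 0.0  # the model's current belief about the signal
    errors = []
    for x in inputs:
        prediction = estimate               # simulation: anticipate the input
        error = x - prediction              # prediction error ('bottom-up' signal)
        estimate += learning_rate * error   # update the internal model
        errors.append(abs(error))
    return estimate, errors

# With a steady input, prediction error shrinks toward zero as the
# model learns to anticipate the signal before it arrives.
estimate, errors = predictive_coding([1.0] * 10)
```

The point of the sketch is the direction of information flow: the model leads with a prediction, and the world supplies only corrections.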
Without an internal model, the brain cannot transform flashes of light into sights, chemicals into smells and variable air pressure into music. You’d be experientially blind.
Thus, simulations are a vital ingredient to guide action and construct perceptions in the present.
They are embodied, whole brain representations that anticipate (i) upcoming sensory events both inside the body and out as well as (ii) the best action to deal with the impending sensory events. Their consequence for allostasis is made available in consciousness as affect.
I hypothesize that, using past experience as a guide, the brain prepares multiple competing simulations that answer the question, ‘what is this new sensory input most similar to?’.
Similarity is computed with reference to the current sensory array and the associated energy costs and potential rewards for the body. That is, a simulation is a partially completed pattern that can classify (categorize) sensory signals to guide action in the service of allostasis.
Each simulation has an associated action plan. Using Bayesian logic, a brain uses pattern completion to decide among simulations and implement one of them, based on predicted maintenance of physiological efficiency across multiple body systems (e.g. need for glucose, oxygen, salt etc.).
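The selection step described above can be sketched in Bayesian terms. The code below is a hypothetical toy model (the simulation names, priors, and likelihood functions are invented for illustration): each candidate simulation carries a prior reflecting its similarity to the current situation, the sensory evidence supplies a likelihood, and the brain-as-Bayesian implements the simulation with the highest posterior.

```python
# Hypothetical sketch of choosing among competing simulations.
# Each simulation pairs a prior (similarity to the current situation)
# with a likelihood function (fit to the incoming sensory evidence).
def select_simulation(simulations, evidence):
    # Unnormalized posterior = prior * likelihood, per Bayes' rule.
    posteriors = {name: prior * likelihood(evidence)
                  for name, (prior, likelihood) in simulations.items()}
    total = sum(posteriors.values())
    posteriors = {k: v / total for k, v in posteriors.items()}  # normalize
    winner = max(posteriors, key=posteriors.get)                # pattern completion
    return winner, posteriors

# Two invented candidate simulations, each with an associated action plan.
simulations = {
    "approach": (0.7, lambda e: 0.9 if e > 0 else 0.1),
    "withdraw": (0.3, lambda e: 0.2 if e > 0 else 0.8),
}
winner, posteriors = select_simulation(simulations, evidence=1.0)
```

In the real hypothesis, of course, the "likelihoods" would be grounded in predicted allostatic costs and benefits across body systems, not a single scalar.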
From this perspective, unanticipated information from the world (prediction error) functions as feedback for embodied simulations (also known as ‘bottom-up’ or, confusingly, ‘feedforward’ signals).
Error signals track the difference between the predicted sensations and those that are incoming from the sensory world (including the body’s internal milieu).
Once these errors are minimized, simulations also serve as inferences about the causes of sensory events and plans for how to move the body (or not) to deal with them.
By modulating ongoing motor and visceromotor actions to deal with upcoming sensory events, a brain infers their likely causes.
In predictive coding, as we will see, sensory predictions arise from motor predictions; simulations arise as a function of visceromotor predictions (to control your autonomic nervous system, your neuroendocrine system, and your immune system) and voluntary motor predictions, which together anticipate and prepare for the actions that will be required in a moment from now.
These observations reinforce the idea that the stimulus→response model of the mind is incorrect.
For a given event, perception follows (and is dependent on) action, not the other way around.
Therefore, all classical theories of emotion are called into question, even those that explain emotion as iterative stimulus→response sequences.
Figure 3: Neural activity during simulation. N = 16 (data from Wilson-Mendenhall et al., 2013). Participants listened with eyes closed to multimodal descriptions rich in sensory details and imagined each real-world scenario as if it were actually happening to them (i.e. the experiences were high in subjective realism). Contrast presented is scenario immersion > resting baseline; maps are FDR corrected P < 0.05. Left image, x = 1; right image, x = −42. Heightened neural activity in primary visual cortex (not labeled), somatosensory cortex (SSC), and MC during scenario immersion replicated prior simulation research (McNorgan, 2012) and established the validity of the paradigm. Notice that simulation was associated with an increase in BOLD response within primary interoceptive cortex (i.e. the pINS), in the sensory integration network of lateral orbitofrontal cortex (lOFC) (Ongur et al., 2003) and in the thalamus; increased BOLD responses were also seen, as expected, in limbic and paralimbic regions such as the vmPFC, the aINS, the temporal pole (TP), SMA and vlPFC, as well as in the hypothalamus and the subcortical nuclei that control the internal milieu. PAG, periaqueductal gray; PBN, parabrachial nucleus.
The mechanistic details of predictive coding provide yet another deep insight: a brain implements its internal model with ‘concepts’ that ‘categorize’ sensations to give them meaning.
Predictions are concepts (see Figure 4).
Completed predictions are categorizations that maintain physiological regulation, guide action and construct perception. The meaning of a sensory event includes visceromotor and motor action plans to deal with that event.
As detailed in Figure 5, meaning does not trigger action, but results from it. This makes classical appraisal theories highly doubtful, because they assume that a response derives from a stimulus that is evaluated for its meaning.
Appraisals as descriptions of the world, however, are produced by categorization with concepts.
Figure 4: The brain is a concept generator. (A) Brodmann areas are shaded to depict their degree of laminar organization, including the insula (bottom right). The brain’s computational architecture is depicted (adapted from Barbas, 2015), where prediction signals flow from the deep layers of less granular regions (cell bodies depicted with triangles) to the upper layers of more granular regions; this can also be thought of as concept construction [as described in Barrett (2017)]. I hypothesize that agranular (i.e. limbic) cortices generatively combine past experiences to initiate the construction of embodied concepts; multimodal summaries cascade to sensory and motor systems to create the simulations that will become motor plans and perceptions. Prediction error processing, in turn, is akin to concept learning. The upper layers of cortex compress prediction errors and reduce error dimensionality, eventually creating multimodal summaries, by virtue of a cytoarchitectural gradient: prediction error flows from the upper layers of primary sensory and motor regions (highly granular cortex) populated with many small pyramidal cells with few connections towards less granular heteromodal regions (including limbic cortices) with fewer but larger pyramidal cells having many connections (Finlay and Uchiyama, 2015). (B) Evidence of conceptual processing in the default mode network: multimodal summaries for emotion concepts [adapted from Skerry and Saxe (2015), Figure 1B]; summary representations of sensory-motor properties (color, shape, visual motion, sound and physical manipulation) [Fernandino et al. (2016), Figure 5]; and semantic processing [adapted from Binder and Desai (2011), Figure 2]. (C) Regions that consistently increase activity during emotional experience (green), emotion regulation (blue), and their overlap (red) [as appears in Clark-Polner et al. (2016); adapted from Buhle et al. (2014) and Satpute et al. (2015)].
Overlaps are observed in the aINS, vlPFC, the MCC, SMA and posterior superior temporal sulcus. Studies of emotional experience show consistent increases in activity consistent with manipulating predictions (i.e. in the default mode and salience networks), whereas reappraisal instructions appear to manipulate the modification of those predictions (i.e. the frontoparietal and salience networks). (D) Intensity maps for five emotion categories examined by Wager et al. (2015). Maps represent the expected activations or population centers, given a specific emotion category. Maps also reflect expected co-activation patterns. Notice that population centers for all emotion categories can be found within the default mode and salience networks. These are probabilistic summaries, not brain states for emotion. Adapted from Wager et al. (2015).
Traditionally, a ‘category’ is a population of events or objects that are treated as similar because they all serve a particular goal in some context; a ‘concept’ is the population of representations that correspond to those events or objects.
I hypothesize that in assembling populations of predictions, each one having some probability of being the best fit to the current circumstances (i.e., Bayesian priors), the brain is constructing concepts or what Barsalou refers to as ‘ad hoc’ concepts.
In the language of the brain, a concept is a group of distributed ‘patterns’ of activity across some population of neurons. Incoming sensory evidence, as prediction error, helps to select from or modify this distribution of predictions, because certain simulations will better fit the sensory array (i.e. they will have stronger priors), with the end result that incoming sensory events are categorized as similar to some set of past experiences.
This, in effect, is the original formulation of the conceptual act theory of emotion: the brain uses emotion concepts to categorize sensations to construct an instance of emotion. That is, the brain constructs meaning by correctly anticipating (predicting and adjusting to) incoming sensations.
Sensations are categorized so that they are (i) actionable in a situated way and therefore (ii) meaningful, based on past experience.
When past experiences of emotion (e.g. happiness) are used to categorize the predicted sensory array and guide action, then one experiences or perceives that emotion (happiness).
In other words, an instance of emotion is constructed the same way that all other perceptions are constructed, using the same well-validated neuroanatomical principles for information flow within the brain.
Barbas and colleagues’ structural model of corticocortical connections provides specific hypotheses about how concepts categorize incoming sensory inputs to guide action and create perception, and in doing so fills the computational and neural gaps in my initial theoretical formulation of the theory, providing novel hypotheses about how a brain constructs emotional events.
The first key observation is that prediction signals are carried via ‘feedback’ connections that originate in cortical regions with the least well-developed laminar structure, referred to as ‘agranular’.
Agranular regions are cytoarchitecturally arranged to send but not receive prediction signals within the cerebral cortex. Another name for agranular cortices is ‘limbic’.
Limbic cortices, such as the anterior cingulate cortex and the ventral portion of the anterior insula (aINS), allostatically control physiology by relaying descending prediction signals to the internal milieu via a system of subcortical regions, including the central nucleus of the amygdala, the ventral and dorsal striatum, and the central pattern generators across the hypothalamus, the parabrachial nucleus, the periaqueductal grey, and the solitary nucleus (see Figure 5A and B).
Cortical regions with a dysgranular structure, which are referred to as limbic or paralimbic, also issue descending prediction signals to the body’s internal milieu [e.g. midcingulate cortex (MCC), mPFC, ventrolateral prefrontal cortex (vlPFC), premotor cortex (PMC), etc.; see Figure 5A and B]. My hypothesis is that these ‘visceromotor’ regions of the brain that are responsible for implementing allostasis, and that are usually assigned an emotional function, are ‘driving’ the prediction signals, i.e. the ‘concepts’, that constitute the brain’s internal model, in conjunction with the hippocampus.
Figure 5: A depiction of predictive coding in the human brain. (A) Key limbic and paralimbic cortices (in blue) provide cortical control of the body’s internal milieu. Primary MC is depicted in red, and primary sensory regions are in yellow. For simplicity, only primary visual, interoceptive and somatosensory cortices are shown; subcortical regions are not shown. (B) Limbic cortices initiate visceromotor predictions to the hypothalamus and brainstem nuclei (e.g. PAG, PBN, nucleus of the solitary tract) to regulate the autonomic, neuroendocrine, and immune systems (solid lines). The incoming sensory inputs from the internal milieu of the body are carried along the vagus nerve and small diameter C and Aδ fibers to limbic regions (dotted lines). Comparisons between prediction signals and ascending sensory input result in prediction error that is available to update the brain’s internal model. In this way, prediction errors are learning signals and therefore adjust subsequent predictions. (C) Efferent copies of visceromotor predictions are sent to MC as motor predictions (solid lines) and prediction errors are sent from MC to limbic cortices (dotted lines). (D) Sensory cortices receive sensory predictions from several sources. They receive efferent copies of visceromotor predictions (black lines) and efferent copies of motor predictions (red lines). Sensory cortices with less well developed lamination (e.g. primary interoceptive cortex) also send sensory predictions to cortices with more well-developed granular architecture (e.g. in this figure, somatosensory and primary visual cortices, gold lines). For simplicity’s sake, prediction errors are not depicted in panel D.
sgACC, subgenual anterior cingulate cortex; vmPFC, ventromedial prefrontal cortex; pgACC, pregenual anterior cingulate cortex; dmPFC, dorsomedial prefrontal cortex; MCC, midcingulate cortex; vaIns, ventral anterior insula; daIns, dorsal anterior insula (including ventrolateral prefrontal cortex); SMA, supplementary motor area; PMC, premotor cortex; m/pIns, mid/posterior insula (primary interoceptive cortex); SSC, somatosensory cortex; V1, primary visual cortex; MC, motor cortex (for relevant neuroanatomy references, see Kleckner et al., in press).
A concept is not only the descending prediction signals that control the viscera. It also includes the efferent copies of those signals that cascade to primary motor cortex (MC) as skeletomotor prediction signals, as well as to all primary sensory cortices as sensory prediction signals (see Figure 5C and D, respectively).
Following the evidence for how the cytoarchitectural gradients in the cortical sheet predict information flow across cortical regions, prediction signals flow from deep layers of limbic cortices and terminate in the upper layers of cortical regions with more developed (i.e. more granular) structure, such as gustatory and olfactory cortex, primary MC, primary interoceptive cortex, and the primary visual, auditory and somatosensory regions.
Because MC has a laminar organization that is less well developed than primary visual, auditory, somatosensory and interoceptive sensory regions, I hypothesize that MC sends efferent copies to those sensory regions as sensory predictions (see Figure 5D, red lines).
Furthermore, because of their differential laminar development, I hypothesize that primary interoceptive cortex in mid-to-posterior dorsal insula forwards sensory predictions to visual, auditory and somatosensory cortices (propagating across either a single or multiple synapses; Figure 5D, gold lines).
The skeletomotor prediction signals prepare the body for movement, the interoceptive prediction signals initiate a change in affect (i.e. the expected sensory consequences of allostatic changes within the body’s internal milieu), and the extrapersonal sensory prediction signals prepare upcoming perceptions.
This hypothesis is consistent not only with over three decades of tract tracing studies in non-human animals, but also with engineering design principles (i.e. compute locally, and relay only the information that is needed to assemble a larger pattern).
Predictions literally change the firing of primary sensory and motor neurons, even though the incoming sensory input has not yet arrived (and may never arrive). Accordingly, all action and perception are created with concepts.
All concepts contribute to allostasis and represent changes in affect, not just those that construct the events that feel affectively intense or are created with emotion concepts.
To consider how this works, try this thought experiment: in the past, you have experienced diverse instances of happiness, maybe lying outdoors on a sunny day, finishing a strenuous workout, hugging a close friend, eating a piece of delectable chocolate or winning a competition.
Each instance is different from every other, and when the brain creates a concept of happiness to categorize and make sense of the upcoming sensory events, it constructs a population of simulations (as potential actions and perceptions) according to the rules of Bayes’ Theorem, whose priors reflect their similarity to the current situation (before the evidence is taken into account).
The similarity need not be perceptual—it can be goal-based. So the brain constructs an on-line concept of happiness, not in absolute terms, but with reference to a particular goal in the situation (to be with friends, to enjoy a meal, to accomplish a task), all in the service of allostasis.
This implies that ‘happiness’ has a specific meaning, but its specific meaning changes from one instance to the next.
As prediction signals cascade across the synapses of a brain, incoming sensory signals arriving to the brain (i.e. from the external environment and the internal periphery) simultaneously allow for computations of prediction error that are encoded to update the internal model (correcting visceromotor and motor action plans, as well as sensory representations; see Figure 5, dotted lines).
Viscerosensory prediction errors arise from physiologic changes within the internal milieu and ascend via vagal and small diameter afferents in the dorsal horn of the spinal cord, through the nucleus of the solitary tract, the parabrachial nucleus, the periaqueductal gray and finally to the ventral posterior thalamus, before arriving in granular layer IV of the primary interoceptive insular cortex.
Notice that, in the context of this framework, perception (i.e. the ‘meaning’ of sensory inputs) is constructed with reference to allostasis, and sensory prediction errors are treated, at a very basic level, as information that guides a ‘predicted’ visceromotor and motor action plan.
Prediction errors also arise within the amygdala, the basal ganglia, and the cerebellum and are forwarded to the cortex to correct its internal model.
I hypothesize that information from the amygdala to the cortex is not ‘emotional’ per se, but signals uncertainty about the predicted sensory input (via the basolateral complex) and helps to adjust allostasis (via the central nucleus) as a result.
The arousal signals that are associated with increases in amygdala activity can be considered a learning signal. Similarly, prediction errors from the ventral striatum to the cortex (referred to as ‘reward prediction errors’) convey information about sensory inputs that impact allostasis more than expected (i.e. that this information should be encoded and consolidated in the cortex, and acted upon in the moment).
Dopamine is associated with engaging in vigorous action and learning that is necessary to achieve the rewards that maintain efficient allostasis (or restore it in the event of disruption), rather than playing a necessary or sufficient role in rewards themselves.
Other neuromodulators, such as opioids, seem to be more intrinsic to reward in that regard.
The cerebellum models prediction errors from the periphery and relays them to the cortex to modify motor predictions [i.e. it predicts the sensory consequences of a motor command faster than actual sensory feedback can arrive, helping the cortex discount the sensory consequences of one’s own movements].
The same may be true for visceromotor predictions, given the connectivity between the cerebellum and the cingulate cortex, hypothalamus and amygdala.
This would give the cerebellum a major role in allostasis, concept generation, and the construction of emotion.
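The forward-model role attributed to the cerebellum here can be sketched as follows. The linear "gain" standing in for the learned body model, and all the numbers, are toy assumptions rather than cerebellar physiology.

```python
import numpy as np

def forward_model(motor_command, gain):
    """Predict the sensory consequences of one's own movement from an
    efference copy of the motor command (toy linear body model)."""
    return gain * motor_command

def sensory_error(actual_input, motor_command, gain):
    """Residual sensory input after self-generated consequences are
    cancelled out; only externally caused input should remain."""
    return actual_input - forward_model(motor_command, gain)

motor_command = np.array([1.0, 2.0])
gain = 0.5
external_event = np.array([0.0, 0.3])   # input NOT caused by our own movement
actual_input = forward_model(motor_command, gain) + external_event
residual = sensory_error(actual_input, motor_command, gain)
```

Subtracting the predicted consequences of one's own movement leaves only the externally caused component for the cortex to explain.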
A brain implements an internal model of the world with concepts because it is metabolically efficient to do so. Even before birth, a brain begins to build its internal model by processing prediction error from the body and the world.
Prediction errors (i.e. unanticipated sensory inputs) cascade in a feedforward cortical sweep, originating in the upper layers of cortices with more developed laminar organization and terminating in the deep layers of cortices with less well-developed lamination.
As information flows from sensory regions (whose upper layers contain many smaller pyramidal neurons with fewer connections) to limbic and other heteromodal regions in frontal cortex (whose upper layers contain fewer but larger pyramidal neurons with many more connections, see Figure 4A), it is compressed and reduced in dimensionality.
This dimension reduction allows the brain to represent a lot of information with a smaller population of neurons, reducing redundancy and increasing efficiency, because smaller populations of neurons summarize the statistical regularities in the spiking patterns of larger populations within the sensory and motor regions.
Additional efficiency is achieved because conceptually similar representations reuse neural populations during simulation. As a result, different predictions are separable, but are not spatially separate (i.e. multimodal summaries are organized in a continuous neural territory that reflects their similarity to one another).
Therefore, the hypothesis is that all new learning (e.g. the processing of prediction error) is concept learning, because the brain is condensing redundant firing patterns into more efficient (and cost-effective) multimodal summaries.
This information is available for later use by limbic cortices as they generatively initiate prediction signals, constructed as low-dimensional, multimodal summaries (i.e. ‘abstractions’). These summaries, consolidated from prior encoding of prediction errors, become more detailed and particular as they propagate out to more architecturally granular sensory and motor regions, completing embodied concept generation.
In a keynote address in 2006, I first proposed that several of the brain’s intrinsic networks (what would come to be called the default mode, salience, and frontoparietal control networks) are domain-general or multi-use networks that are involved in constructing emotional episodes.
Building on the findings so far, as well as the anatomical distribution of limbic cortices within the brain (see Figure 5), I have refined these hypotheses (see Figure 6).
I hypothesize, as others do, that the default mode network is necessary for the brain’s internal model. Regardless of the other mental categories mapped to default mode network activity, the simulations initiated within this network cascade to create concepts that eventually categorize sensory inputs and guide movements in the service of allostasis. This hypothesis is partially consistent with the hypothesis that the default mode network represents semantic concepts (see Figure 4B).
I hypothesize that the default mode network hosts ‘part’ of these conceptual patterns, but simulations are more than just multimodal sensorimotor summaries; they are fully embodied brain states.
They emerge as default mode summaries cascade out to primary sensory and motor regions to become detailed and particularized [i.e. to modulate the spiking patterns of sensory and motor neurons].
Figure 6: A large-scale system for allostasis and interoception in the human brain. (A) The system implementing allostasis and interoception is composed of two large-scale intrinsic networks (shown in red and blue) that are interconnected by several hubs (shown in purple; for coordinates, see Kleckner et al., in press). Hubs belonging to the ‘rich club’ are labeled. These maps were constructed with resting state BOLD data from 280 participants, binarized at p < 10−5, and then replicated on a second sample of 270 participants. vaIns, ventral anterior insula; MCC, midcingulate cortex; PHG, parahippocampal gyrus; PostCG, postcentral gyrus; PAG, periaqueductal gray; PBN, parabrachial nucleus; NTS, the nucleus of the solitary tract; vStriat., ventral striatum; Hypothal, hypothalamus. (B) Reliable subcortical connections, thresholded P < 0.05 uncorrected, replicated in 270 participants.
I further hypothesize that the salience network tunes the internal model by predicting which prediction errors to pay attention to [i.e. those errors that are likely to be allostatically relevant and therefore worth the cost of encoding and consolidation; called precision signals].
Specifically, I hypothesize that precision signals optimize the sampling of the sensory periphery for allostasis, and they are sent to every sensory system in the brain. They directly alter the gain on neurons that compute prediction error from incoming sensory input (i.e. they apply attention) to signal the degree of confidence in the predictions (i.e. the priors), confidence in the reliability or quality of incoming sensory signals, and/or predicted relevance for allostasis.
Unexpected sensory inputs that are anticipated to have resource implications (i.e. are likely to impact survival, offering reward or threat, or are of uncertain value) will be treated as ‘signal’ and learned (i.e. encoded) to better predict energy needs in the future, with all other prediction error treated as ‘noise’ and safely ignored.
Limbic regions within the salience network may also indirectly signal the precision of incoming sensory inputs via their modulation of the reticular nucleus that encircles the thalamus and controls the sensory input that reaches the cortex via thalamocortical pathways.
My hypothesis, then, is that cortical limbic regions within the salience network are at the core of the brain’s ability to adjust its internal model to the conditions of the sensory periphery, again in the service of allostasis (e.g. see Figure 6). This is consistent with the salience network’s role in attention regulation.
In addition, I hypothesize that neurons within the frontoparietal control network sculpt and maintain simulations for longer than the several hundred milliseconds it takes to process imminent prediction errors, and that they may also help to suppress or inhibit simulations whose priors are very low (because those priors are influenced not only by the current sensory array, but also by what the brain predicts for the future). It pays to be flexible, to be able to construct and use patterns that extend over longer periods of time (different animals have different timescales that are relevant to their behavioral repertoire and ecological niche).
It is also valuable to learn on a single trial, without being guided by recurring statistical regularities in the world, particularly if you reside in a quickly changing environment. As a prediction generator, the brain constructs simulations (as concepts) across many different timescales (i.e. integrating information across the few moments that constitute an event, but also across longer time frames at various scales).
Therefore, a brain may be pattern matching to categorize not only on short processing timescales of milliseconds but also on much longer timescales (seconds to minutes to hours or even longer).
The lesson here, for the science of emotion, is that the brain does not process individual stimuli—it processes events across temporal windows.
Emotion perception is event perception, not object perception.
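One crude way a system could summarize the same input stream over both short and long temporal windows is to maintain several leaky integrators with different decay constants. The decay values and the toy signal below are arbitrary choices for illustration, not a claim about neural time constants.

```python
import numpy as np

def integrate_timescales(signal, decays):
    """Run one exponential moving average per decay constant over the
    same input stream; slower decays integrate over longer windows."""
    states = np.zeros(len(decays))
    for x in signal:
        states = decays * states + (1 - decays) * x
    return states

signal = np.concatenate([np.zeros(100), np.ones(5)])  # a brief recent event
decays = np.array([0.5, 0.99])                        # fast vs. slow timescale
fast, slow = integrate_timescales(signal, decays)
```

The fast integrator tracks the recent event almost completely, while the slow integrator barely registers it — the same input, represented differently at different temporal windows.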
Now we can see how a multi-level, constructionist view like the theory of constructed emotion offers an approach to understanding the brain basis of emotion that is consistent with emerging computational and evolutionary biological views of the nervous system.
A brain can be thought of as running an internal model that controls central pattern generators in the service of allostasis. An internal model runs on past experiences, implemented as concepts.
A concept is a collection of embodied, whole brain representations that predict what is about to happen in the sensory environment, what the best action is to deal with impending events, and their consequences for allostasis (the latter is made available to consciousness as affect).
Unpredicted information (i.e. prediction error) is encoded and consolidated whenever it is predicted to result in a physiological change in the state of the perceiver (i.e. whenever it impacts allostasis).
Once prediction error is minimized, a prediction becomes a perception or an experience. In doing so, the prediction explains the cause of sensory events and directs action; i.e. it categorizes the sensory event. In this way, the brain uses past experience to construct a categorization [a situated conceptualization] that best fits the situation to guide action.
The brain continually constructs concepts and creates categories to identify what sensory inputs are, to infer a causal explanation for them, and to drive action plans for what to do about them.
When the internal model creates an emotion concept, the eventual categorization results in an instance of emotion.
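Categorization in this account amounts to selecting the concept whose prediction best fits — i.e. leaves the least prediction error for — the current sensory array. A minimal prototype-matching sketch follows; the concept names and all numbers are fabricated for the example, and real situated conceptualizations are of course far richer than stored vectors.

```python
import numpy as np

# Toy 'concepts': stored multimodal prototypes (entirely made-up numbers)
concepts = {
    "concept_a": np.array([1.0, 0.0, 0.5]),
    "concept_b": np.array([-1.0, 1.0, 0.0]),
}

def categorize(sensory_input, concepts):
    """Pick the concept whose prediction best fits the input, i.e. the
    one that would leave the smallest squared prediction error."""
    errors = {name: np.sum((sensory_input - prototype) ** 2)
              for name, prototype in concepts.items()}
    return min(errors, key=errors.get)

winner = categorize(np.array([0.9, 0.1, 0.4]), concepts)
```

In the theory's terms, the winning prediction "explains" the sensory event; if the winning concept is an emotion concept, the categorization is an instance of emotion.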
This hypothesis is consistent with the conceptual innovations in Darwin’s On the Origin of Species. Some of the psychological constructs used in the theory of constructed emotion are species-general (e.g. allostasis, interoception, affect, and concept), while others require the capacity for certain types of concepts and are more species-specific (e.g. emotion concepts).
It is necessary to understand which constructs are species-general vs. species-specific to solve the puzzle of the biological basis of emotion.
Mistaking one for the other is a category error that interferes with scientific progress.
Constructionism, as a scientific paradigm, makes different assumptions than the classical paradigm, asks different questions, and requires different methods and analytic procedures than those of the classical view (whose methods are ill-suited to testing it). As a consequence, constructionism is often profoundly misunderstood.
With these observations in mind, here is a partial list of claims I am not making, to avoid further confusion:
• I am not saying that emotions are illusions. I’m saying that emotion categories don’t have distinct, dedicated neural essences. Emotion categories are as real as any other conceptual categories that require a human perceiver for existence, such as ‘money’ (i.e. the various objects that have served as currency throughout human history share no physical similarities; Barrett, 2012, 2017).
• I am not saying that all neurons do everything (a.k.a. equipotentiality). I am suggesting that a given neuron does more than one thing (has more than one receptive field), and that there are no emotion-specific neurons.
• I am not claiming that networks are Lego blocks with a static configuration and an essential function. I am suggesting that, when it comes to understanding the physical basis of psychological categories, it is necessary to focus on ensembles of neurons rather than individual neurons. A neuron does not function on its own, and many neurons are part of more than one network. Moreover, networks function via degeneracy, meaning that a given network has a repertoire of functional configurations (i.e. functional motifs) that is constrained by its anatomical structure (i.e. its structural motif).
• I am not claiming that subcortical regions are irrelevant to emotion. I hypothesize that an instance of emotion is a brain state that makes the sensory array meaningful, and in so doing engages the pattern generators for whatever actions are functional in the context, given a person’s current state.
• I am not saying that the default mode and salience networks implement allostasis and therefore should not be mapped to other psychological categories. I am claiming that these (and other) domain-general networks can be mapped to many psychological categories at the same time.
• I am not saying that concepts are stored in the default mode network. I’m saying that the default mode network represents efficient, multimodal summaries, from which a cascade of predictions issues through the entire cortical sheet, terminating in primary sensory and motor regions. The whole cascade is an instance of a concept.
• I am not saying that emotions are deliberate, nor denying that automaticity exists. I am saying that in humans, actual executive control (e.g. via the frontoparietal control network in primates) and the experience of feeling in control are not synonymous (Barrett et al., 2004). All animal brains create concepts to categorize sensory inputs and guide action in an obligatory and automatic way, outside of awareness. Automaticity and control are different brain modes (each of which can be achieved with a variety of network configurations), not two battling brain systems.
• I am not saying that non-human animals are emotionless. I’m saying that emotion is perceiver-dependent, so questions about the nature of emotion must include a perceiver. ‘Is the fly fearful?’ is not a scientific question, but ‘Does a human perceive fear in the fly?’ and ‘Does the fly feel fear?’ can be answered scientifically (and the answers are ‘yes’ and ‘no’). Notice that I am not claiming that a fly feels nothing; it may feel affect (Barrett, 2017).
Scientists must abandon essentialism and study emotions in all their variety. We must not merely focus on the few stereotypes that have been stipulated based on a very selective reading of Darwin.
We must assume variability to be the norm, rather than a nuisance to be explained after the fact. It will never be possible to measure an emotion by merely measuring facial muscle movements, changes in autonomic nervous system signals, or neural firing within the periaqueductal gray or the amygdala.
To understand the nature of emotion, we must also model the brain systems that are necessary for making meaning of physical changes in the body and in the world.
This article is a mere sketch of a much larger scientific landscape. The theory of constructed emotion proposes that emotions should be modeled holistically, as whole brain-body phenomena in context.
My key hypothesis is that the dynamics of the default mode, salience and frontoparietal control networks form the computational core of a brain’s dynamic internal working model of the body in the world, entraining sensory and motor systems to create multi-sensory representations of the world at various time scales from the perspective of someone who has a body, all in the service of allostasis.
In other words, allostasis (predictively regulating the internal milieu) and interoception (representing the internal milieu) are at the anatomical and functional core of the nervous system.
These insights offer a range of new hypotheses—e.g. that reappraisal and other regulation processes are accomplished with predictions that categorize sensory inputs and control action with concepts (see Figure 4C).
The theory of constructed emotion also views the distinction between the central and peripheral nervous systems as historical rather than as scientifically accurate. For example, ascending interoceptive signals bring sensory prediction errors from the internal milieu to the brain via lamina I and vagal afferent pathways, and they are anatomically positioned to be modulated by descending visceromotor predictions that control the internal milieu.
This suggests the hypothesis that concepts (i.e. prediction signals) act like a volume dial to influence the processing of prediction errors before they even reach the brain.
This offers new hypotheses about the chronification of pain that treat pain and emotion as two sides of the same coin, rather than as separate phenomena that influence one another.
Emotions are constructions of the world, not reactions to it. This insight is a game changer for the science of emotion. It dissolves many of the debates that remained mired in philosophical confusion, and allows us to better understand the value of non-human animal models, without resorting to the perils of essentialism and anthropomorphism.
It provides a common framework for understanding mental, physical, and neurodegenerative disorders, and collapses the artificial boundaries between cognitive, affective, and social neurosciences.
Ultimately, the theory of constructed emotion equips scientists with new conceptual tools to solve the age-old mysteries of how a human nervous system creates a human mind.
Agranular: Cerebral cortex with the least developed laminar organization involving no definable layer IV, and no clear distinction between the neurons in layers II and III.
Allostasis: Regulating the internal milieu by anticipating physiological needs and preparing to meet them before they arise.
Concept: Traditionally, a category is a group of instances that are similar for some function or purpose; a concept is the mental representations of those category members. In the theory of constructed emotion, a concept is a collection of embodied, whole brain representations that predicts what is about to happen in the sensory environment, what the best action is to deal with these impending events, and their consequences for allostasis.
Degeneracy: The capacity for biologically dissimilar systems or processes to give rise to identical functions. Degeneracy is different from redundancy (which is inefficient and to be avoided).
Dysgranular: Cerebral cortex with a moderately developed laminar organization involving a rudimentary layer IV and better developed layers II and III.
Hub: A group of the brain’s most inter-connected neurons. The hubs with the most dense connections are referred to as ‘rich club’ hubs, and include visceromotor regions, as well as other heteromodal regions. They are thought to function as a high-capacity backbone for synchronizing neural activity, integrating information (and segregating noise) across the entire brain.
Internal Milieu: An integrated sensory representation of the physiological state of the body.
Laminar Organization: The architectural organization of neurons in a cortical column.
Naïve Realism: The belief that one’s senses provide an accurate and objective representation of the world.
Pattern Generators: Groups of neurons (i.e. nuclei) that implement the sequences of actions for coordinated behaviors like feeding, running, and mating. An action is a single movement but a behavior is an event. Pattern generators are in the hypothalamus and down in the brainstem near their effector muscles and organs (Sterling and Laughlin, 2015; Swanson, 2005).
Visceromotor: Internal movements involving autonomic, neuroendocrine, and immune systems.
Salience Network: The salience network is theorized to mediate switching between the default mode network and central executive network.
Lisa Feldman Barrett, The theory of constructed emotion: an active inference account of interoception and categorization, Social Cognitive and Affective Neuroscience, Volume 12, Issue 1, January 2017, Pages 1–23, https://doi.org/10.1093/scan/nsw154