Kumar et al. report findings from an unusual opportunity that arose when a retired London schoolteacher, Sylvia, told her doctors that she was increasingly hearing music, as if it were completely real, in the absence of any source for the music. (People with musical hallucinations are usually psychologically normal — except for the music they are sure someone is playing.) Sylvia volunteered for a study by Kumar et al. that made use of the fact that real music can sometimes quiet the imaginary music, in effect masking the hallucination. Playing Bach for 30 seconds was used to damp down the hallucinations while the teacher’s brain activity was monitored by magnetoencephalography (MEG), and when the real music stopped she reported the strength of the hallucinations as they returned. The brain regions that became more active as the hallucinations returned were the same as those activated by listening to real music. From Zimmer’s review of this work, a suggested model of what is happening:
Our brains… generate predictions about what is going to happen next, using past experiences as a guide. When we hear a sound, for example — particularly music — our brains guess at what it is and predict what it will sound like in the next instant. If the prediction is wrong — if we mistook a teakettle for an opera singer — our brains quickly recognize that we are hearing something else and make a new prediction to minimize the error… People with musical hallucinations often have at least some hearing loss. Sylvia, for example, needed hearing aids after getting a viral infection two decades ago.
The model of our brain as a prediction-generating machine
…could explain why some people with hearing loss develop musical hallucinations. With fewer auditory signals entering the brain, their error detection becomes weaker. If the music-processing brain regions make faulty predictions, those predictions only grow stronger until they feel like reality.
…could explain why real music provides temporary relief for musical hallucinations: the incoming sounds reveal the brain’s prediction errors. And it may also explain why people are prone to hallucinate music, and not other familiar sounds.
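The precision-weighted update at the heart of this predictive-coding account can be sketched in a few lines of Python. This is my own toy illustration, not the authors' model; the gain and precision values are arbitrary and chosen only to show the two regimes.

```python
# Toy sketch of the predictive-coding account of musical hallucination:
# a prediction is corrected by sensory error weighted by the precision
# (reliability) the brain assigns to the incoming signal.

def update_prediction(prediction, sensory_input, precision, gain=1.1):
    """One update step. With high precision the error term dominates and
    pulls the prediction toward the input; with low precision (hearing
    loss) the self-reinforcing gain dominates and the prediction grows."""
    error = sensory_input - prediction
    return gain * prediction + precision * error

# Normal hearing: high precision, so the prediction of phantom music is
# corrected toward the actual (silent) input and dies away.
p = 1.0
for _ in range(20):
    p = update_prediction(p, sensory_input=0.0, precision=0.9)
print(f"high precision: {p:.3f}")  # decays toward silence

# Hearing loss: near-zero precision, so the prediction is unconstrained
# by the input and grows until it "feels like reality".
p = 1.0
for _ in range(20):
    p = update_prediction(p, sensory_input=0.0, precision=0.0)
print(f"low precision:  {p:.3f}")  # grows without correction
```

The same mechanism explains the relief from real music: a loud, reliable input temporarily restores a strong error signal that pulls the runaway prediction back down.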
Here is the Kumar et al. abstract:
The physiological basis for musical hallucinations (MH) is not understood. One obstacle to understanding has been the lack of a method to manipulate the intensity of hallucination during the course of experiment. Residual inhibition, transient suppression of a phantom percept after the offset of a masking stimulus, has been used in the study of tinnitus. We report here a human subject whose MH were residually inhibited by short periods of music. Magnetoencephalography (MEG) allowed us to examine variation in the underlying oscillatory brain activity in different states. Source-space analysis capable of single-subject inference defined left-lateralised power increases, associated with stronger hallucinations, in the gamma band in left anterior superior temporal gyrus, and in the beta band in motor cortex and posteromedial cortex. The data indicate that these areas form a crucial network in the generation of MH, and are consistent with a model in which MH are generated by persistent reciprocal communication in a predictive coding hierarchy.
Judson Brewer and collaborators perform fMRI scans of experienced practitioners of loving kindness meditation, which fosters feelings of selfless love for others. Their abstract (below) notes their observations, but doesn’t emphasize one of their more interesting findings: that the tranquility of selfless love without expectation of reward lowers activation of the areas activated by romantic love, which are the same reward areas activated by cocaine.
Loving kindness is a form of meditation involving directed well-wishing, typically supported by the silent repetition of phrases such as “may all beings be happy,” to foster a feeling of selfless love. Here we used functional magnetic resonance imaging to assess the neural substrate of loving kindness meditation in experienced meditators and novices. We first assessed group differences in blood oxygen level-dependent (BOLD) signal during loving kindness meditation. We next used a relatively novel approach, the intrinsic connectivity distribution of functional connectivity, to identify regions that differ in intrinsic connectivity between groups, and then used a data-driven approach to seed-based connectivity analysis to identify which connections differ between groups. Our findings suggest group differences in brain regions involved in self-related processing and mind wandering, emotional processing, inner speech, and memory. Meditators showed overall reduced BOLD signal and intrinsic connectivity during loving kindness as compared to novices, more specifically in the posterior cingulate cortex/precuneus (PCC/PCu), a finding that is consistent with our prior work and other recent neuroimaging studies of meditation. Furthermore, meditators showed greater functional connectivity during loving kindness between the PCC/PCu and the left inferior frontal gyrus, whereas novices showed greater functional connectivity during loving kindness between the PCC/PCu and other cortical midline regions of the default mode network, the bilateral posterior insula lobe, and the bilateral parahippocampus/hippocampus. These novel findings suggest that loving kindness meditation involves a present-centered, selfless focus for meditators as compared to novices.
Nearly 10 years ago, Vanderbilt University cognitive neuroscientist Randolph Blake and his postdoc Duje Tadin needed to give their study participants the experience of complete darkness. They were testing their new transcranial magnetic stimulator (TMS) and developing protocols for a series of experiments involving the generation of phosphenes—light experienced by subjects when there is none. So the researchers ordered high-end blindfolds, designed to block all light from reaching the eyes.
When the blindfolds arrived, Blake tried one out. “I can’t remember what prompted me to do it, but on a lark, I put them on myself first and waved my hand in front of my eyes,” he recalls, “and had this faint sense that I could see my hand moving.”
Tadin then tried it and had the same experience. The two replicated the mini experiment in the TMS lab, a small, dark room on the sixth floor. And again, both researchers could just barely see their hands through the blindfolds. “You could see this faint shadow, this faint impression of something moving back and forth in rhythm with your motions,” Blake says. But, when Blake waved his hand in front of Tadin’s blindfolded face, Tadin saw nothing. “That got us excited,” Blake says.
The duo traipsed around the building eagerly blindfolding their colleagues and asking them to report what they saw. “About half reported seeing something,” Blake says.
To test what was happening, however, the researchers knew they needed to come up with a better way to characterize what people were actually seeing. “What we discovered was an inherently subjective experience,” says Tadin. “There’s no easy way to ascertain that I’m telling the truth.” Unable to think of a reasonable way to measure the phenomenon, they set the project aside.
A few years later, running his own lab at the University of Rochester, Tadin told the story to graduate student Kevin Dieter, who encouraged Tadin to give the project another shot. They devised a conservative experimental setup in which they attempted to control the subjects’ expectations: they told study participants that one blindfold had little, imperceptible holes that might allow them to see through, while another blindfold would successfully keep out all light. (Both blindfolds were, in fact, totally lightproof.) A subject’s experience with the first blindfold could then guide his expectations for a second trial using the other blindfold. Specifically, if he had seen something with the first blindfold, he would certainly not expect to see anything with the second one. But even under these conditions, nearly 50 percent of subjects reported having at least a “visual sensation of motion” while wearing the second blindfold.
The results hold “implications for how our different sensory systems work together,” Tadin says. Dealing only with subjective reports, however, still made him uneasy. So he turned to an eye-tracking device—used without the blindfolds but in complete darkness—to detect the movement of subjects’ eyes as they viewed the hand they reported seeing. People cannot move their eyes smoothly unless they have a visual target to lock on to, Tadin explains. If they just thought they saw their hand, jerky eye movements should reveal the truth.
To Tadin’s amazement, the eye movements suggested that the visual perception was indeed real: people who reported seeing their hands moving in the dark exhibited eye movements that were twice as smooth as those of subjects who reported seeing nothing (Psychological Science, doi:10.1177/0956797613497968, 2013).
Interestingly, people with synesthesia—who often see letters of the alphabet, numbers, or days of the week in specific colors, or associate particular sounds with visual stimuli—tended to score higher on Tadin’s blindfold experiment in terms of how much they saw. “They were literally off the chart,” he says. One synesthete produced such smooth eye movements that Tadin at first thought the data were erroneous: “Her smooth eye movements were almost perfect.” Research has suggested that synesthetes exhibit higher levels of cross-brain connectivity, which may play a role in the generation of the visual perception as a result of the kinesthetic input.
Regardless of the underlying neural mechanism, Tadin suspects that there are likely other examples of how the senses blend together—in synesthetes and in people with normal sensory experiences. In 2005, for example, when Norimichi Kitagawa at NTT Communication Science Laboratories in Japan and his colleagues recorded the sounds generated inside the ear of a dummy head by brushing the outside of the ear with a paintbrush, then played those sounds to participants who received no ear strokes, many reported feeling a tickling sensation (Japanese Journal of Psychonomic Science, 24:121-22, 2005). “This phenomenon [of ‘seeing’ one’s own movements] may be just the tip of the iceberg,” Tadin says.
Milner (see citation below) reviews the evidence that visual-motor control is not conscious.
Visual perception starts at the back of the brain, in the occipital lobe, and moves forward in the cortex as processing proceeds. There are two tracks along which visual processing proceeds, called the dorsal stream and the ventral stream, and the two streams have few interconnections. The dorsal stream runs from the primary visual cortex to the superior occipito-parietal cortex near the top of the head. The ventral stream runs from the primary visual cortex to the inferior occipito-temporal cortex at the side of the head. Their functions, as far as is known, differ. “The dorsal stream’s principal role is to provide real-time ‘bottom-up’ visual guidance of our movements online. In contrast, the ventral stream, in conjunction with top-down information from visual and semantic memory, provides perceptual representations that can serve recognition, visual thought, planning and memory offline…we have proposed that the visual products of dorsal stream processing are not available to conscious awareness—that they exist only as evanescent raw materials to provide the unconscious moment-to-moment sensory calibration of our movements.”
The researchers used three methods in their studies: patients with lesions in their visual system, patients suffering from visual extinction, and fMRI experiments.
One patient had part of their ventral stream destroyed; they could reach and grasp objects that they were not conscious of. The opposite was true of other patients, with damage to their dorsal streams: they had difficulty grasping objects that they were consciously aware of.
Visual extinction is a form of spatial neglect: the patient fails to detect a stimulus presented on the side of space opposite the brain damage, but only when a stimulus is simultaneously present on the good side. With a carefully arranged experimental setup, a patient with visual extinction was shown to take account of an obstacle they were not conscious of when reaching for an object. Avoiding an obstacle depends on the dorsal stream, because patients with damage to the dorsal stream did not adjust their reaching movements in the presence of obstacles.
There is visual feedback during reaching. “Under normal viewing conditions, the brain continuously registers the visual locations of both the reaching hand and the target, incorporating these two visual elements within a single ‘loop’ that operates like a servomechanism to progressively reduce their mutual separation in space (the ‘error signal’) as the movement unfolds. When the need to use such visual feedback is increased by the occasional introduction of unnoticed perturbations in the location of the target during the course of a reach, a healthy subject will make the necessary adjustments to the parameters of his or her movement quite seamlessly… In contrast, a patient with damage to the dorsal stream was quite unable to take such target changes on board: she first had to complete the reach towards the original location, before then making a post hoc switch to the new target location… It thus seems very likely that the ability to exploit the error signal between hand and target during reaching is dependent on the integrity of the dorsal stream.”
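The servomechanism Milner describes can be sketched as a simple proportional feedback loop. This is my own illustration of the idea, not the paper's model; the gain and positions are arbitrary.

```python
# Sketch of the dorsal stream's servo-like reaching loop: on each time
# step it registers the hand-target separation (the "error signal") and
# moves the hand a fraction of the way to close it.

def reach(target_positions, hand=0.0, gain=0.4):
    """Simulate a reach; the target position may change mid-reach
    (the 'unnoticed perturbation' in the experiments)."""
    trajectory = [hand]
    for target in target_positions:
        error = target - hand   # visual error signal: hand vs. target
        hand += gain * error    # online correction, no restart needed
        trajectory.append(hand)
    return trajectory

# The target jumps from 10 to 12 halfway through the reach; the correction
# is folded seamlessly into the ongoing movement, as in healthy subjects.
traj = reach([10] * 5 + [12] * 5)
print([round(x, 2) for x in traj])
```

The dorsal-stream patient, on this account, lacks the loop itself: she must complete the reach open-loop to the original target and only then re-aim.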
The phenomenon of binocular rivalry, where different images are projected to the two retinas and the subject is alternately conscious of one or the other image, has been studied with fMRI. Which image is currently conscious can be read off from activity in the ventral stream. But the dorsal stream is able to act on information even when it is not being processed by the ventral stream and is therefore not consciously available.
The authors do point out that they are not saying that the dorsal stream plays no role in conscious perception. It may for example have some control over attention.
In the conclusion, they say “according to the model, such ventral-stream processing plays no causal role in the real-time visual guidance of the action, despite our strong intuitive inclination to believe otherwise (what Clark calls ‘the assumption of experienced-based control’). According to the Milner & Goodale model, that real-time guidance is provided through continuous visual monitoring by the dorsal stream of those very same visual inputs that we experience by courtesy of our ventral stream.”
A.D. Milner (2012). Is visual processing in the dorsal stream accessible to consciousness? Proc R Soc B, 2289-2298 DOI: 10.1098/rspb.2011.2663
Musical hallucinations are most commonly found in people who have suffered hearing loss or deafness. But why they happen is unknown. In a new paper in Cortex, British neuroscientists Kumar et al. claim to have found “A brain basis for musical hallucinations”.
Using magnetoencephalography (MEG), the authors investigate brain activity in a patient, a 66-year-old woman who had been hearing phantom ‘piano melodies’ for almost two years, after she had suddenly become partly deaf. She was an amateur keyboard player, and was able to write down the tunes she ‘heard’: the same melody – sometimes a real tune, sometimes ‘made up’ – would repeat for hours at a time, and it could get annoying. However, she had discovered that listening to certain pieces of real music provided temporary relief; the hallucinations would stop during the piece, and only restart after a lag of several seconds.
Kumar et al made use of this fact to compare brain activity when hallucinations were ‘on’ and ‘off’ – they recorded MEG data before and after playing 15 seconds of Bach, one of the hallucination-blocking composers. Immediately after each Bach burst, hallucinations were low, while 60 seconds later they had returned.
Clever… however, there’s a serious problem with this procedure: it can’t separate the effects of hallucinations from the effects of real music having just stopped, nor from the expectation of future real music (the timing of which was predictable).
The obvious solution would have been to also include bursts of some music that didn’t block hallucinations, as a control condition; the patient herself reported that some pieces had no blocking effect. This would dramatically increase the inferences one could draw from the data. Some MEG data from healthy control participants hearing the same music would also help to establish specificity. This limitation isn’t acknowledged.
Anyway, Kumar et al report increased gamma band activity in the left aSTG area, part of the auditory cortex. They say that
The area that shows higher activity during musical hallucination coincides with an area implicated in the normal perception of melody using fMRI.
However, strangely, the actual Bach music did not produce significant changes in activity in this area, or anywhere else in the brain. Only imaginary music caused real brain waves; Kumar et al say that this has been seen in other studies.
Some other changes in the beta frequency band were found in the motor cortex and posteromedial cortex/precuneus. Neither of these is thought of as a ‘music area’. To be honest, I don’t think these results shed much light on the phenomenon.
The second half of the paper is rather different, providing a theoretical overview of musical hallucination. This section could almost be a paper in itself. The authors argue that
Our hypothesis is that peripheral hearing loss reduces the signal-to-noise ratio of incoming auditory stimuli and the brain responds by decreasing sensory precision or post-synaptic gain…
A recurrent loop of communication is thus established which is no longer informed, or entrained, by precise bottom-up sensory prediction errors… it is constrained only by a need to preserve the internal consistency between hierarchical representations of music.
This reciprocal communication between an area in music perception and area/s involved in higher music cognition (motor cortex and precuneus) with no constraint from the sensory input gives rise to musical hallucinations.
Kumar S, Sedley W, Barnes GR, Teki S, Friston KJ, & Griffiths TD (2013). A brain basis for musical hallucinations. Cortex PMID: 24445167
Juvenile mantis shrimp. Image credit: Roy L. Caldwell
Mantis shrimp have a type of vision unlike any other animal on the planet—that much was known. But now scientists have determined, at a cellular level, how it is that these foot-long crustaceans see the world. And it stems from their unique photoreceptors.
In general, photoreceptors absorb light and convert it into electrical signals, which are then sent to the brain for interpretation. Each photoreceptor is specific to a particular wavelength of light, which the brain translates into a color. Your dog has two kinds of photoreceptors: blue and green. You have three: blue, green and red. Our eyes can see these colors and every combination or variation thereof.
Scientists say that in order to see every color under the sun, an animal needs four to seven different types of photoreceptors. Why, then, does the mantis shrimp have a whopping 12 different kinds of photoreceptors in its eyes?
The researchers say it’s because mantis shrimp photoreceptors work in a unique way, completely unlike ours.
Researchers reached this conclusion after playing a reward game with mantis shrimp. They would shine two different colored lights simultaneously at the shrimp. Pinching their claws at the source of one color, let’s say yellow, would earn the shrimp a treat; choosing the blue one meant no treat. This drill was repeated until the shrimp learned to pick the yellow light and did so consistently. Then the researchers began switching up the second color, making the shrimp choose between red and yellow, then orange and yellow, and so on.
Surprisingly, when the colors got too close to one another (e.g., yellow and a shade of orange akin to macaroni and cheese), the shrimp couldn’t tell them apart. Even with their 12 kinds of photoreceptors, the shrimp could only distinguish colors on the light spectrum that were at least 25 nanometers apart. By comparison, humans, with a measly three kinds of photoreceptors, can distinguish colors separated by as little as one nanometer.
From this the researchers deduced that the mantis shrimp’s whole visual system operates differently from our own. As they describe in their paper published in Science today, further investigation showed that the shrimp don’t send visual information to the brain and wait for it to distinguish between subtle color differences like we do. The shrimp just skip this step altogether.
Each of the shrimp’s 12 photoreceptors is essentially tuned to a different sensitivity. Their eyes scan a scene and instantly recognize when something falls into, say, the reddish category, without having to ask the brain whether it’s seeing brick red or scarlet. In the colorful and fast-paced circus that is its coral reef home, avoiding that little bit of processing delay could be the difference between life and death, even for a foot-long crustacean.
The same goes for their dinner. This unique kind of vision could be a mantis shrimp’s hidden weapon in capturing prey—well, alongside its ability to swing its front claws at the speed of a .22 caliber bullet to bludgeon, spear or dismember an unwitting victim.
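The proposed scheme amounts to binning wavelengths by nearest receptor rather than comparing receptor outputs in the brain. The sketch below is my own reconstruction from the article, not the paper's model; the peak wavelengths are hypothetical placeholders.

```python
# Hedged sketch of "categorize at the eye" color vision: each wavelength
# is assigned to whichever of 12 channels peaks closest, with no further
# comparison between channels.

# 12 hypothetical peak sensitivities, roughly evenly spaced (~38 nm apart).
PEAKS = [300 + i * 38 for i in range(12)]

def shrimp_channel(wavelength_nm):
    """Return the index of the photoreceptor whose peak is nearest."""
    return min(range(len(PEAKS)), key=lambda i: abs(PEAKS[i] - wavelength_nm))

# Colors closer together than the channel spacing land in the same bin
# and are indistinguishable, echoing the reported ~25 nm limit.
print(shrimp_channel(570), shrimp_channel(580))  # same channel
print(shrimp_channel(570), shrimp_channel(600))  # different channels
```

Human trichromacy, by contrast, compares the graded responses of three overlapping receptors downstream, which is what buys the roughly one-nanometer discrimination mentioned above.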
If you want to gaze deep into these crazy eyes (not to mention witnessing a mantis shrimp’s eye-grooming techniques!) check out this video. Dramatic mood music included.
Olaf Blanke (whose work on projecting ourselves outside our bodies I’ve mentioned previously) and collaborators extend their studies on body perception and self consciousness to show that signals from both the inside and the outside of the body are fundamental in determining our self consciousness:
Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants’ (projected, “virtual”) bodies and flashed in synchrony with their heartbeats. We investigated whether these “cardio-visual” signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants’ backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.
Experimental setup for the body conditions. Participants (a) stood with their backs facing a video camera placed 200 cm behind them (b). The video showing the participant’s body (his or her “virtual body”) was projected in real time onto a head-mounted display. An electrocardiogram was recorded, and R peaks were detected in real time (c), triggering a flashing silhouette outlining the participant’s virtual body (d). The display made it appear as though the virtual body was standing 200 cm in front of the participant (e). After each block, participants were passively displaced 150 cm backward, toward the camera, and were instructed to walk back to the original position.
(Image caption: A daydreaming brain: the yellow areas depict the default mode network from three different perspectives; the coloured fibres show the connections amongst each other and with the remainder of the brain.)
The structure of the human brain is complex, reminiscent of a circuit diagram with countless connections. But what role does this architecture play in the functioning of the brain? To answer this question, researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and University Hospital Freiburg, have for the first time analysed 1.6 billion connections within the brain simultaneously. They found the highest agreement between structure and information flow in the “default mode network,” which is responsible for inward-focused thinking such as daydreaming.
Everybody’s been there: You’re sitting at your desk, staring out the window, your thoughts wandering. Instead of getting on with what you’re supposed to be doing, you start mentally planning your next holiday or find yourself lost in a thought or a memory. It’s only later that you realize what has happened: Your brain has simply “changed channels”—and switched to autopilot.
For some time now, experts have been interested in the competition among different networks of the brain, which are able to suppress one another’s activity. If one of these approximately 20 networks is active, the others remain more or less silent. So if you’re thinking about your next holiday, it is almost impossible to follow the content of a text at the same time.
To find out how the anatomical structure of the brain impacts its functional networks, a team of researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and the University Hospital Freiburg, have analysed the connections between a total of 40,000 tiny areas of the brain. Using functional magnetic resonance imaging, they examined a total of 1.6 billion possible anatomical connections between these different regions in 19 participants aged between 21 and 31 years. The research team compared these connections with the brain signals actually generated by the nerve cells.
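For the arithmetic, a quick back-of-the-envelope check (my own, assuming the 1.6 billion figure counts ordered pairs of the 40,000 regions):

```python
# 40,000 tiny brain areas; every ordered pair is a possible connection.
n = 40_000
ordered_pairs = n * n
unordered_pairs = n * (n - 1) // 2
print(f"{ordered_pairs:,}")    # 1,600,000,000 — the figure quoted
print(f"{unordered_pairs:,}")  # ~0.8 billion if direction were ignored
```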
Their results showed the highest agreement between brain structure and brain function in areas forming part of the “default mode network”, which is associated with daydreaming, imagination, and self-referential thought. “In comparison to other networks, the default mode network uses the most direct anatomical connections. We think that neuronal activity is automatically directed to level off at this network whenever there are no external influences on the brain,” says Andreas Horn, lead author of the study and researcher in the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.
Living up to its name, the default mode network seems to become active in the absence of external influences. In other words, the anatomical structure of the brain seems to have a built-in autopilot setting. It should not, however, be confused with an idle state. On the contrary, daydreaming, imagination, and self-referential thought are complex tasks for the brain.
“Our findings suggest that the structural architecture of the brain ensures that it automatically switches to something useful when it is not being used for other activities,” says Andreas Horn. “But the brain only stays on autopilot until an external stimulus causes activity in another network, putting an end to the daydreaming. A buzzing fly, a loud bang in the distance, or focused concentration on a text, for example.”
The researchers hope that their findings will contribute to a better understanding of brain functioning in healthy people, but also of neurodegenerative disorders such as Alzheimer’s disease and psychiatric conditions such as schizophrenia. In follow-up studies, the research team will compare the brain structures of patients with neurological disorders with those of healthy controls.
Previous research suggests that people construct mental time lines to represent and reason about time. However, is the ability to represent space truly necessary for representing events along a mental time line? Our results are the first to demonstrate that deficits in spatial representation (as a function of left hemispatial neglect) also result in deficits in representing events along the mental time line. Specifically, we show that patients with left hemispatial neglect have difficulty representing events that are associated with the past and, thus, fall to the left on the mental time line. These results demonstrate that representations of space and time share neural underpinnings and that representations of time have specific spatial properties (e.g., a left and a right side). Furthermore, it appears that intact spatial representations are necessary for at least some types of temporal representation.
deercr0ssing asked: I hope speculation is encouraged. I'm slightly scientifically literate, but I lack any meaningful qualifications. I'm obsessed with the concept of consciousness. I have two hypotheses that I'm wondering if you could provide any insight to prove/disprove. First: I tend to think of neurons as finite fragments of information, but storage nonetheless. But the actual consciousness takes place between the synapses. Second: Consciousness, on some scale, must have quantum properties.. right?
Speculation is always encouraged! It is nice to hear someone else is also obsessed with consciousness. I want to warn you now that I am passionate about this topic, and so will probably answer in a long-winded manner.
First, the problem of information is one of the most fundamental problems neuroscience must answer. That is, what is the basic unit of information being transferred? It does seem to have something to do with neurons, but exactly what is being transferred? Information could be contained in the electrochemical signals (typically action potentials) between neurons. However, the probability of an action potential occurring is regulated by a large range of factors, such as membrane permeability, neurotransmitter concentration, receptor concentration, etc. One level lower, the information could be contained in the diffusion of water across neurons and glial cells. Lower still, it could be the transfer of electrons through various biological constraints. The same can be said about storage. The concurrent firing of action potentials leads to changes in the synapses, myelin, and cell membranes, and can even lead to additional synapses (or neurons). Exactly what is being stored, and where, is not entirely clear.
Ultimately I think all of the above (and much more) can be thought of as information. Likewise, consciousness probably involves the transfer of information in many other ways, not just the synapses. As a science, we are slowly moving away from the idea that there must be ‘finite fragments of information’, which most likely stemmed from our experience with computers where there is fundamental discrete information - a bit. (Personal note: this is the main reason why I am currently focused on investigating the process of consciousness, instead of the mechanism.)
Second, I am not sure what you mean by quantum properties. Ultimately, everything is based on quantum properties, but I see no reason that consciousness requires specific quantum relationships. I often hear that entanglement may be the solution to consciousness, where atoms or molecules can transfer information instantaneously. As far as I know, there is no reason at all to require entanglement in consciousness, though the media loves it. In my opinion, talking about quantum properties for consciousness is interesting, but does not tell us anything new about consciousness and merely adds complication. Right now, we need to focus on what we know, and what we can test.
For example, we now know that access consciousness (if you don’t know what that is, look up Ned Block) requires a feedback loop. In vision, light enters the retina, is sent first to the visual cortex, then the frontal cortices, and then back to the visual cortex (disclaimer: this is way oversimplified). If we use TMS to knock out the feedforward stream from the visual cortex to the frontal cortex, we still consciously perceive the stimulus. However, if we knock out the feedback stream from the frontal cortices to the visual cortex, we lose conscious awareness. These sorts of experiments lead to a deeper understanding, and until we tease apart everything we can about the process of consciousness, I don’t think we can say anything about how it occurs.
I hope this answered your questions. If not, or if something is not clear, please let me know! The more collaboration we have on the topic, the more likely we will come to a solution.
How is it that art moves us? What is happening when we react to the aesthetic in our lives? Vessel, Starr and Rubin (citation below) used fMRI and some specifically chosen paintings to investigate the aesthetic experience.
They wanted to separate the experience of being moved by art from the sensory stimuli of art, so that they could look at the highly personal part of appreciation. To do this they assembled a set of over 100 images of paintings hanging in art museums/galleries (and therefore agreed to be ‘art’) but that had not had a great deal of public exposure, so that they were new to the subjects. They also ensured that the set covered a wide range of dates, styles, methods, etc., because they wanted the subjects to differ somewhat in the pictures they were moved by. The subjects were shown all the pictures in random order, during fMRI scanning. They were asked to rate each picture in terms of how much it ‘moved them’. They rated the images 1 to 4, with 4 being the most moving, but they were not given any detailed instruction on criteria for the rating beyond just the idea of ‘how much the piece of art moved them’.
The results showed that subjects were making highly personal assessments. Paintings that got 4s also got 1s; ratings were not consistent across subjects. There were fewer 4s than would be expected in comparison to similar rating experiments, as if the subjects were reserving a 4 for a somewhat special reaction. There were three patterns in the fMRI recordings. There were areas that showed no difference in activity for different ratings. For example, the activity in the visual cortex was more or less the same for a 1-, 2-, 3- or 4-rated picture. There were areas that showed a linear change with rating. The occipitotemporal cortex and some subcortical areas showed this linear rise in activity with the rating. Most interestingly, there were some areas in the anterior cortex where there was a jump in activity between the 1, 2 and 3 ratings and the 4 ratings. It seemed that only the paintings rated 4 activated these areas. This may have been the special criterion that explained why 4s were slightly rarer than expected. A 4 marked a difference in kind rather than degree.
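The three response profiles (flat, linear rise, jump only at 4) can be made concrete with a toy sketch. The numbers and thresholds below are invented for illustration; the paper’s actual analysis was a parametric fMRI contrast, not this heuristic:

```python
def classify_profile(means, flat_tol=0.1):
    """Toy classifier for a brain region's mean activity at ratings 1-4.

    Labels the profile 'flat' (no rating effect), 'linear' (graded rise),
    or 'step' (a jump only between 3 and 4). Thresholds are arbitrary
    illustrations, not the study's method.
    """
    a1, a2, a3, a4 = means
    spread = max(means) - min(means)
    if spread < flat_tol:
        return "flat"        # e.g. early visual cortex
    jump = a4 - a3                      # the 3-to-4 jump
    graded = (a3 - a1) / 2              # average step size among ratings 1-3
    if jump > 2 * max(graded, flat_tol):
        return "step"        # e.g. the anterior / default-mode regions
    return "linear"          # e.g. occipitotemporal cortex

print(classify_profile([1.0, 1.0, 1.0, 1.05]))  # flat
print(classify_profile([1.0, 1.3, 1.6, 1.9]))   # linear
print(classify_profile([1.0, 1.1, 1.2, 2.5]))   # step
```

The interesting case is the third profile: the activity barely changes across ratings 1–3 and then leaps for a 4, which is why the authors read the 4s as a difference in kind rather than degree.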
What were the functions of the areas identified by the scans? They were areas associated with the default mode network. As the task for subjects in the scanner was a ‘task’ that involved attention to external stimuli, it would be assumed that the default mode network would have deactivated. The default mode involves self-referential activity rather than activity driven by external inputs. But this is not a hard and fast rule. Some tasks require particular self-reference, and areas of the default mode network can then be added to the task-positive network. The area of the default mode network that had the most activation (or lack of deactivation) in the 4-rated scans was the medial prefrontal cortex (MPFC).
“Ventral portions of the MPFC are involved in affective decision making processes, including (but not restricted to) encoding the subjective value of future rewards and assessing the emotional salience of stimuli. The anterior and dorsal portions of MPFC are active in tasks involving self-knowledge such as making judgments about oneself as well as about close others (family and friends), self-relevant moral decision-making and in “theory of mind” tasks that require gauging others’ perspectives.”
And the timing of the MPFC rise in activity is similar to our reaction to our name. “This is reminiscent of the MPFC recovery from deactivation observed when a highly self-relevant stimulus such as one’s own name is presented in a stream of self-irrelevant stimulation, as in the “cocktail party effect”.”
In other words, it was when the picture touched the self-identity of the subject rather than just their reaction to visual stimuli that they had the feeling of being specially moved. And no doubt this is why the ratings of a painting were so individual – why the same painting could get a 1 from one subject and a 4 from another.
“We propose that certain artworks can “resonate” with an individual’s sense of self in a manner that has well-defined physiological correlates and consequences: the neural representations of those external stimuli obtain access to the neural substrates and processes concerned with the self—namely to regions of the DMN (default mode network). This access, which other external stimuli normally do not obtain, allows the representation of the artwork to interact with the neural processes related to the self, affect them, and possibly even be incorporated into them (i.e., into the future, evolving representation of self).”
The paper ends with this observation:
“…if our self identity is to be influenced by the world we inhabit, it may be that similar moments should occur with greater frequency than would be expected based on the current conceptualization of the DMN as a network that is invariably suppressed during mental activity which is directed at the external world. It may be that our findings are just the “tip of the iceberg”—i.e., that instances of resonance between external stimuli and internal, self-related processing are more commonplace in daily life than what has so far been captured in fMRI experiments in the laboratory. By that view, much of our existence may be well-served by switching between periods of dominance of externally-directed (“task-positive”) brain networks over the DMN and vice versa, but those periods are punctuated by significant moments when our brains detect a certain “harmony” between the external world and our internal representation of the self—allowing the two systems to co-activate, interact, influence and reshape each other. ”
With evidence growing that meditation can have beneficial health effects, scientists have sought to understand how these practices physically affect the body.
A new study by researchers in Wisconsin, Spain, and France reports the first evidence of specific molecular changes in the body following a period of mindfulness meditation.
The study investigated the effects of a day of intensive mindfulness practice in a group of experienced meditators, compared to a group of untrained control subjects who engaged in quiet non-meditative activities. After eight hours of mindfulness practice, the meditators showed a range of genetic and molecular differences, including altered levels of gene-regulating machinery and reduced levels of pro-inflammatory genes, which in turn correlated with faster physical recovery from a stressful situation.
"To the best of our knowledge, this is the first paper that shows rapid alterations in gene expression within subjects associated with mindfulness meditation practice," says study author Richard J. Davidson, founder of the Center for Investigating Healthy Minds and the William James and Vilas Professor of Psychology and Psychiatry at the University of Wisconsin-Madison.
"Most interestingly, the changes were observed in genes that are the current targets of anti-inflammatory and analgesic drugs," says Perla Kaliman, first author of the article and a researcher at the Institute of Biomedical Research of Barcelona, Spain (IIBB-CSIC-IDIBAPS), where the molecular analyses were conducted.
The study was published in the journal Psychoneuroendocrinology.
Mindfulness-based trainings have shown beneficial effects on inflammatory disorders in prior clinical studies and are endorsed by the American Heart Association as a preventative intervention. The new results provide a possible biological mechanism for therapeutic effects.
The results show a down-regulation of genes that have been implicated in inflammation. The affected genes include the pro-inflammatory genes RIPK2 and COX2 as well as several histone deacetylase (HDAC) genes, which regulate the activity of other genes epigenetically by removing a type of chemical tag. What’s more, the extent to which some of those genes were downregulated was associated with faster cortisol recovery to a social stress test involving an impromptu speech and tasks requiring mental calculations performed in front of an audience and video camera.
Perhaps surprisingly, the researchers say, there was no difference in the tested genes between the two groups of people at the start of the study. The observed effects were seen only in the meditators following mindfulness practice. In addition, several other DNA-modifying genes showed no differences between groups, suggesting that the mindfulness practice specifically affected certain regulatory pathways.
However, it is important to note that the study was not designed to distinguish any effects of long-term meditation training from those of a single day of practice. Instead, the key result is that meditators experienced genetic changes following mindfulness practice that were not seen in the non-meditating group after other quiet activities — an outcome providing proof of principle that mindfulness practice can lead to epigenetic alterations of the genome.
Previous studies in rodents and in people have shown dynamic epigenetic responses to physical stimuli such as stress, diet, or exercise within just a few hours.
"Our genes are quite dynamic in their expression and these results suggest that the calmness of our mind can actually have a potential influence on their expression," Davidson says.
"The regulation of HDACs and inflammatory pathways may represent some of the mechanisms underlying the therapeutic potential of mindfulness-based interventions," Kaliman says. "Our findings set the foundation for future studies to further assess meditation strategies for the treatment of chronic inflammatory conditions."
The research – from Sam Deadwyler’s team at Wake Forest University (and funded by DARPA) – really is pretty amazing – if it pans out.
Four Rhesus macaques were trained to perform a short-term delayed-match-to-sample memory task, involving remembering the position and shape of an icon on a screen, and then picking it out from a line-up up to 40 seconds later. The task is difficult. Even though the monkeys were trying hard to succeed, in order to earn a tasty juice reward, they made a lot of mistakes.
…until they got a helping hand:
An array of electrodes was implanted into the hippocampus, able to both record neural activity and stimulate it. Using a mathematical model called “MIMO”, the authors first determined the pattern of activity that was seen when each animal correctly performed each type of trial (“strong code”), and when they failed (“weak code”).
Then, by monitoring activity on a trial-by-trial basis, they were able to predict whether a monkey was going to succeed or not. If it was set for failure, they stimulated the hippocampus to reproduce the ‘correct’ activity pattern – they injected the “strong code” that was missing.
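The closed loop described above – build templates for the “strong” and “weak” codes, predict each trial’s outcome from ongoing activity, and stimulate only when failure is predicted – can be sketched in miniature. This is a nearest-centroid stand-in, not the actual MIMO model (which is a multi-input multi-output nonlinear dynamical model), and all the firing-rate numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical templates: mean firing rates (spikes/s) across 16
# hippocampal channels for correct ("strong code") and failed
# ("weak code") trials. Purely illustrative numbers.
strong_code = rng.uniform(5, 20, size=16)
weak_code = strong_code * 0.4 + rng.normal(0, 1, size=16)

def predict_outcome(trial_activity, strong, weak):
    """Stand-in for the MIMO prediction step: label the trial by
    whichever template its activity pattern resembles more."""
    d_strong = np.linalg.norm(trial_activity - strong)
    d_weak = np.linalg.norm(trial_activity - weak)
    return "success" if d_strong < d_weak else "failure"

def closed_loop(trial_activity, strong, weak):
    """If the trial looks headed for failure, 'stimulate': inject the
    strong-code pattern in place of the weak one."""
    if predict_outcome(trial_activity, strong, weak) == "failure":
        return strong.copy(), True   # stimulated
    return trial_activity, False     # left alone

# One simulated weak trial: a noisy version of the weak code.
trial = weak_code + rng.normal(0, 0.5, size=16)
pattern, stimulated = closed_loop(trial, strong_code, weak_code)
print(stimulated)  # a weak trial triggers stimulation
```

The key design point survives even in this cartoon: the system never overrides trials that already look like successes, which matches the paper’s trial-by-trial, stimulate-on-predicted-failure protocol.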
The MIMO stimulation (red line) improved memory performance compared to no stimulation (blue) and a ‘scrambled’ stimulation (green) containing no useful information, a crucial control condition. Interestingly, the scrambled stimulation did not impair performance. Information processing in the hippocampus must be fairly robust to ‘noise’. One for the computational modellers to ponder.
Two years ago, the same team reported the success of this system in rats. But implementing it in non-human primates is a big step up the ladder of brain complexity (although the anatomy of the hippocampus is fairly constant across species.)
Deadwyler et al. have also published the results of a different neuroprosthesis in monkeys, one aimed at the prefrontal cortex. In that paper, the ‘helping hand’ helped the monkeys make the correct choice when selecting their response. This time, it was the actual encoding of memory that was boosted.
So that’s cool. A relatively simple algorithm was able to do the hippocampus’s job better than the hippocampus itself did. But I wonder, is this because the delayed-match-to-sample memory task is so artificial and repetitive – i.e. unlike the kinds of tasks that the hippocampus evolved to perform? Would a neuroprosthesis be able to boost a monkey’s memory ‘in the wild’?
Which raises the really big question – how might this help humans?
Hampson RE, Song D, Opris I, Santos LM, Shin DC, Gerhardt GA, Marmarelis VZ, Berger TW, & Deadwyler SA (2013). Facilitation of memory encoding in primate hippocampus by a neuroprosthesis that promotes task-specific neural firing. Journal of Neural Engineering, 10. DOI: 10.1088/1741-2560/10/6/066013
Walking through the aisles of my local health food store, the Williamson Street Co-op, I’ve often been tempted by the claims of exotic yoghurts and “probiotic” drinks like kefir that contain strains of Lactobacillus and a number of other “good” bacteria. It turns out a number of these bugs produce and release into our gut neuroactive compounds such as GABA, an inhibitory neurotransmitter, and serotonin, a mood regulator. Dinan et al. review what they term psychobiotics (organisms that alleviate psychiatric illness):
Here, we define a psychobiotic as a live organism that, when ingested in adequate amounts, produces a health benefit in patients suffering from psychiatric illness. As a class of probiotic, these bacteria are capable of producing and delivering neuroactive substances such as gamma-aminobutyric acid and serotonin, which act on the brain-gut axis. Preclinical evaluation in rodents suggests that certain psychobiotics possess antidepressant or anxiolytic activity. Effects may be mediated via the vagus nerve, spinal cord, or neuroendocrine systems. So far, psychobiotics have been most extensively studied in a liaison psychiatric setting in patients with irritable bowel syndrome, where positive benefits have been reported for a number of organisms including Bifidobacterium infantis. Evidence is emerging of benefits in alleviating symptoms of depression and in chronic fatigue syndrome. Such benefits may be related to the anti-inflammatory actions of certain psychobiotics and a capacity to reduce hypothalamic-pituitary-adrenal axis activity. Results from large scale placebo-controlled studies are awaited.
The very word ‘willpower’ implies a metaphor: that actions (and inhibition of actions) are a matter of conscious will and that they require the use of a resource or source of power. What powers the will is willpower. This is a kind of folk psychology – it takes a special sort of effort to have self-control, make a decision, solve a problem or resolve conflict. People vary in how much of this special effort they can sustain, and it is limited. Will is like a muscle: it can tire, but if ‘exercised’ it can become stronger. Baumeister and others investigated this view of willpower experimentally. The metaphor was supported by showing that different tasks thought to require willpower interfered with one another. This phenomenon was called “ego depletion”. (I find that name hints at a Freudian picture.) It also appeared that tasks associated with willpower consumed glucose, and this might be the limited fuel. This was a nice clear picture – the metaphor was holding up. But – this is one of those metaphors that is true if you believe it. If you believe that willpower is required to do hard mental work, and that it is limited and can be used up, then that is what you will find.
But then the doubts came. Job and others showed that ego depletion works only if the subject believes the theory, and Clarkson and others showed that the subject had to believe that they were short of energy for sugar to be limiting. It seems that gargling sugar water is as effective as swallowing it. Some people think that physical exercise depletes willpower, and for them it does. Others believe that exercise is mentally invigorating, and surprise, it is. This history is reviewed by Brass (see citation below).
Doubts have also been raised about whether conscious will is needed at all – whether decisions and other ‘will’-requiring tasks have to be conscious. So both the will and the power in willpower are now suspect.
Brass and others also outline another way to look at willpower. The brain compares the predicted reward of doing something with the predicted effort. This is what affects what people decide to do, manage to do, and manage not to do. So instead of calling it willpower, we now can call it self-control and leave the old baggage behind. People vary in what they bring to the table when making the comparison of reward to effort. That is really what is involved in some people being able to resist temptation and others not. They include different values in the assessment of reward versus effort. The interference between tasks is thought to be due to the tasks requiring the same set of brain regions, and those areas not being good at doing two things at the same time.
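The reward-versus-effort comparison can be written down as a trivially simple decision rule. This is a toy sketch of the idea, not a model from the Brass paper; the options, numbers, and the `effort_weight` parameter (standing in for individual differences in how costly effort feels) are all invented:

```python
def choose(options, effort_weight=1.0):
    """Pick the option with the highest net value, where net value is
    predicted reward minus (individually weighted) predicted effort."""
    def net(option):
        return option["reward"] - effort_weight * option["effort"]
    return max(options, key=net)

# Two hypothetical behavioural options with made-up values.
options = [
    {"name": "eat the cake", "reward": 5.0, "effort": 0.5},
    {"name": "resist and go for a run", "reward": 7.0, "effort": 4.0},
]

# Someone who finds effort very costly picks the cake (4.5 vs 3.0)...
print(choose(options, effort_weight=1.0)["name"])   # eat the cake
# ...while someone who discounts effort less resists (4.75 vs 5.0).
print(choose(options, effort_weight=0.5)["name"])   # resist and go for a run
```

The point of the sketch is that nothing here is a depletable resource: the same comparison, run with different weights on reward and effort, reproduces why one person resists temptation and another does not.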
Interestingly, most of the tasks that are described as drawing on willpower are tasks that involve the mPFC (medial pre-frontal cortex), and in particular the ACC (anterior cingulate cortex). … The research outlined here suggests that the mPFC, and in particular the ACC, might be a central node in the neural circuit related to willpower. From what we know about the ACC, however, it is not plausible to assume that it provides a common resource, but rather that it has a kind of regulatory function determining the level of effort that is invested in a task. In a recent position paper, Holroyd and Yeung argued that the ACC is involved in choosing between different behavioural options and determining the level of effort that is invested in executing the chosen behavioural option. This description is consistent with the idea that the ACC implements a regulatory mechanism that determines the intentional investment in a specific response option or task. Accordingly, there is strong evidence for construing willpower as a regulatory function that can be related to specific brain structures in the mPFC. While such a regulatory mechanism is evidently required in situations of self-control and complex choice, we argue that any kind of intentional decision draws to some degree on this mechanism.
Brass M, Lynn MT, Demanet J, & Rigoni D (2013). Imaging volition: what the brain can tell us about the will. Experimental Brain Research, 229(3), 301-12. PMID: 23515626