Subtitled “Conscious Motor Intention Increases the Excitability of Target-Specific Motor Circuits”, the article’s abstract was no less bold, concluding that:
These results indicate that conscious intentions govern motor function… until today, it was unclear whether conscious motor intention exists prior to movement, or whether the brain constructs such an intention after movement initiation.
The authors, Zschorlich and Köhling of the University of Rostock, Germany, are weighing in on a long-standing debate in philosophy, psychology, and neuroscience concerning the role of consciousness in controlling our actions.
To simplify, one school of thought holds that (at least some of the time), our intentions or plans control our actions. Many people would say that this is what common sense teaches us as well.
But there’s an alternative view, in which our consciously-experienced intentions are not causes of our actions but are actually products of them, being generated after the action has already begun. This view is certainly counterintuitive, and many find it disturbing as it seems to undermine ‘free will’.
That’s the background. Zschorlich and Köhling say that they’ve demonstrated that conscious intentions do exist, prior to motor actions, and that these intentions are accompanied by particular changes in brain activity. They claim to have done this using transcranial magnetic stimulation (TMS), a way of causing a localized modulation of brain electrical activity.
TMS of the motor cortex can cause muscle twitches, because this part of the brain controls our muscles. In 14 healthy volunteers, Zschorlich and Köhling aimed TMS at the area responsible for controlling movements of the left arm. Importantly, they adjusted the strength of the pulse so that it was only just strong enough to cause a tiny twitch (as measured using electrodes over the muscles of the left wrist themselves).
Remarkably, however, they found that if people were ‘consciously intending’ to flex their wrist, the same weak TMS pulse prompted a strong flexion response. Whereas if the volunteer was intending to extend their wrist, the very same pulse caused an extension movement.
Here’s an example from one representative subject, showing the differences in muscle activity in the flexing (FCR) and extending (ECR) muscles of the wrist following the TMS pulses:
The authors hypothesize that the brain’s ‘intention network’ prepares desired actions by increasing the excitability of the cells in the motor cortex that can produce the movement intended. On this view, a weak TMS pulse provides just enough extra activation to trigger those pre-excited cells into firing, while being too weak to activate cells that govern other movements.
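The pre-excitation idea can be sketched as a toy threshold model. To be clear, this is purely illustrative: the numbers and names below are invented, not the authors' model. Intention adds a subthreshold boost to one circuit only, so a weak pulse fires that circuit and no other:

```python
# Toy sketch (not the authors' model): a circuit "fires" when its total
# drive crosses a threshold. Conscious intention is modeled as a small
# pre-excitation added only to the intended movement's circuit, so the
# same weak TMS pulse pushes that circuit, and only that circuit, over.

THRESHOLD = 1.0
WEAK_TMS = 0.4            # hypothetical drive from the near-threshold pulse
PRE_EXCITATION = 0.7      # hypothetical boost from motor intention

def response(intended, circuit):
    drive = WEAK_TMS + (PRE_EXCITATION if circuit == intended else 0.0)
    return "strong movement" if drive >= THRESHOLD else "tiny twitch at most"

print(response("flexion", "flexion"))    # pre-excited circuit fires
print(response("flexion", "extension"))  # the other circuit stays quiet
```

On this toy account, the pulse acts as a readout of which circuit intention has already primed, which is the core of the authors' interpretation.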
It’s an interesting model and these are striking results, from a beautifully simple experiment. My only concern is that it might be too simple. There was no control condition for the TMS: every TMS pulse was real.
It would have been better to have used a control, either a ‘sham’ pulse, or a real TMS pulse over a different part of the brain. I say that because – unless I’m missing something here – we don’t actually know that the TMS pulse was triggering the wrist movements. The volunteers got to trigger the TMS themselves:
Volunteers were asked to develop an intention […] and to trigger the TMS with the right index finger if the urge to move was greatest before any overt motor output at the wrist.
As far as I can see, volunteers could simply have been pressing the TMS button and then moving their wrist of their own accord. Ironically, they might not have consciously intended to do this; they might have really believed that their movements were being externally triggered (by the TMS) even though they themselves were generating them. This can happen: it’s called the ideomotor phenomenon, and is probably the explanation for why people believe in ‘dowsing’ amongst other things.
All we know for sure, as I understand it, is that 1. their right hand pushed a button, 2. TMS happened, and 3. their left wrist moved. We don’t know that 2 caused 3. A control TMS condition would have allowed us to know whether the TMS was really involved – and, perhaps, whether conscious intention or unconscious ideomotor acts were governing those errant wrists.
Zschorlich VR, & Köhling R (2013). How thoughts give rise to action – conscious motor intention increases the excitability of target-specific motor circuits. PloS ONE, 8 (12) PMID: 24386291
Famed amnesia case K.C. died last week. Having lost both hippocampi after a motorcycle accident, he was somehow able to hold on to some memories, though “devoid of all context and emotion”… and his identity.
That’s actually a common theme in the neuroscience of accidents. It’s easy to see the victims of brain damage as reduced or diminished, and they are in some ways. But much of what they feel from moment to moment is exactly what you or I feel, and there’s almost nothing short of death that can make you forget who you are. Amid all the fascinating injuries in neuroscience history, you’ll come across a lot of tales of woe and heartbreak. But there’s an amazing amount of resiliency in the brain, too. [via]
In what strikes me as a most unlikely venue, The Huffington Post, new age guru (also savvy businessman and marketer) Deepak Chopra offers what seems to be an equivalent of the “teach the controversy” arguments of the creationists. The title “‘Collision Course’ in the Science of Consciousness: Grand Theories to Clash at Tucson Conference” suggests that there are two grand theories when in fact there are not. Massive evidence supports the idea that consciousness is accounted for by complex interactions between nerve cells, and Chopra does give a nice summary of two central researchers taking this approach:
Christof Koch now teams with psychiatrist and neuroscientist Giulio Tononi in applying principles of integrated information, computation and complexity to the brain’s neuronal and network-level electrochemical activities. In their view, consciousness depends on a system’s ability to integrate complex information, to compute particular states from among possible states according to algorithms. Deriving a measure of complex integration from EEG signals termed ‘phi’, they correlate consciousness with critically complex levels of ‘phi’.
Regarding the ‘hard problem’, Koch, Tononi and their physicist colleague Max Tegmark have embraced a form of panpsychism in which consciousness is a property of matter. Simple particles are conscious in a simple way, whereas such particles, when integrated in complex computation, become fully conscious (the ‘combination problem’ in panpsychism philosophy). Tegmark has termed conscious matter ‘perceptronium’, and his alliance with Koch and Tononi is Crick’s legacy and a major force in the present-day science of consciousness. Their view of neurons as fundamental units whose complex synaptic interactions account for consciousness, also supports widely-publicized, and well-funded ‘connectome’ and ‘brain mapping’ projects hoping to capture brain function in neuronal network architecture.
I can see absolutely nothing but gibberish in the vague array of alternatives to this sort of approach mentioned by Chopra, Penrose, Hameroff, and others: non-computational processes, quantum superposition, connections to spacetime geometry, coherent cellular microtubule states. Elegant hand waving perhaps, but where is the model? How is it to be tested?
Mindfulness meditation produces personal experiences that are not readily interpretable by scientists who want to study its psychiatric benefits in the brain. At a conference near Boston April 5, 2014, Brown University researchers will describe how they’ve been able to integrate mindfulness experience with hard neuroscience data to advance more rigorous study.
Mindfulness is always personal and often spiritual, but the meditation experience does not have to be subjective. Advances in methodology are allowing researchers to integrate mindfulness experiences with brain imaging and neural signal data to form testable hypotheses about the science — and the reported mental health benefits — of the practice.
A team of Brown University researchers, led by junior Juan Santoyo, will present their research approach at 2:45 p.m. on Saturday, April 5, 2014, at the 12th Annual International Scientific Conference of the Center for Mindfulness at the University of Massachusetts Medical School. Their methodology employs a structured coding of the reports meditators provide about their mental experiences, which can then be rigorously correlated with quantitative neurophysiological measurements.
“In the neuroscience of mindfulness and meditation, one of the problems that we’ve had is not understanding the practices from the inside out,” said co-presenter Catherine Kerr, assistant professor (research) of family medicine and director of translational neuroscience in Brown’s Contemplative Studies Initiative. “What we’ve really needed are better mechanisms for generating testable hypotheses – clinically relevant and experience-relevant hypotheses.”
Now researchers are gaining the tools to trace experiences described by meditators to specific activity in the brain.
“We’re going to [discuss] how this is applicable as a general tool for the development of targeted mental health treatments,” Santoyo said. “We can explore how certain experiences line up with certain patterns of brain activity. We know certain patterns of brain activity are associated with certain psychiatric disorders.”
Structuring the spiritual
At the conference, the team will frame these broad implications with what might seem like a small distinction: whether meditators focus on their sensations of breathing in their nose or in their belly. The two meditation techniques hail from different East Asian traditions. Carefully coded experience data gathered by Santoyo, Kerr, and Harold Roth, professor of religious studies at Brown, show that the two techniques produced significantly different mental states in student meditators.
“We found that when students focused on the breath in the belly their descriptions of experience focused on attention to specific somatic areas and body sensations,” the researchers wrote in their conference abstract. “When students described practice experiences related to a focus on the nose during meditation, they tended to describe a quality of mind, specifically how their attention ‘felt’ when they sensed it.”
The ability to distill a rigorous distinction between the experiences came not only from randomly assigning meditating students to two groups – one focused on the nose and one focused on the belly – but also by employing two independent coders to perform standardized analyses of the journal entries the students made immediately after meditating.
This kind of structured coding of self-reported personal experience is called “grounded theory methodology.” Santoyo’s application of it to meditation allows for the formation of hypotheses.
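As a hedged illustration of how such coding can be made rigorous (the category labels and data below are invented; the abstract does not publish the actual coding scheme), agreement between two independent coders beyond what chance would produce is commonly quantified with Cohen's kappa:

```python
# Illustrative only: two hypothetical coders label each journal entry as
# "somatic" (body-focused) or "quality" (quality-of-mind-focused).
# Cohen's kappa corrects raw agreement for agreement expected by chance.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["somatic", "somatic", "quality", "somatic", "quality", "quality"]
b = ["somatic", "somatic", "quality", "quality", "quality", "quality"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Values near 1 indicate the coding scheme is reliable enough to correlate with brain data; values near 0 mean the coders might as well be guessing.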
For example, Kerr said, “Based on the predominantly somatic descriptions of mindfulness experience offered by the belly-focused group, we would expect there to be more ongoing, resting-state functional connectivity in this group across different parts of a large brain region called the insula that encodes visceral, somatic sensations and also provides a readout of the emotional aspects of so-called ‘gut feelings’.”
Unifying experience and the brain
The next step is to correlate the coded experiences data with data from the brain itself. A team of researchers led by Kathleen Garrison at Yale University, including Santoyo and Kerr, did just that in a paper in Frontiers in Human Neuroscience in August 2013. The team worked with deeply experienced meditators to correlate the mental states they described during mindfulness with simultaneous activity in the posterior cingulate cortex (PCC). They measured that with real-time functional magnetic resonance imaging.
They found that when meditators of several different traditions reported feelings of “effortless doing” and “undistracted awareness” during their meditation, their PCC showed little activity, but when they reported that they felt distracted and had to work at mindfulness, their PCC was significantly more active. Given the chance to observe real-time feedback on their PCC activity, some meditators were even able to control the levels of activity there.
“You can observe both of these phenomena together and discover how they are co-determining one another,” Santoyo said. “Within 10 one-minute sessions they were able to develop certain strategies to evoke a certain experience and use it to drive the signal.”
A theme of the conference, and a key motivator in Santoyo and Kerr’s research, is connecting such research to tangible medical benefits. Meditators have long espoused such benefits, but support from neuroscience and psychiatry has been considerably more recent.
In a February 2013 paper in Frontiers in Human Neuroscience, Kerr and colleagues proposed that much like the meditators could control activity in the PCC, mindfulness practitioners may gain enhanced control over sensory cortical alpha rhythms. Those brain waves help regulate how the brain processes and filters sensations, including pain, and memories such as depressive cognitions.
Santoyo, whose family emigrated from Colombia when he was a child, became inspired to investigate the potential of mindfulness to aid mental health beginning in high school. Growing up in Cambridge and Somerville, Mass., he observed the psychiatric difficulties of the area’s homeless population. He also encountered them while working in food service at a Cambridge hospital.
“In low-income communities you always see a lot of untreated mental health disorders,” said Santoyo, who meditates regularly and helps to lead a mindfulness group at Brown. He is pursuing a degree in neuroscience and contemplative science. “The perspective of contemplative theory is that we learn about the mind by observing experience, not just to tickle our fancy but to learn how to heal the mind.”
It’s a long path, perhaps, but Santoyo and his collaborators are making steady progress along it.
Perhaps the most plausible suggestion for why music is universal in human societies is that it plays a central role in emotional social signaling that could have promoted group cohesion. Clark et al. comment on new work by Mas-Herrero et al., who have now documented a group of healthy people who, while responding normally to typical rewarding stimuli, appear to have a specific musical anhedonia, deriving no pleasure from music even though perceiving it normally. They cannot experience the intensely pleasurable shivers down the spine, or ‘chills’, that are specific to and reliably triggered by particular musical features like the resolution of tonal ambiguity. These activate a distributed brain network including phylogenetically ancient limbic, striatal, and midbrain structures also engaged by cocaine and sex. Clips from Clark et al.:
The musical anhedonia found by Mas-Herrero et al. is specific for musical reward assignment, rather than attributable to any deficiency in perceiving or recognising music or musical emotions. It is rooted in reduced autonomic reactivity rather than simply cognitive mislabelling. Moreover, it is not attributable to more general hedonic blunting, because musically anhedonic individuals show typical responses to other sources of biological and non-biological (monetary) reward. The most parsimonious interpretation of the new findings is that there are music-specific brain reward systems to which individuals show different levels of access… specific brain substrates for music coding… implies that these evolved in response to some biological imperative. But what might that have been?
The answer may lie in the kinds of puzzles that music helped our hominid ancestors to solve. Arguably the most complex, ambiguous and puzzling patterns we are routinely required to analyse are the mental states and motivations of other people, with clear implications for individual success in the social milieu. Music can model emotional mental states and failure to deduce such musical mental states correlates with catastrophic inter-personal disintegration in the paradigmatic acquired disorder of the human social brain, frontotemporal dementia …Furthermore, this music cognition deficit implicates cortical areas engaged in processing both musical reward and ‘theory of mind’ (our ability to infer the mental states of other people). Our hominid ancestors may have coded surrogate mental states in the socially relevant form of vocal sound patterns. By allowing social routines to be abstracted, rehearsed and potentially modified without the substantial cost of enacting the corresponding scenarios, such coding may have provided an evolutionary mechanism by which specific brain linkages assigned biological reward value to precursors of music.
These new insights into musical anhedonia raise many intriguing further questions. What is its neuroanatomical basis? The strong prediction would lie with mesolimbic dopaminergic circuitry, but functional neuroimaging support is sorely needed.
Here is the summary from the Mas-Herrero paper:
Music has been present in all human cultures since prehistory, although it is not associated with any apparent biological advantages (such as food, sex, etc.) or utility value (such as money). Nevertheless, music is ranked among the highest sources of pleasure, and its important role in our society and culture has led to the assumption that the ability of music to induce pleasure is universal. However, this assumption has never been empirically tested. In the present report, we identified a group of healthy individuals without depression or generalized anhedonia who showed reduced behavioral pleasure ratings and no autonomic responses to pleasurable music, despite having normal musical perception capacities. These persons showed preserved behavioral and physiological responses to monetary reward, indicating that the low sensitivity to music was not due to a global hypofunction of the reward network. These results point to the existence of specific musical anhedonia and suggest that there may be individual differences in access to the reward system.
ScienceDaily (here) has an item on an interesting paper: Loretxu Bergouignan, Lars Nyberg, and H. Henrik Ehrsson. Out-of-body–induced hippocampal amnesia. Proceedings of the National Academy of Sciences, March 10, 2014.
Our feeling of our bodies is important to storing and retrieving episodic memories. The experimenters had subjects wear virtual reality goggles which either left them with their own bodies or forced an ‘out-of-body’ illusion. The subjects could remember the events that happened when their body image was not disturbed, but when they tried to remember events that happened while they felt out of their bodies, they had difficulty. Henrik Ehrsson is quoted as saying, “The fMRI scans further revealed a crucial difference in activity in the portion of the temporal lobe — the hippocampus — that is known to be central for episodic memories. When they tried to remember what happened during the interrogations experienced out-of-body, activity in the hippocampus was eliminated, unlike when they remembered the other situations. However, we could see activity in the frontal lobe cortex, so they were really making an effort to remember.”
I am inclined to think that memory is a question of saving experiences that may be useful. We know that the hippocampus associates our location with events in memory and that it tracks the timing or ordering of events. There is also often a mood and emotional colouring to remembered events. And extremely important is the sense of how much is invested and how much ownership is taken in events. We remember effort. We remember errors. We remember hard decisions. We remember good places and people and we remember bad ones too. To put it simply, we remember what may be useful. What happened when our bodies were not involved is not very useful – it might as well be someone else’s event.
I remember things that happened to other people and I can picture them happening to me. But I know that it did not happen to me. My body was not there. Those memories started as words in a story being told to me and they carry that lack of first-hand involvement. What happens with an experience that has neither our own body’s involvement nor someone else’s body? Perhaps it is – no identifiable agent – no memory.
Kumar et al. report their findings from an unusual opportunity that presented itself when a retired London schoolteacher, Sylvia, reported to her doctors that she was increasingly hearing music, as if it were completely real, in the absence of any source for the music. (People with musical hallucinations usually are psychologically normal — except for the music they are sure someone is playing.) Sylvia volunteered for a study by Kumar et al. that made use of the fact that real music can sometimes quiet the imaginary music, in effect masking the musical hallucination. Playing Bach for 30 seconds was used to damp down the hallucinations while the teacher’s brain activity was monitored by MEG (magnetic recordings), and when the real music stopped the teacher reported the strength of the hallucinations as they returned. The brain regions becoming more active as the hallucinations returned were the same as those activated by listening to real music. From Zimmer’s review of this work, a suggested model for what is happening:
Our brains… generate predictions about what is going to happen next, using past experiences as a guide. When we hear a sound, for example — particularly music — our brains guess at what it is and predict what it will sound like in the next instant. If the prediction is wrong — if we mistook a teakettle for an opera singer — our brains quickly recognize that we are hearing something else and make a new prediction to minimize the error… People with musical hallucinations often have at least some hearing loss. Sylvia, for example, needed hearing aids after getting a viral infection two decades ago.
The model of our brain as a prediction-generating machine
…could explain why some people with hearing loss develop musical hallucinations. With fewer auditory signals entering the brain, their error detection becomes weaker. If the music-processing brain regions make faulty predictions, those predictions only grow stronger until they feel like reality.
…could explain why real music provides temporary relief for musical hallucinations: the incoming sounds reveal the brain’s prediction errors. And it may also explain why people are prone to hallucinate music, and not other familiar sounds.
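The prediction-error account can be given a minimal numerical sketch. This is my own toy model with invented gains, not Kumar et al.'s: the brain's estimate is corrected toward incoming sound by a gain that stands in for hearing quality, and when that gain is low, an internally generated prediction barely gets corrected:

```python
# Toy sketch of predictive coding with weakened sensory input (all
# parameters invented): the estimate is nudged toward the actual input
# by a gain representing hearing quality. With hearing loss the gain is
# small, errors barely correct the estimate, and an internally generated
# "melody" prediction persists as if real.

def settle(prediction, actual_input, gain, steps=50):
    for _ in range(steps):
        error = actual_input - prediction
        prediction += gain * error     # error correction, scaled by hearing
    return prediction

silence = 0.0
internal_melody = 1.0    # arbitrary units of "predicted music"

healthy = settle(internal_melody, silence, gain=0.5)
impaired = settle(internal_melody, silence, gain=0.01)

print(healthy)   # collapses toward the real silence: prediction corrected
print(impaired)  # barely corrected: the hallucinated melody persists
```

The same mechanics suggest why real music helps: restoring a strong input signal makes the error term large and informative again, pulling the runaway prediction back into line.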
Here is the Kumar et al. abstract:
The physiological basis for musical hallucinations (MH) is not understood. One obstacle to understanding has been the lack of a method to manipulate the intensity of hallucination during the course of experiment. Residual inhibition, transient suppression of a phantom percept after the offset of a masking stimulus, has been used in the study of tinnitus. We report here a human subject whose MH were residually inhibited by short periods of music. Magnetoencephalography (MEG) allowed us to examine variation in the underlying oscillatory brain activity in different states. Source-space analysis capable of single-subject inference defined left-lateralised power increases, associated with stronger hallucinations, in the gamma band in left anterior superior temporal gyrus, and in the beta band in motor cortex and posteromedial cortex. The data indicate that these areas form a crucial network in the generation of MH, and are consistent with a model in which MH are generated by persistent reciprocal communication in a predictive coding hierarchy.
Judson Brewer and collaborators performed fMRI scans of experienced practitioners of loving kindness meditation, which fosters feelings of selfless love for others. Their abstract (below) notes their observations, but doesn’t emphasize one of their more interesting findings: that the tranquility of selfless love without expectation of reward lowers activation of the areas activated by romantic love, which are the same reward areas activated by cocaine.
Loving kindness is a form of meditation involving directed well-wishing, typically supported by the silent repetition of phrases such as “may all beings be happy,” to foster a feeling of selfless love. Here we used functional magnetic resonance imaging to assess the neural substrate of loving kindness meditation in experienced meditators and novices. We first assessed group differences in blood oxygen level-dependent (BOLD) signal during loving kindness meditation. We next used a relatively novel approach, the intrinsic connectivity distribution of functional connectivity, to identify regions that differ in intrinsic connectivity between groups, and then used a data-driven approach to seed-based connectivity analysis to identify which connections differ between groups. Our findings suggest group differences in brain regions involved in self-related processing and mind wandering, emotional processing, inner speech, and memory. Meditators showed overall reduced BOLD signal and intrinsic connectivity during loving kindness as compared to novices, more specifically in the posterior cingulate cortex/precuneus (PCC/PCu), a finding that is consistent with our prior work and other recent neuroimaging studies of meditation. Furthermore, meditators showed greater functional connectivity during loving kindness between the PCC/PCu and the left inferior frontal gyrus, whereas novices showed greater functional connectivity during loving kindness between the PCC/PCu and other cortical midline regions of the default mode network, the bilateral posterior insula lobe, and the bilateral parahippocampus/hippocampus. These novel findings suggest that loving kindness meditation involves a present-centered, selfless focus for meditators as compared to novices.
Nearly 10 years ago, Vanderbilt University cognitive neuroscientist Randolph Blake and his postdoc Duje Tadin needed to give their study participants the experience of complete darkness. They were testing their new transcranial magnetic stimulator (TMS) and developing protocols for a series of experiments involving the generation of phosphenes—light experienced by subjects when there is none. So the researchers ordered high-end blindfolds, designed to block all light from reaching the eyes.
When the blindfolds arrived, Blake tried one out. “I can’t remember what prompted me to do it, but on a lark, I put them on myself first and waved my hand in front of my eyes,” he recalls, “and had this faint sense that I could see my hand moving.”
Tadin then tried it and had the same experience. The two replicated the mini experiment in the TMS lab, a small, dark room on the sixth floor. And again, both researchers could just barely see their hands through the blindfolds. “You could see this faint shadow, this faint impression of something moving back and forth in rhythm with your motions,” Blake says. But, when Blake waved his hand in front of Tadin’s blindfolded face, Tadin saw nothing. “That got us excited,” Blake says.
The duo traipsed around the building eagerly blindfolding their colleagues and asking them to report what they saw. “About half reported seeing something,” Blake says.
To test what was happening, however, the researchers knew they needed to come up with a better way to characterize what people were actually seeing. “What we discovered was an inherently subjective experience,” says Tadin. “There’s no easy way to ascertain that I’m telling the truth.” Unable to think of a reasonable way to measure the phenomenon, they set the project aside.
A few years later, running his own lab at the University of Rochester, Tadin told the story to graduate student Kevin Dieter, who encouraged Tadin to give the project another shot. They devised a conservative experimental setup in which they attempted to control the subjects’ expectations: they told study participants that one blindfold had little, imperceptible holes that might allow them to see through, while another blindfold would successfully keep out all light. (Both blindfolds were, in fact, totally lightproof.) A subject’s experience with the first blindfold could then guide his expectations for a second trial using the other blindfold. Specifically, if he had seen something with the first blindfold, he would certainly not expect to see anything with the second one. But even under these conditions, nearly 50 percent of subjects reported having at least a “visual sensation of motion” while wearing the second blindfold.
The results hold “implications for how our different sensory systems work together,” Tadin says. Dealing only with subjective reports, however, still made him uneasy. So he turned to an eye-tracking device—used without the blindfolds but in complete darkness—to detect the movement of subjects’ eyes as they viewed the hand they reported seeing. People cannot move their eyes smoothly unless they have a visual target to lock on to, Tadin explains. If they just thought they saw their hand, jerky eye movements should reveal the truth.
To Tadin’s amazement, the eye movements suggested that the visual perception was indeed real: people who reported seeing their hands moving in the dark exhibited eye movements that were twice as smooth as those of subjects who reported seeing nothing (Psychological Science, doi:10.1177/0956797613497968, 2013).
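The logic of that eye-tracking test can be sketched with a crude smoothness score. All thresholds and data below are invented for illustration, not from the paper: smooth pursuit shows steady low velocities, while jerky tracking contains saccades, brief high-velocity jumps.

```python
# Illustrative sketch (invented threshold and data): score smoothness as
# the fraction of sample-to-sample velocities that stay below a saccade
# cutoff. Smooth pursuit keeps all samples below it; a saccadic jump
# produces at least one velocity spike far above it.

SACCADE_THRESHOLD = 30.0   # deg/s, an assumed saccade cutoff

def smoothness(positions, dt):
    velocities = [abs(b - a) / dt for a, b in zip(positions, positions[1:])]
    slow = sum(v < SACCADE_THRESHOLD for v in velocities)
    return slow / len(velocities)

dt = 0.01                                   # 100 Hz sampling
pursuit = [0.1 * i for i in range(20)]      # steady drift at 10 deg/s
jerky = [0.0] * 10 + [5.0] * 10             # one abrupt saccadic jump

print(smoothness(pursuit, dt))  # 1.0: every velocity below the cutoff
print(smoothness(jerky, dt))    # below 1.0: the jump breaks smoothness
```

Real analyses are more sophisticated, but the principle is the same: since smooth pursuit requires a visual target, smoothness in total darkness is evidence that subjects genuinely perceived something.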
Interestingly, people with synesthesia—who often see letters of the alphabet, numbers, or days of the week in specific colors, or associate particular sounds with visual stimuli—tended to score higher on Tadin’s blindfold experiment in terms of how much they saw. “They were literally off the chart,” he says. One synesthete produced such smooth eye movements that Tadin at first thought the data were erroneous: “Her smooth eye movements were almost perfect.” Research has suggested that synesthetes exhibit higher levels of cross-brain connectivity, which may play a role in the generation of the visual perception as a result of the kinesthetic input.
Regardless of the underlying neural mechanism, Tadin suspects that there are likely other examples of how the senses blend together—in synesthetes and in people with normal sensory experiences. In 2005, for example, when Norimichi Kitagawa at NTT Communication Science Laboratories in Japan and his colleagues recorded the sounds generated inside the ear of a dummy head by brushing the outside of the ear with a paintbrush, then played those sounds to participants who received no ear strokes, many reported feeling a tickling sensation (Japanese Journal of Psychonomic Science, 24:121-22, 2005). “This phenomenon [of ‘seeing’ one’s own movements] may be just the tip of the iceberg,” Tadin says.
Milner (see citation below) reviews the evidence that visual-motor control is not conscious.
Visual perception starts at the back of the brain, in the occipital lobe, and moves forward in the cortex as processing proceeds. There are two tracks along which visual processing proceeds, called the dorsal stream and the ventral stream. The two streams have few interconnections. The dorsal stream runs from the primary visual cortex to the superior occipito-parietal cortex near the top of the head. The ventral stream runs from the primary visual cortex to the inferior occipito-temporal cortex at the side of the head. Their functions, as far as is known, differ. “The dorsal stream’s principal role is to provide real-time ‘bottom-up’ visual guidance of our movements online. In contrast, the ventral stream, in conjunction with top-down information from visual and semantic memory, provides perceptual representations that can serve recognition, visual thought, planning and memory offline… we have proposed that the visual products of dorsal stream processing are not available to conscious awareness—that they exist only as evanescent raw materials to provide the unconscious moment-to-moment sensory calibration of our movements.”
The researchers used three methods in their studies: patients with lesions in their visual system, patients suffering from visual extinction, and fMRI experiments.
One patient had part of their ventral stream destroyed – they could reach for and grasp objects that they were not conscious of. The opposite was true of other patients with damage to their dorsal streams – they had difficulty grasping objects that they were consciously aware of.
Visual extinction is a form of spatial neglect. The patient fails to detect a stimulus presented on the side of space opposite the brain damage when and only when there is simultaneously a stimulus on the good side. In a carefully arranged experimental setup, a patient with visual extinction took account of an obstacle that they were not conscious of when reaching for an object. Avoiding an obstacle depends on the dorsal stream, because patients with damage to the dorsal stream did not adjust their reaching movements in the presence of obstacles.
There is visual feedback during reaching. “Under normal viewing conditions, the brain continuously registers the visual locations of both the reaching hand and the target, incorporating these two visual elements within a single ‘loop’ that operates like a servomechanism to progressively reduce their mutual separation in space (the ‘error signal’) as the movement unfolds. When the need to use such visual feedback is increased by the occasional introduction of unnoticed perturbations in the location of the target during the course of a reach, a healthy subject will make the necessary adjustments to the parameters of his or her movement quite seamlessly… In contrast, a patient with damage to the dorsal stream was quite unable to take such target changes on board: she first had to complete the reach towards the original location, before then making a post hoc switch to the new target location… It thus seems very likely that the ability to exploit the error signal between hand and target during reaching is dependent on the integrity of the dorsal stream.”
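The servomechanism in this quote can be caricatured in a few lines of code — a toy sketch of my own, not the authors’ model, with made-up numbers — in which a one-dimensional ‘hand’ re-reads the target every cycle and shrinks the error signal by a fixed fraction, absorbing a mid-reach target jump as seamlessly as a healthy dorsal stream does:

```python
# Toy proportional controller for the hand-target 'error signal' loop.
# Illustrative only: the gain, positions, and step count are invented.

def reach(target_jumps, steps=20, gain=0.4):
    """Move a 1-D 'hand' toward a target, re-reading the target each cycle.

    target_jumps maps a step index to a new target position, mimicking
    an unnoticed mid-reach perturbation of the target's location.
    """
    hand, target = 0.0, 10.0
    for step in range(steps):
        target = target_jumps.get(step, target)  # visual re-registration
        error = target - hand                    # the 'error signal'
        hand += gain * error                     # shrink it a little each cycle
    return hand

# An undisturbed reach homes in on the target at 10.0; a target jump to
# 14.0 at step 10 is absorbed mid-movement, with no separate second reach.
print(reach({}), reach({10: 14.0}))
```

A lesioned ‘dorsal stream’ would correspond to freezing `target` at its initial value until the first reach completes — the post hoc switch the dorsal-damaged patient had to make.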
The phenomenon of binocular rivalry—in which different images are projected to the two retinas and the subject is alternately conscious of one image or the other—has been studied with fMRI. It is possible to tell which image is conscious from the activity in the ventral stream. But the dorsal stream is able to act on information even if it is not being processed by the ventral stream and is therefore not consciously available.
The authors do point out that they are not saying that the dorsal stream plays no role in conscious perception. It may for example have some control over attention.
In the conclusion, they say “according to the model, such ventral-stream processing plays no causal role in the real-time visual guidance of the action, despite our strong intuitive inclination to believe otherwise (what Clark calls ‘the assumption of experienced-based control’). According to the Milner & Goodale model, that real-time guidance is provided through continuous visual monitoring by the dorsal stream of those very same visual inputs that we experience by courtesy of our ventral stream. ”
A.D. Milner (2012). Is visual processing in the dorsal stream accessible to consciousness? Proc R Soc B, 2289-2298 DOI: 10.1098/rspb.2011.2663
Musical hallucinations are most commonly found in people who have suffered hearing loss or deafness, but why they happen is unknown. In a new paper in Cortex, British neuroscientists Kumar et al claim to have found “A brain basis for musical hallucinations”.
Using magnetoencephalography (MEG), the authors investigated brain activity in a single patient, a 66 year old woman who had been hearing phantom ‘piano melodies’ for almost two years, ever since she suddenly became partly deaf. She was an amateur keyboard player, and was able to write down the tunes she ‘heard’. The same melody – sometimes a real tune, sometimes ‘made up’ – would repeat for hours at a time, and it could get annoying. However, she had discovered that listening to certain pieces of real music provided temporary relief; the hallucinations would stop during the piece, and only restart after a lag of several seconds.
Kumar et al made use of this fact to compare brain activity when hallucinations were ‘on’ and ‘off’ – they recorded MEG data before and after playing 15 seconds of Bach, one of the hallucination-blocking composers. Immediately after each Bach burst, hallucinations were low, while 60 seconds later they had returned.
Clever… however, there’s a serious problem with this procedure: it can’t separate the effects of hallucinations from the effects of stopping listening to real music, nor from the expectation of upcoming real music (the timing of which was predictable).
The obvious solution would have been to also include bursts of music that didn’t block hallucinations, as a control condition – the patient herself reported that some pieces had no effect. This would dramatically increase the inferences one could draw from the data. Some MEG data from healthy control participants hearing the same music would also help to establish specificity. This limitation isn’t acknowledged.
Anyway, Kumar et al report increased gamma band activity in the left aSTG area, part of the auditory cortex. They say that
The area that shows higher activity during musical hallucination coincides with an area implicated in the normal perception of melody using fMRI.
However, strangely, the actual Bach music did not produce significant changes in activity in this area, or anywhere else in the brain. Only imaginary music caused real brain waves; Kumar et al say that this has been seen in other studies.
Some other changes in the beta frequency band were found in the motor cortex and posteromedial cortex/precuneus. Neither of these is thought of as a ‘music area’. To be honest, I don’t think these results shed much light on the phenomenon.
The second half of the paper is rather different, providing a theoretical overview of musical hallucination. This section could almost be a paper in itself. The authors argue that
Our hypothesis is that peripheral hearing loss reduces the signal-to-noise ratio of incoming auditory stimuli and the brain responds by decreasing sensory precision or post-synaptic gain…
A recurrent loop of communication is thus established which is no longer informed, or entrained, by precise bottom-up sensory prediction errors… it is constrained only by a need to preserve the internal consistency between hierarchical representations of music.
This reciprocal communication between an area in music perception and area/s involved in higher music cognition (motor cortex and precuneus) with no constraint from the sensory input gives rise to musical hallucinations.
Kumar S, Sedley W, Barnes GR, Teki S, Friston KJ, & Griffiths TD (2013). A brain basis for musical hallucinations. Cortex PMID: 24445167
Juvenile mantis shrimp. Image credit: Roy L. Caldwell
Mantis shrimp have a type of vision unlike any other animal on the planet—that much was known. But now scientists have determined, at a cellular level, how it is that these foot-long crustaceans see the world. And it stems from their unique photoreceptors.
In general, photoreceptors absorb light and convert it into electrical signals, which are then sent to the brain for interpretation. Each kind of photoreceptor is most sensitive to a particular band of wavelengths, which the brain translates into a color. Your dog has two kinds of photoreceptors: blue and green. You have three: blue, green and red. Our eyes can see these colors and every combination or variation thereof.
Scientists say that in order to see every color under the sun, an animal needs four to seven different types of photoreceptors. Why, then, does the mantis shrimp have a whopping 12 different kinds of photoreceptors in its eyes?
The researchers say it’s because mantis shrimp photoreceptors work in a unique way, completely unlike our own.
Researchers reached this conclusion after playing a reward game with mantis shrimp. They would shine two different colored lights simultaneously at the shrimp. Pinching their claws at the source of one color, let’s say yellow, would result in the shrimp getting a treat. Choosing the blue one meant no treat. This drill was repeated until the shrimp learned to pick the yellow light and were able to do so consistently. Then the researchers began to switch the other colors up, making the shrimp choose between red and yellow, then orange and yellow, etc.
Surprisingly, when the colors got too close to one another (e.g. yellow and a shade of orange akin to macaroni and cheese), the shrimp couldn’t tell them apart. Even with their 12 kinds of photoreceptors, the shrimp could only distinguish colors on the light spectrum that were at least 25 nanometers apart. By comparison, humans, with a measly three kinds of photoreceptors, can distinguish colors separated by as little as one nanometer.
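As a back-of-envelope illustration (my own, with deliberately crude numbers — real wavelength discrimination varies across the spectrum), those discrimination thresholds imply very different hue counts over a roughly 300 nm visible band:

```python
# Crude hue count from a just-noticeable difference (JND) in wavelength.
# Assumes a flat JND across a 400-700 nm band, which is a simplification.

VISIBLE_BAND_NM = 700 - 400  # ~300 nm of visible spectrum

def distinguishable_hues(jnd_nm):
    """Number of spectral bins separable at the given JND."""
    return VISIBLE_BAND_NM // jnd_nm

print(distinguishable_hues(25))  # mantis shrimp: 12 coarse bins
print(distinguishable_hues(1))   # humans: 300 finely graded hues
```

It is suggestive that a 25 nm threshold yields about a dozen bins — one per photoreceptor type — which fits the picture below of receptors acting as fixed-sensitivity detectors rather than inputs to fine brain-side comparison.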
From this the researchers deduced that the mantis shrimp’s whole visual system operates differently than our own. As they describe in their paper published in Science today, further investigation showed that the shrimp don’t take the time to send visual information to the brain and wait for it to distinguish between subtle color differences, like we do. The shrimp just skip over this step altogether.
Each of the shrimp’s 12 photoreceptors is essentially set to a different sensitivity. Their eyes scan a scene and instantly recognize when something falls into, say, a reddish category, without having to ask the brain whether it’s seeing brick red or scarlet. In the colorful and fast-paced circus of its coral reef home, avoiding that little bit of processing delay could be the difference between life and death, even for a foot-long crustacean.
The same goes for their dinner. This unique kind of vision could be a mantis shrimp’s hidden weapon in capturing prey—well, alongside its ability to swing its front claws at the speed of a .22 caliber bullet to bludgeon, spear or dismember an unwitting victim.
If you want to gaze deep into these crazy eyes (not to mention witness a mantis shrimp’s eye-grooming techniques!), check out this video. Dramatic mood music included.
Olaf Blanke (whose work on projecting ourselves outside our bodies I’ve mentioned previously) and collaborators extend their studies on body perception and self consciousness to show that signals from both the inside and the outside of the body are fundamental in determining our self consciousness:
Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants’ (projected, “virtual”) bodies and flashed in synchrony with their heartbeats. We investigated whether these “cardio-visual” signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants’ backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.
Experimental setup for the body conditions. Participants (a) stood with their backs facing a video camera placed 200 cm behind them (b). The video showing the participant’s body (his or her “virtual body”) was projected in real time onto a head-mounted display. An electrocardiogram was recorded, and R peaks were detected in real time (c), triggering a flashing silhouette outlining the participant’s virtual body (d). The display made it appear as though the virtual body was standing 200 cm in front of the participant (e). After each block, participants were passively displaced 150 cm backward, toward the camera, and were instructed to walk back to the original position.
(Image caption: A daydreaming brain: the yellow areas depict the default mode network from three different perspectives; the coloured fibres show the connections amongst each other and with the remainder of the brain.)
The structure of the human brain is complex, reminiscent of a circuit diagram with countless connections. But what role does this architecture play in the functioning of the brain? To answer this question, researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and University Hospital Freiburg, have for the first time analysed 1.6 billion connections within the brain simultaneously. They found the highest agreement between structure and information flow in the “default mode network,” which is responsible for inward-focused thinking such as daydreaming.
Everybody’s been there: You’re sitting at your desk, staring out the window, your thoughts wandering. Instead of getting on with what you’re supposed to be doing, you start mentally planning your next holiday or find yourself lost in a thought or a memory. It’s only later that you realize what has happened: Your brain has simply “changed channels”—and switched to autopilot.
For some time now, experts have been interested in the competition among different networks of the brain, which are able to suppress one another’s activity. If one of these approximately 20 networks is active, the others remain more or less silent. So if you’re thinking about your next holiday, it is almost impossible to follow the content of a text at the same time.
To find out how the anatomical structure of the brain impacts its functional networks, a team of researchers at the Max Planck Institute for Human Development in Berlin, in cooperation with colleagues at the Free University of Berlin and the University Hospital Freiburg, has analysed the connections between a total of 40,000 tiny areas of the brain. Using functional magnetic resonance imaging, they examined a total of 1.6 billion possible anatomical connections between these regions in 19 participants aged between 21 and 31 years. The research team compared these connections with the brain signals actually generated by the nerve cells.
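The 1.6 billion figure is simply the square of the parcel count: with 40,000 regions, every ordered pair of regions (including self-pairs) is a candidate connection — a trivial sanity check of the arithmetic, not anything from the study itself:

```python
# Where '1.6 billion possible connections' comes from: the number of
# ordered region pairs in a 40,000-region parcellation of the brain.

n_regions = 40_000
possible_connections = n_regions ** 2  # ordered pairs, self-pairs included

print(possible_connections)  # 1600000000, i.e. 1.6 billion
```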
Their results showed the highest agreement between brain structure and brain function in areas forming part of the “default mode network“, which is associated with daydreaming, imagination, and self-referential thought. “In comparison to other networks, the default mode network uses the most direct anatomical connections. We think that neuronal activity is automatically directed to level off at this network whenever there are no external influences on the brain,” says Andreas Horn, lead author of the study and researcher in the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.
Living up to its name, the default mode network seems to become active in the absence of external influences. In other words, the anatomical structure of the brain seems to have a built-in autopilot setting. It should not, however, be confused with an idle state. On the contrary, daydreaming, imagination, and self-referential thought are complex tasks for the brain.
“Our findings suggest that the structural architecture of the brain ensures that it automatically switches to something useful when it is not being used for other activities,” says Andreas Horn. “But the brain only stays on autopilot until an external stimulus causes activity in another network, putting an end to the daydreaming. A buzzing fly, a loud bang in the distance, or focused concentration on a text, for example.”
The researchers hope that their findings will contribute to a better understanding of brain functioning in healthy people, but also of neurodegenerative disorders such as Alzheimer’s disease and psychiatric conditions such as schizophrenia. In follow-up studies, the research team will compare the brain structures of patients with neurological disorders with those of healthy controls.