People who are aware they are asleep when they are dreaming have better-than-average problem-solving abilities, new research has discovered.
Experts from the University of Lincoln, UK, say that those who experience ‘lucid dreaming’ – a phenomenon in which someone who is asleep can recognise that they are dreaming – can solve problems in the waking world better than those who remain unaware of the dream until they wake up.
The concept of lucid dreaming was explored in the 2010 film Inception, where the dreamers were able to spot incongruities within their dream. It is thought some people are able to do this because of a higher level of insight, meaning their brains detect they are in a dream because events would not make sense otherwise. This cognitive ability translates to the waking world when it comes to finding the solution to a problem by spotting hidden connections or inconsistencies, researchers say.
The research was carried out by Dr Patrick Bourke, Senior Lecturer at the Lincoln School of Psychology and his student Hannah Shaw. It is the first empirical study demonstrating the relationship between lucid dreaming and insight.
He said: “It is believed that for dreamers to become lucid while asleep, they must see past the overwhelming reality of their dream state, and recognise that they are dreaming.
“The same cognitive ability was found to be demonstrated while awake by a person’s ability to think in a different way when it comes to solving problems.”
The study examined 68 participants aged between 18 and 25 who had experienced different levels of lucid dreaming, from never to several times a month. They were asked to solve 30 problems designed to test insight. Each problem consisted of three words and a solution word.
Each of the three words could be combined with the solution word to create a new compound word.
For example, with the words ‘sand’, ‘mile’ and ‘age’, the linking word would be ‘stone’.
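These compound remote associates puzzles are simple enough to check mechanically. The sketch below illustrates the idea, assuming a toy dictionary of compound words; the word list and function name are illustrative, not from the study.

```python
# Toy dictionary of compound words; a real test battery would use a large
# dictionary and would also handle two-word compounds like "stone age"
# (treated here as "stoneage" for simplicity). Entries are hypothetical.
DICTIONARY = {"sandstone", "milestone", "stoneage", "stonewall"}

def solves_puzzle(cue_words, candidate):
    """Return True if the candidate joins with every cue word, as either
    prefix or suffix, to form a compound found in the dictionary."""
    for cue in cue_words:
        if (cue + candidate) not in DICTIONARY and (candidate + cue) not in DICTIONARY:
            return False
    return True

print(solves_puzzle(["sand", "mile", "age"], "stone"))  # True
print(solves_puzzle(["sand", "mile", "age"], "wall"))   # False
```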
Results showed that frequent lucid dreamers solved 25 per cent more of the insight problems than the non-lucid dreamers.
Miss Shaw, who conducted the research as part of her undergraduate dissertation, said the ability to experience lucid dreams can be learned. “We aren’t entirely sure why some people are naturally better at lucid dreaming than others, although it is a skill which can be taught,” she said.
“For example, you can get into the habit of asking yourself ‘Is this a dream?’ If you do this during the day when you are awake and make it a habit, it can transfer to when you are in a dream.”
The cause of neuronal death in Parkinson’s disease is still unknown, but a new study proposes that neurons may be mistaken for foreign invaders and killed by the person’s own immune system, similar to the way autoimmune diseases like type I diabetes, celiac disease, and multiple sclerosis attack the body’s cells. The study was published April 16, 2014, in Nature Communications.
(Image caption: Four images of a neuron from a human brain show that neurons produce a protein (in red) that can direct an immune attack against the neuron (green). Credit: Carolina Cebrian.)
“This is a new, and likely controversial, idea in Parkinson’s disease; but if true, it could lead to new ways to prevent neuronal death in Parkinson’s that resemble treatments for autoimmune diseases,” said the study’s senior author, David Sulzer, PhD, professor of neurobiology in the departments of psychiatry, neurology, and pharmacology at Columbia University College of Physicians & Surgeons.
The new hypothesis about Parkinson’s emerges from other findings in the study that overturn a deep-seated assumption about neurons and the immune system.
For decades, neurobiologists have thought that neurons are protected from attacks from the immune system, in part, because they do not display antigens on their cell surfaces. Most cells, if infected by virus or bacteria, will display bits of the microbe (antigens) on their outer surface. When the immune system recognizes the foreign antigens, T cells attack and kill the cells. Because scientists thought that neurons did not display antigens, they also thought that the neurons were exempt from T-cell attacks.
“That idea made sense because, except in rare circumstances, our brains cannot make new neurons to replenish ones killed by the immune system,” Dr. Sulzer says. “But, unexpectedly, we found that some types of neurons can display antigens.”
Cells display antigens with special proteins called MHCs. Using postmortem brain tissue donated to the Columbia Brain Bank by healthy donors, Dr. Sulzer and his postdoc Carolina Cebrián, PhD, first noticed—to their surprise—that MHC-1 proteins were present in two types of neurons. These two types of neurons—one of which is dopamine neurons in a brain region called the substantia nigra—degenerate during Parkinson’s disease.
To see if living neurons use MHC-1 to display antigens (and not for some other purpose), Drs. Sulzer and Cebrián conducted in vitro experiments with mouse neurons and human neurons created from embryonic stem cells. The studies showed that under certain circumstances—including conditions known to occur in Parkinson’s—the neurons use MHC-1 to display antigens. Among the different types of neurons tested, the two types affected in Parkinson’s were far more responsive than other neurons to signals that triggered antigen display.
The researchers then confirmed that T cells recognized and attacked neurons displaying specific antigens.
The results raise the possibility that Parkinson’s is partly an autoimmune disease, Dr. Sulzer says, but more research is needed to confirm the idea.
“Right now, we’ve showed that certain neurons display antigens and that T cells can recognize these antigens and kill neurons,” Dr. Sulzer says, “but we still need to determine whether this is actually happening in people. We need to show that there are certain T cells in Parkinson’s patients that can attack their neurons.”
If the immune system does kill neurons in Parkinson’s disease, Dr. Sulzer cautions that it is not the only thing going awry in the disease. “This idea may explain the final step,” he says. “We don’t know if preventing the death of neurons at this point will leave people with sick cells and no change in their symptoms, or not.”
Although choosing to do something because the perceived benefit outweighs the financial cost is something people do daily, little is known about what happens in the brain when a person makes these kinds of decisions. Studying how these cost-benefit decisions are made when choosing to consume alcohol, University of Georgia associate professor of psychology James MacKillop identified distinct profiles of brain activity that are present when making these decisions.
"We were interested in understanding how the brain makes decisions about drinking alcohol. Particularly, we wanted to clarify how the brain weighs the pros and cons of drinking," said MacKillop, who directs the Experimental and Clinical Psychopharmacology Laboratory in the UGA Franklin College of Arts and Sciences.
The study combined functional magnetic resonance imaging and a bar laboratory alcohol procedure to see how the cost of alcohol affected people’s preferences. The study group included 24 men, age 21-31, who were heavy drinkers. Participants were given a $15 bar tab and then were asked to make decisions in the fMRI scanner about how many drinks they would choose at varying prices, from very low to very high. Their choices translated into real drinks, at most eight that they received in the bar immediately after the scan. Any money not spent on drinks was theirs to keep.
The study applied a neuroeconomic approach, which integrates concepts and methods from psychology, economics and cognitive neuroscience to understand how the brain makes decisions. In this study, participants’ cost-benefit decisions were categorized into those in which drinking was perceived to have all benefit and no cost, to have both benefits and costs, and to have all costs and no benefits. In doing so, MacKillop could dissect the neural mechanisms responsible for different types of cost-benefit decision-making.
"We tried to span several levels of analysis, to think about clinical questions, like why do people choose to drink or not drink alcohol, and then unpack those choices into the underlying units of the brain that are involved," he said.
When participants decided to drink in general, activation was seen in several areas of the cerebral cortex, such as the prefrontal and parietal cortices. However, when the decision to drink was affected by the cost of alcohol, activation involved frontostriatal regions, which are important for the interplay between deliberation and reward value, suggesting suppression resulting from greater cognitive load. This is the first study of its kind to examine cost-benefit decision-making for alcohol and was the first to apply a framework from economics, called demand curve analysis, to understanding cost-benefit decision making.
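The demand curve analysis mentioned here is a standard tool in behavioral economics. One widely used form is the exponential demand equation of Hursh and Silberberg (2008); the sketch below uses that equation with made-up parameter values, purely to illustrate how consumption falls as price rises. It is not fitted to the study's data.

```python
import math

def demand(price, q0=8.0, alpha=0.005, k=2.0):
    """Exponential demand curve (Hursh & Silberberg, 2008):
    log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * C) - 1)
    Q0 = consumption at zero price, alpha = price sensitivity,
    k = range constant. Values here are illustrative only."""
    log_q = math.log10(q0) + k * (math.exp(-alpha * q0 * price) - 1.0)
    return 10 ** log_q

# Consumption declines smoothly as the per-drink price climbs.
for price in (0.0, 1.0, 5.0, 10.0):
    print(f"${price:>5.2f} per drink -> {demand(price):.2f} drinks")
```

Fitting alpha to each participant's choices gives a behavioral index of how strongly cost suppresses consumption, the kind of measure the study relates to brain activity.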
"The brain activity was most differentially active during the suppressed consumption choices, suggesting that participants were experiencing the most conflict," MacKillop said. "We had speculated during the design of the study that the choices not to drink at all might require the most cognitive effort, but that didn’t seem to be the case. Once people decided that the cost of drinking was too high, they didn’t appear to experience a great deal of conflict in terms of the associated brain activity."
These conflicted decisions appeared to be represented by activity in the anterior insula, which has been linked in previous addiction studies to the motivational circuitry of the brain. Beyond encoding how much people crave or value drugs, this portion of the brain is believed to process interoceptive experiences, a person’s visceral physiological responses.
"It was interesting that the insula was sensitive to escalating alcohol costs especially when the costs of drinking outweighed the benefits," MacKillop said. "That means this could be the region of the brain at the intersection of how our rational and irrational systems work with one another. In general, we saw the choices associated with differential brain activity were those choices in the middle, where people were making choices that reflect the ambivalence between cost and benefits. Where we saw that tension, we saw the most brain activity."
While MacKillop acknowledges the impact this research could have on neuromarketing (understanding how the brain makes decisions about what to buy), he is more interested in how this research can help people with alcohol addictions.
"These findings reveal the distinct neural signatures associated with different kinds of consumption preferences. Now that we have established a way of studying these choices, we can apply this approach to better understanding substance use disorders and improving treatment," he said, adding that comparing fMRI scans from alcoholics with those of people with normal drinking habits could potentially tease out brain patterns that show what is different between healthy and unhealthy drinkers. "In the past, we have found that behavioral indices of alcohol value predict poor treatment prognosis, but this would permit us to understand the neural basis for negative outcomes."
The research was published in the journal Neuropsychopharmacology March 3. A podcast highlighting this work is available at http://www.nature.com/multimedia/podcast/npp/npp_030314_alcohol.mp3.
Collaborating scientists at The Scripps Research Institute (TSRI), the National Institutes of Health (NIH) and the University of Camerino in Italy have published new findings on a system in the brain that naturally moderates the effects of stress. The findings confirm the importance of this stress-damping system, known as the nociceptin system, as a potential target for therapies against anxiety disorders and other stress-related conditions.
“We were able to demonstrate the ability of this nociceptin anti-stress system to prevent and even reverse some of the cellular effects of acute stress in an animal model,” said biologist Marisa Roberto, associate professor in TSRI’s addiction research department, known as the Committee on the Neurobiology of Addictive Disorders.
Roberto was a principal investigator for the study, which appears in the January 8, 2014 issue of the Journal of Neuroscience.
A Variety of Effects
Nociceptin, which is produced in the brain, belongs to the family of opioid neurotransmitters. But the resemblance essentially ends there. Nociceptin binds to its own specific receptors called NOP receptors and doesn’t bind well to other opioid receptors. The scientists who discovered it in the mid-1990s also noted that when nociceptin is injected into the brains of mice, it doesn’t kill pain—it makes pain worse.
The molecule was eventually named for this “nociceptive” (pain-producing) effect. However, subsequent studies demonstrated that, by activating its corresponding receptor NOP, nociceptin acted as an antiopioid and not only affected pain perception, but also blocked the rewarding properties of opioids such as morphine and heroin.
Perhaps of greatest interest, several studies in rodents have found evidence that nociceptin can act in the amygdala, a part of the brain that controls basic emotional responses, to counter the usual anxiety-producing effects of acute stress. There have been hints, too, that this activity occurs automatically as part of a natural stress-damping feedback response.
Scientists have wanted to know more about the anti-stress activity of the nociceptin/NOP system, in part because it might offer a better way to treat stress-related conditions. The latter are common in modern societies, including post-traumatic stress disorder as well as the drug-withdrawal stress that often defeats addicts’ efforts to kick the habit.
Reducing the Stress Reaction
For the new study, Roberto and her collaborators looked in more detail at the nociceptin/NOP system in the central amygdala.
First, Markus Heilig’s laboratory at the National Institute on Alcohol Abuse and Alcoholism (NIAAA), part of the NIH, measured the expression of NOP-coding genes in the central amygdala in rats. Heilig’s team found strong evidence that stress changes the activity of nociceptin/NOP in this region, indicating that the system does indeed work as a feedback mechanism to damp the effects of stress. In animals subjected to a standard laboratory stress condition, NOP gene activity rose sharply, as if to compensate for the elevated stress.
Roberto and her laboratory at TSRI then used a separate technique to measure the electrical activity of stress-sensitive neurons in the central amygdala. As expected, this activity rose when levels of the stress hormone CRF (corticotropin-releasing factor) rose, and it started out at even higher levels in the stressed rats. But this stress-sensitive neuronal activity could be dialed down by adding nociceptin. The stress-blocking effect was especially pronounced in the restraint-stressed rats, probably due to their stress-induced increase in NOP receptors.
Finally, biologist Roberto Ciccocioppo and his laboratory at the University of Camerino conducted a set of behavioral experiments showing that injections of nociceptin specifically into the rat central amygdala powerfully reduced anxiety-like behaviors in the stressed rats, but showed no behavioral effect in non-stressed rats.
The three sets of experiments together demonstrate, said Roberto, that “stress exposure leads to an over-activation of the nociceptin/NOP system in the central amygdala, which appears to be an adaptive feedback response designed to bring the brain back towards normalcy.”
In future studies, she and her colleagues hope to determine whether this nociceptin/NOP feedback system somehow becomes dysfunctional in chronic stress conditions. “I suspect that chronic stress induces changes in amygdala neurons that can contribute to the development of some anxiety disorders,” said Roberto.
Compounds that mimic nociceptin by activating NOP receptors—but, unlike nociceptin, could be taken in pill form—are under development by pharmaceutical companies. Some of these appear to be safe and well tolerated in lab animals and may soon be ready for initial tests in human patients, Ciccocioppo said.
This week struck me as a particularly exhausting one when it came to that certain brand of provocatively-headlined-but-probably-not-what-you-think-it-is science news that we know and love.
As usual, it’s the science media click-machine that’s to blame, which is a polite way of saying that there exists a gaping void of careful, cautious, skeptical, dare I say scientific science writing out there amidst the great internet knowledge machine. It’s desperately hard to get people to read your articles or watch your videos, but that doesn’t mean that it’s okay to disengage the gravity of reason and drift off into the aether of just-so stories.
PHD Comics has summed up this vicious form of the science news cycle very well:
It’s not all bad, of course. There are some real diamonds that we can regularly depend on to shine through amid the soiled throngs of intellectual beggars out there, and I, along with others, try to highlight their work regularly. I shall do so again here.
Here, I present two cases of “science things that were badly reported” and some links to better explanations. As usual, the defendants come from that tenuous intersection of neuroscience and behavior, because studying the brain is hard stuff, folks.
1) Mice Can Inherit Memories: No they can’t. Well, maybe they can (although I doubt it), but that’s not at all what this widely-reported paper in Nature Neuroscience says. The poor authors of that study are probably at home, drinking, wondering how, after years of hard work, their paper about how mice may pass on sensitivity to smells got so twisted. Headlines ranged from declaring this the source of human phobias to saying that Assassin’s Creed is based in real science.
What the researchers did was to condition some male mice to associate a smell (cherry blossoms) with a mild electric shock, which is mean, because that’s a nice smell! Naturally, the mice began to avoid the odor. The weird part is that their offspring, even two generations down the line, also seemed to avoid that specific cherry blossom odor, without ever encountering it before (and without their dads showing them). The dads’ noses all had more of the cells that smell that odor, as did the noses of their offspring. This did not happen with female mice and their offspring.
These kinds of things aren’t supposed to be possible in a single generation. A mouse dad shouldn’t be able to smell something, become afraid of it, and then pass on a change to his kids. That’s precisely the kind of thing that got Lamarck and his giraffe necks laughed at more than a century ago. But it is possible that these mice were transmitting some sort of epigenetic change.
But it’s not for sure. Beyond that, the way statistics are applied to mouse behavior studies makes it possible that the differences seen are just due to small sample sizes, missing controls, or some other random factor, like the humidity on a particular day making the mice jumpy. There’s also the fact that there is no known way for nerve cell changes or chemical responses within the olfactory bulb to be communicated to the testes, where sperm are made (there’s literally a blood-testis barrier to prevent that kind of thing).
Read this instead: At National Geographic, Virginia Hughes goes through the research in great detail, including comments from several people in the field who remain, shall we say, less than convinced. Extraordinary claims call for extraordinary evidence, and that’s lacking, at least in part. “More work needed” as they say!
2) Men and women’s brains are wired differently, therefore men are better at reading maps. That’s almost a verbatim headline from this news outlet. It speaks of “hardwired differences” (our brains are not hardwired) and is loaded with brainsplaining and neurosexism. This story is frustrating not so much because of the science, which is so-so, but because it is being misapplied by the media to reinforce cutesy stories about what men are good at and what women are good at and never the twain shall meet and boy is it funny how men and women argue over getting lost?! GUFFAW!
Read this instead: At Discover, Neuroskeptic explains why the spatial resolution of the techniques used is like making a road atlas from the moon using a pair of binoculars, and how the only real difference here may be that men’s brains are just slightly bigger than women’s (which doesn’t account for any noticeable difference in abilities, but can mess with scans a lot). And if you’d like a nice introduction to the idea of neurosexism and the pigeonholing of gender-based brain research into outdated social molds, might I suggest you read this article at The Conversation?
The fact is that men and women are mostly the same when it comes to their brains, but “Everyone can probably become pretty good at reading maps whether or not they are male or female, suggests common sense, not needing to be backed up by neuroscience” doesn’t make a very catchy headline.
Case Western Reserve University researchers today published findings that point to a promising discovery for the treatment and prevention of prion diseases, rare neurodegenerative disorders that are always fatal. The researchers discovered that recombinant human prion protein stops the propagation of prions, the infectious pathogens that cause the diseases.
“This is the very first time recombinant protein has been shown to inhibit diseased human prions,” said Wen-Quan Zou, MD, PhD, senior author of the study and associate professor of pathology and neurology at Case Western Reserve School of Medicine.
Recombinant human prion protein is generated in E. coli bacteria and has the same protein sequence as normal human brain protein; it differs in that it lacks the attached sugars and lipids. In the study, published online in Scientific Reports, researchers used a method called protein misfolding cyclic amplification, which mimics, in a test tube, the prions’ replication within the human brain. The propagation of human prions was completely inhibited when the recombinant protein was added to the test tube. The researchers found that the inhibition is dose-dependent and highly specific to the human form of the recombinant protein, as compared with recombinant mouse and bovine prion proteins. They demonstrated that the recombinant protein works not only in the cell-free model but also in cultured cells, the first steps of translational research. Further, since the recombinant protein has a sequence identical to the brain protein’s, its application is less likely to cause side effects.
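The article notes that the inhibition was dose-dependent. A standard way to describe such dose-response relationships is the Hill equation; the sketch below is a generic illustration with hypothetical parameters, not a model of the study's actual measurements.

```python
def inhibition(dose, ic50=1.0, hill=1.0):
    """Fraction of prion propagation inhibited at a given dose of the
    recombinant protein, modeled with a Hill (logistic dose-response)
    curve. ic50 is the dose producing half-maximal inhibition; the
    parameter values here are illustrative only."""
    return dose**hill / (ic50**hill + dose**hill)

# Inhibition rises from near zero at low doses toward saturation.
for dose in (0.1, 1.0, 10.0):
    print(f"dose {dose:>5.1f} -> {inhibition(dose):.0%} inhibited")
```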
Prion diseases are a group of fatal transmissible brain diseases affecting both humans and animals. Prions are formed through a structural change of a normal prion protein that resides in all humans. Once formed, they continue to recruit other normal prion protein and finally cause spongiform-like damage in the brain. Currently, the diseases have no cure.
Previous outbreaks of mad cow disease and subsequent occurrences of the human form, variant Creutzfeldt–Jakob disease, have garnered a great deal of public attention. The fear of future outbreaks makes the search for successful interventions all the more urgent.
It’s not visible to the naked eye and you can’t feel it, but up to 40 per cent of your body’s energy goes into supplying the microscopic sodium-potassium pump with the energy it needs. The pump is constantly doing its job in every cell of all animals and humans. It works much like a small battery which, among other things, maintains the sodium balance which is crucial to keep muscles and nerves working.
The sodium-potassium pump transports sodium out and potassium into the cell in a fixed cycle. During this process the structure of the pump changes. It is well-established that the pump has a sodium and a potassium form. But the structural differences between the two forms have remained a mystery, and researchers have been unable to explain how the pump distinguishes sodium from potassium.
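For context (the figures are textbook stoichiometry, not stated in the article), each pump cycle exports three Na+ ions and imports two K+ ions at the cost of one ATP, so every cycle also moves one net positive charge out of the cell. A trivial sketch:

```python
# Textbook stoichiometry of one sodium-potassium pump cycle:
# 3 Na+ exported, 2 K+ imported, 1 ATP hydrolyzed.
NA_OUT_PER_CYCLE = 3
K_IN_PER_CYCLE = 2
ATP_PER_CYCLE = 1

def net_charge_exported(cycles):
    """Net elementary positive charges moved out of the cell:
    each cycle exports 3 positive charges and imports 2."""
    return cycles * (NA_OUT_PER_CYCLE - K_IN_PER_CYCLE)

print(net_charge_exported(100))  # 100
```

This net charge movement is why the pump is "electrogenic": it directly contributes to the membrane potential that nerve and muscle cells depend on.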
Structure solves the mystery
Thanks to the international collaboration between Professor Chikashi Toyoshima’s group at the University of Tokyo and researchers from Aarhus University, the structure of the sodium-bound form of the protein has now been described. For the first time ever, the sodium ions can be studied at a resolution so high - 0.28 nanometres - that researchers can actually see the sodium ions and observe where they bind in the structure of the pump. In 2000, Professor Chikashi Toyoshima’s group described the structure of a calcium-pump for the first time, and in 2007 and 2009 research groups from Aarhus University and Toyoshima’s group described the potassium-bound form of the sodium-potassium pump.
"The new protein structure shows how the smaller sodium ions are bound and subsequently transported out of the cell, whereas the access of the slightly larger potassium ions is blocked. We now understand how the pump distinguishes between sodium and potassium at the molecular level. This is a great leap forward for research into ion pumps and may help us understand and treat serious neurological conditions associated with mutations of the sodium-potassium pump, including a form of Parkinsonism and alternating hemiplegia of childhood in which sodium binding is defective," explains Bente Vilsen, a professor at Aarhus University who spearheaded the project’s activities in Aarhus with Associate Professor Flemming Cornelius.
Impressed Nobel Prize winner
The vital pump was discovered in 1957 by Professor Jens Christian Skou of Aarhus University, who received the Nobel Prize for his discovery in 1997. The new result is the culmination of five or six decades of research aimed at understanding the mechanism behind this vital motor of the cells.
"Years ago, when the first electron microscopic images were taken in which the enzyme was but a millimetre-sized dot at 250,000 magnifications, I thought, how on earth will we ever be able to establish the structure of the enzyme. The pump transports potassium into and sodium out of the cells, so it must be capable of distinguishing between the two ions. But until now, it has been a mystery how this was possible," says retired Professor Jens Christian Skou, who - even at 94 years of age - keeps up to date with new developments in the field of research which he initiated more than 50 years ago.
"Now, the researchers have described the structure that allows the enzyme to identify sodium and this may pave the way for a more detailed understanding of how the pump works. It is an impressive achievement and something I haven’t even dared dream of," concludes Jens Christian Skou.
How on Earth do ballet dancers whirl around doing fouettés en tournant without falling splat on their faces? While master ballerinas can do 32 in a row, I once tried to do one and ended up looking like this:
(in my defense, I had been drinking)
One of the keys to genteel gyration is spotting: locking the eyes on a point for as much of the turn as possible. But dancers’ brains also adapt with practice, gradually desensitizing them to dizziness.
Everything we do changes our brains, from learning German to watching cat videos, but this is an adaptation that doesn’t immediately seem to have much of an analog in nature, and confuses me (although I hear the rare black swan may have a similar neurological modification).
Visit the Scicurious Brain blog at SciAm to read more about this whirly wonder of biology.
Finally, this sea lion:
Humans and other mammals show particularly intensive sleeping patterns during puberty. The brain also matures fastest in this period. But when pubescent rats are administered caffeine, the maturing processes in their brains are delayed. This is the result of a study supported by the Swiss National Science Foundation (SNSF).
Children’s and young adults’ average caffeine consumption has increased by more than 70 per cent over the past 30 years, and an end to this rise is not in sight: the drinks industry is posting its fastest-growing sales in the segment of caffeine-laden energy drinks. Not everybody is pleased about this development. Some people are worried about possible health risks caused in young consumers by the pick-me-up.
Researchers led by Reto Huber of the University Children’s Hospital Zurich are now adding new arguments to the debate. Their recently published study, conducted on rats, calls for caution: in pubescent rodents, caffeine intake equating to three to four cups of coffee per day in humans results in reduced deep sleep and delayed brain development.
Do the brains of different people listening to the same piece of music actually respond in the same way? An imaging study by Stanford University School of Medicine scientists says the answer is yes, which may in part explain why music plays such a big role in our social existence.
(Image: Anthony Ellis)
The investigators used functional magnetic resonance imaging to identify a distributed network of several brain structures whose activity levels waxed and waned in a strikingly similar pattern among study participants as they listened to classical music they’d never heard before. The results will be published online April 11 in the European Journal of Neuroscience.
“We spend a lot of time listening to music — often in groups, and often in conjunction with synchronized movement and dance,” said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study’s senior author. “Here, we’ve shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention.”
The notion that healthy subjects respond to complex sounds in the same way, Menon said, could provide novel insights into how individuals with language and speech disorders might listen to and track information differently from the rest of us.
The new study is one in a series of collaborations between Menon and co-author Daniel Levitin, PhD, a psychology professor at McGill University in Montreal, dating back to when Levitin was a visiting scholar at Stanford several years ago.
To make sure it was music, not language, that study participants’ brains would be processing, Menon’s group used music that had no lyrics. Also excluded was anything participants had heard before, in order to eliminate the confounding effects of having some participants who had heard the musical selection before while others were hearing it for the first time. Using obscure pieces of music also avoided tripping off memories such as where participants were the first time they heard the selection.
The researchers settled on complete classical symphonic musical pieces by 18th-century English composer William Boyce, known to musical cognoscenti as “the English Bach” because his late-baroque compositions in some respects resembled those of the famed German composer. Boyce’s works fit well into the canon of Western music but are little known to modern Americans.
Next, Menon’s group recruited 17 right-handed participants (nine men and eight women) between the ages of 19 and 27 with little or no musical training and no previous knowledge of Boyce’s works. (Conventional maps of brain anatomy are based on studies of right-handed people. Left-handed people’s brains tend to deviate from that map.)
While participants listened to Boyce’s music through headphones with their heads held in a fixed position inside an fMRI scanner, their brains were imaged for more than nine minutes. During this imaging session, participants also heard two types of “pseudo-musical” stimuli, each containing one attribute of music but lacking others. In one, all of the timing information in the music, including the rhythm, was obliterated, with an effect akin to a harmonized hissing sound. The other preserved the rhythmic structure of the Boyce piece but passed each tone through a mathematical algorithm that mapped it to another tone, drastically altering the melodic and harmonic content.
The team identified a hierarchical network stretching from low-level auditory relay stations in the midbrain to high-level cortical structures involved in working memory and attention, and beyond that to movement-planning areas in the cortex. These regions track structural elements of a musical stimulus over periods lasting up to several seconds, with each region processing information on its own time scale.
Activity levels in several brain regions responded similarly from one individual to the next to music, but less so, or not at all, to pseudo-music. While these structures had been individually implicated in musical processing before, those findings came from probing with artificial laboratory stimuli rather than real music, and the structures’ coordination with one another had not previously been observed.
Notably, subcortical auditory structures in the midbrain and thalamus showed significantly greater synchronization in response to musical stimuli. These structures have been thought to passively relay auditory information to higher brain centers, Menon said. “But if they were just passive relay stations, their responses to both types of pseudo-music would have been just as closely synchronized between individuals as to real music.” The study demonstrated, for the first time, that those structures’ activity levels respond preferentially to music rather than to pseudo-music, suggesting that higher-level centers in the cortex direct these relay stations to closely heed sounds that are specifically musical in nature.
The fronto-parietal cortex, which anchors high-level cognitive functions including attention and working memory, also manifested intersubject synchronization — but only in response to music and only in the right hemisphere.
Interestingly, the structures involved included the right-brain counterparts of two important structures in the brain’s left hemisphere, Broca’s and Geschwind’s areas, known to be crucial for speech and language interpretation.
“These right-hemisphere brain areas track non-linguistic stimuli such as music in the same way that the left hemisphere tracks linguistic sequences,” said Menon.
In any single individual listening to music, each cluster of music-responsive areas appeared to be tracking music on its own time scale. For example, midbrain auditory processing centers worked more or less in real time, while the right-brain analogs of the Broca’s and Geschwind’s areas appeared to chew on longer stretches of music. These structures may be necessary for holding musical phrases and passages in mind as part of making sense of a piece of music’s long-term structure.
“A novelty of our work is that we identified brain structures that track the temporal evolution of the music over extended periods of time, similar to our everyday experience of music listening,” said postdoctoral scholar Daniel Abrams, PhD, the study’s first author.
The preferential activation of motor-planning centers in response to music, compared with pseudo-music, suggests that our brains respond naturally to musical stimulation by foreshadowing movements that typically accompany music listening: clapping, dancing, marching, singing or head-bobbing. The apparently similar activation patterns among normal individuals make it more likely our movements will be socially coordinated.
“Our method can be extended to a number of research domains that involve interpersonal communication. We are particularly interested in language and social communication in autism,” Menon said. “Do children with autism listen to speech the same way as typically developing children? If not, how are they processing information differently? Which brain regions are out of sync?”
Associate Professor Neil McLachlan from the Melbourne School of Psychological Sciences said previous theories about how we appreciate music were based on the physical properties of sound, the ear itself and an innate ability to hear harmony.
“Our study shows that musical harmony can be learnt and it is a matter of training the brain to hear the sounds,” Associate Professor McLachlan said. “So if you thought that the music of some exotic culture (or Jazz) sounded like the wailing of cats, it’s simply because you haven’t learnt to listen by their rules.”
The researchers used 66 volunteers with a range of musical training and tested their ability to hear combinations of notes to determine if they found the combinations familiar or pleasing.
“What we found was that people needed to be familiar with sounds created by combinations of notes before they could hear the individual notes. If they couldn’t find the notes they found the sound dissonant or unpleasant,” he said. “This finding overturns centuries of theories that physical properties of the ear determine what we find appealing.”
Co-author on the study, Associate Professor Sarah Wilson, also from the Melbourne School of Psychological Sciences, said the study found that trained musicians were much more sensitive to dissonance than non-musicians.
“When they couldn’t find the note, the musicians reported that the sounds were unpleasant, whereas non-musicians were much less sensitive,” Assoc. Prof Wilson said. “This highlights the importance of training the brain to like particular variations of combinations of sounds like those found in jazz or rock.”
Depending on their training, a strange chord or a gong sound was accurately pitched and pleasant to some musicians, but impossible to pitch and very unpleasant to others. “This showed us that even the ability to hear a musical pitch (or note) is learnt,” Assoc. Prof Wilson said.
To confirm this finding they trained 19 non-musicians to find the pitches of a random selection of Western chords. Not only did the participants’ ability to hear the notes improve rapidly over ten short sessions, but afterward they reported that the chords they had learnt sounded more pleasant – regardless of how the chords were tuned.
The question of why some combinations of musical notes are heard as pleasant or unpleasant has long been debated. “We have shown in this study that for music, beauty is in the brain of the beholder,” Assoc. Prof McLachlan said. The study was published in the Journal of Experimental Psychology: General.
Music has been incorporated into medical practice since before the ancient Greeks. However, though practitioners have been convinced of music’s health benefits for thousands of years, there had been little peer-reviewed research to back them up. But recent studies are providing an empirical backbone for the anecdotal evidence. A 2012 scientific review, published in the journal Nutrition, collects information from a number of studies to support music’s influence on the hypothalamic-pituitary-adrenal (HPA) axis, the sympathetic nervous system (SNS) and the immune system. These results support the experiences of complementary practitioners, who have long used music to help heal.
“As an integrative physician and traditional Chinese medicine practitioner, the healing power of music has always been an important part of my practice and family life,” says integrative medicine pioneer Isaac Eliaz, M.D. “Harmony and tempo help synchronize the rhythms of the natural world with the music of the heart – each person’s individual energetic pattern, expressed in their pulse.”
The review highlighted a number of studies that confirm music’s healing potential. For example, music reduces levels of serum cortisol in the blood. An important player in the HPA axis, cortisol increases metabolic activity, suppresses the immune system and has been associated with both anxiety and depression. A number of studies have shown that exposing post-operative patients to music dramatically lowers their cortisol levels, enhancing their ability to heal.
Other studies in the review measured music’s impact on congestive heart failure, premature infants, immunity, digestive function and pain perception. In particular, music’s effects on the limbic and hypothalamic systems reduced the incidence of heart failure. Other studies showed that surgical patients required less sedation and post-operative pain medication.
“These results only confirm what I have observed for many years in my practice,” says Dr. Eliaz. “Music produces quantifiable healing. For example, my daughter Amity, a professional musician, regularly plays her songs for chronically ill patients, who express how uplifting her music is. These performances do more than encourage good feelings; they help the body heal on a molecular level.”
Perhaps the most interesting aspect of music’s healing properties is how widespread they are. For example, music also aided recovery time following strenuous exercise. Other studies showed that fast-paced music can increase resting metabolism, which may prove helpful for people trying to lose weight.
“Modern science has just begun to scratch the surface of music and sound in terms of healing potential,” says Dr. Eliaz. “However, traditional medical systems from around the world have long revered the beneficial vibrations of music, harmony and rhythm for health and vitality. The effects are instant and tangible, but they are also powerful and long lasting.”
The same brain system that controls our muscles also helps us remember music, scientists say. But the discovery might never have happened without The Beatles.
Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.
“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”
Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.
“Infants listen first to sounds of language and only later to its meaning,” Brandt said. He noted that newborns’ extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language – “the most musical aspects of speech.”
The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.
The authors define music as “creative play with sound.” They said the term “music” implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as “an intentional and often repetitive vocal performance,” Brandt said. “They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later.”
Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. “We show that music and language develop along similar time lines,” he said.
Infants initially don’t distinguish well between their native language and all the languages of the world, Brandt said. Over the first year of life, they gradually home in on their native language. Similarly, infants initially don’t distinguish well between their native musical traditions and those of other cultures; they begin to home in on their own musical culture at the same time that they home in on their native language, he said.
The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed — a feature of musical hearing that has often been overlooked, Brandt said.
“You can’t distinguish between a piano and a trumpet if you can’t process what you’re hearing at the same speed that you listen for the difference between ‘ba’ and ‘da,’” he said. “In this and many other ways, listening to music and speech overlap.” The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.
“While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child’s view,” Brandt said. “We conclude that music merits a central place in our understanding of human development.”
Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. “A lot of people with language deficits also have musical deficits,” Brandt said.
More research could also shed light on rehabilitation for people who have suffered a stroke. “Music helps them reacquire language, because that may be how they acquired language in the first place,” Brandt said.
The trigger to transition between styles in this dual-process cognition depends partly on the sufficiency principle. Generally, when making a decision, we weigh how much we know against how much we need to know to make a confident judgment about a topic. If the gap between what we know and what we need to know is small, heuristic-style thinking is more likely. Conversely, if the gap is large, we must expend more mental resources to close it, which encourages systematic thinking. This Scrooge-like mental calculus determines how deeply we process the information we are inundated with every day. And we readily recognize this game of cognitive economy, especially when browsing the web.
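The sufficiency principle described above can be thought of as a simple threshold rule. The sketch below is an illustrative toy model only, not something drawn from the research; the confidence scale and the threshold value are arbitrary assumptions made for the example.

```python
# Toy model of the sufficiency principle: compare the confidence we have
# with the confidence we need, and switch processing styles accordingly.
# The 0-to-1 confidence scale and the 0.2 threshold are illustrative
# assumptions, not values from any study.

def choose_processing_style(current_confidence: float,
                            needed_confidence: float,
                            gap_threshold: float = 0.2) -> str:
    """Return 'heuristic' when the knowledge gap is small, else 'systematic'."""
    gap = needed_confidence - current_confidence
    return "heuristic" if gap <= gap_threshold else "systematic"

# A small gap permits a quick heuristic judgment...
print(choose_processing_style(0.7, 0.8))  # heuristic
# ...while a large gap demands effortful, systematic thinking.
print(choose_processing_style(0.2, 0.9))  # systematic
```

The point of the sketch is only that the *size of the gap*, not the absolute amount of knowledge, is what triggers the switch between the two styles.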