Pain can be valuable. Without it, we might let our hand linger on a
hot stove, for example. But longer-lasting pain, such as the
inflammatory pain that can arise after injury, can be debilitating and
costly, preventing us from completing important tasks. In natural
settings, the lethargy triggered by such pain could even hinder
survival.
According to research by University of Pennsylvania
neuroscientists, the brain has a way to suppress chronic pain when an
animal is hungry, allowing it to go look for food while leaving intact
the response to acute pain. Their work pinpointed a tiny population of
300 brain cells responsible for the ability to prioritize hunger over
chronic pain, a group of neurons that may offer targets for novel pain
therapies.
“In neuroscience we’re very good about studying one behavior at a time,” says J. Nicholas Betley, an assistant professor of biology in Penn’s School of Arts and Sciences.
“My lab studies hunger, and we can find neurons that make you hungry
and manipulate those neurons and monitor their activity. But in the real
world, things aren’t that simple. You’re not in an isolated situation
where you’re only hungry. This research was to try to understand how an
animal integrates multiple needs to come to a behavioral conclusion that
is optimal.”
“We didn’t set out having this expectation that hunger would influence pain sensation so significantly,” says Amber Alhadeff,
a postdoctoral researcher, “but when we saw these behaviors unfold
before us, it made sense. If you’re an animal, it doesn’t matter if you
have an injury, you need to be able to overcome that in order to go find
the nutrients you need to survive.”
Betley’s lab has focused on studying hunger, in particular how hunger
can alter perception. Curious about how hunger may interact with the
sensation of pain, the researchers observed how mice that hadn’t eaten
for 24 hours responded to either acute pain or longer-term inflammatory
pain, which is thought to involve sensitization of neural circuits in
the brain.
The Penn team found that hungry mice still responded to sources of
acute pain but seemed less responsive to inflammatory pain than their
well-fed counterparts. Their behavior was similar to that of mice that
had been given an anti-inflammatory painkiller.
In a conditioning experiment, the researchers found that hungry mice
did not avoid a place where they had been exposed to inflammatory pain,
while mice that were not hungry avoided the place.
That left the question of what part of the brain was processing this
intersection between hunger and pain. To find out, the researchers
experimentally turned on a group of neurons known to be activated by
hunger, agouti-related protein (AgRP) neurons, and found that chronic
pain responses subsided, while acute pain responses stayed intact.
To get more specific about the brain region involved, the team next
looked at which subpopulation of AgRP neurons appeared to integrate the
signals of hunger with inflammatory pain. Activating each AgRP neuron
subpopulation one at a time, Betley, Alhadeff, and colleagues found that
stimulation of only a few hundred AgRP neurons that project to the
parabrachial nucleus significantly suppressed inflammatory pain.
“It was really striking,” Alhadeff says. “We showed that acute
response to pain was perfectly intact, but inflammatory pain was
suppressed to a very significant extent.”
“The really interesting thing to my mind is that out of a brain of
billions of neurons, this specific behavior is mediated by 300 or so
neurons,” Betley says.
Further experiments pinpointed the neurotransmitter responsible for selectively blocking inflammatory pain responses: a molecule called neuropeptide Y (NPY). Blocking NPY receptors reversed the effects of hunger, and the pain returned.
The researchers are excited by the potential clinical relevance of
their findings. If they hold up in humans, this neural circuit offers a
target for ameliorating the chronic pain that can linger after injuries,
a type of pain that is currently often addressed by opioid medications,
drugs that also inhibit acute pain.
“We don’t want to shut off pain altogether,” Alhadeff says. “There are adaptive reasons for pain, but it would be great to be able to target just the inflammatory pain.”
Taking the next steps in this line of work, the researchers would
like to map out in greater depth how the brain processes inflammatory
pain, ideally identifying more targets for suppressing it. And they will
continue considering how different survival behaviors integrate in the
brain and how the brain processes and prioritizes them.
“We’ve initiated a new way of thinking about how behavior is
prioritized,” Betley says. “It’s not that all the information is
funneled up to your higher thinking centers in the brain but that
there’s a hierarchy, a competition that occurs between different drives,
that occurs before something like pain is even perceived.”
Until very recently, Parkinson’s had been thought of as a disease that starts in the brain, destroying motion centers and resulting in tremors and loss of movement. New research published this week in the journal Brain shows that the most common Parkinson’s gene mutation may change how immune cells react to everyday infections like colds, which in turn trigger the inflammatory reaction in the brain that causes Parkinson’s. The research offers a new understanding of Parkinson’s disease.
“We know that brain cells called microglia cause the inflammation
that ultimately destroys the area of the brain responsible for movement
in Parkinson’s,” said Richard Smeyne, PhD, Director of the Jefferson
Comprehensive Parkinson’s Disease and Movement Disorder Center at the Vickie and Jack Farber Institute for Neuroscience.
“But it wasn’t clear how a common inherited mutation was involved in
that process, and whether the mutation altered microglia.”
Together with Dr. Smeyne, first author Elena Kozina, PhD, looked at the mutant version of the LRRK2 gene (pronounced ‘lark’). Mutations in the LRRK2 gene are the most common cause of inherited Parkinson’s disease and are found in 40 percent of Parkinson’s patients of North African Arab descent and 18 percent of those of Ashkenazi Jewish descent. However, there has been controversy around the exact function of the LRRK2 gene in the brain.
“We know that gene mutation is not enough to cause the disease,” said Dr. Kozina, a postdoctoral researcher at Jefferson (Philadelphia University + Thomas Jefferson University). “We know that twins who both carry the mutation won’t both necessarily develop Parkinson’s. A second ‘hit’ or initiating event is needed.”
Based on his earlier work showing that the flu might increase the risk of Parkinson’s disease, Dr. Smeyne decided to investigate whether that second hit came from an infection. Suspecting that the LRRK2 mutations might be acting outside of the brain, the researchers used an agent – a component of the outer shell of bacteria, called lipopolysaccharide (LPS) – that causes an immune reaction. LPS itself does not pass into the brain, nor do the immune cells it activates, which made it ideal for testing whether this second hit could act from outside the brain.
When the researchers gave the bacterial fragments to the mice carrying the two most common LRRK2 gene mutations, the immune reaction became a “cytokine storm,” with inflammatory mediators rising to levels 3 to 5 times higher than in a normal reaction to LPS. These inflammatory mediators were produced by T and B immune cells expressing the LRRK2 mutation.
Despite the fact that LPS did not cross the blood-brain barrier, the
researchers showed that the elevated cytokines were able to enter the
brain, creating an environment that caused the microglia to activate
pathologically and destroy the brain region involved in movement.
“Although more tests are needed to prove the link, as well as testing
whether the same is true in humans, these findings give us a new way to
think about how these mutations could cause Parkinson’s,” said Dr.
Smeyne. “Although we can’t treat people with immunosuppressants their
whole lives to prevent the disease, if this mechanism is confirmed, it’s
possible that other interventions could be effective at reducing the
chance of developing the disease.”
During the hours of sleep, memory works a cleaning shift. A study led by a Spanish scientist at the University of Cambridge reveals that when we sleep, the neural connections that hold important information are strengthened, while those created from irrelevant data are weakened until they are lost.
Throughout the day, people take in a great deal of information. The brain creates or modifies neural connections from these data, forming memories. But most of the information we receive is irrelevant, and it does not make sense to keep it; if it all were kept, the brain would be overloaded.
So far there have been two hypotheses about how the sleeping brain modifies the neural connections created throughout the day: one argues that all of them are reinforced during sleep, while the other maintains that their number is reduced.
A group of scientists from the Ole Paulsen Laboratory, at the University of Cambridge (United Kingdom), has analyzed the mechanisms underlying the maintenance of memory during slow wave sleep - the third, non-rapid-eye-movement stage of sleep, during which relaxation is greatest and rest is deepest.
“Depending on a person’s experiences and their relevance, the size of the corresponding neuronal connections changes. Those that hold important information are larger and those that store the dispensable are small,” explains Ana González Rueda, lead author of the study and a researcher at the MRC Laboratory of Molecular Biology (LMB) in Cambridge.
According to the expert, if all these connections were reinforced equally during sleep, the brain would be saturated by extreme overexcitation of the nervous system.
In the study, published in the journal Neuron, the researchers stimulated the neuronal connections of mice subjected to a type of anesthesia that produces a brain state similar to the slow wave sleep phase in humans.
In the words of González Rueda, the stimulation was carried out ‘blindly’ because the information contained in each of the connections was not known. “We developed a system to follow the evolution of a specific neuronal synapse and thus study what type of activity influences whether these are maintained, grow, or shrink.”
What does the maintenance of neural connections depend on?
The results show that during slow wave sleep, the largest connections are maintained while the smaller ones are lost. This mechanism improves the brain’s signal-to-noise ratio - important information remains and the dispensable is discarded - and allows new information to be stored from one day to the next without losing previous data. That is, connections already deemed relevant are kept in that state without having to be reinforced.
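In concrete terms, the toy sketch below - purely illustrative, not the authors’ model or code, with all numbers invented - shows how selectively weakening small connections while leaving large ones untouched raises the ratio of “signal” to “noise” in a set of synaptic weights:

```python
import numpy as np

# Toy population of synaptic weights (illustrative only).
rng = np.random.default_rng(0)
signal = rng.normal(loc=1.0, scale=0.1, size=50)    # strong synapses: relevant information
noise = rng.normal(loc=0.2, scale=0.05, size=500)   # weak synapses: incidental detail
weights = np.concatenate([signal, noise])

def signal_to_noise(w, n_signal=len(signal)):
    """Mean strength of the 'signal' synapses relative to the 'noise' synapses."""
    return w[:n_signal].mean() / np.abs(w[n_signal:]).mean()

print(f"before downscaling: {signal_to_noise(weights):.1f}")

# Sleep-like rule: connections below a threshold are weakened toward zero,
# while the largest connections are maintained unchanged.
threshold = 0.5
downscaled = np.where(weights < threshold, 0.1 * weights, weights)

print(f"after downscaling:  {signal_to_noise(downscaled):.1f}")
```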
According to González Rueda, the brain “puts things in order” during the hours of sleep, discarding the weakest connections to ensure stronger, consolidated memories.
“Although the brain has an extraordinary storage capacity,
maintaining connections and neuronal activities requires a lot of
energy. It is much more efficient to keep only what is necessary,” says
the expert. “Even without maintaining all the information we receive,
the brain spends 20% of the calories we consume.”
This research provides a first indication of the electrophysiological mechanism at work during sleep and opens new horizons thanks to the development of a new way of studying synaptic plasticity in the living brain.
The researchers’ next objective is to study the consequences of this type of brain activity for the maintenance of certain information and to analyze other phases of sleep. “In addition to the analysis of the slow wave phase, it could be interesting to know what happens in the REM phase, during which dreams occur,” concludes González Rueda.
Scientists at Johns Hopkins have used supercomputers to create
an atomic scale map that tracks how the signaling chemical glutamate
binds to a neuron in the brain. The findings, say the scientists, shed
light on the dynamic physics of the chemical’s pathway, as well as the
speed of nerve cell communications.
It’s long been known that brain neurons use glutamate as a way
to communicate with each other. As one neuron releases glutamate, an
adjacent neuron latches onto the chemical through a structure on the
neuron’s surface called a receptor. The glutamate-receptor connection
triggers a neuron to open chemical channels that let in charged
particles called ions, creating an electric spark that activates the
neuron.
“All of this happens within a millisecond, and what hasn’t been
known is the way receptors latch onto glutamate. Our new experiments
suggest that glutamate molecules need to take very particular pathways
on the surface of glutamate receptors in order to fit into a pocket
within the receptor,” says Albert Lau, Ph.D., assistant professor of biophysics and biophysical chemistry at the Johns Hopkins University School of Medicine.
For the research, the Johns Hopkins scientists used a supercomputer called Anton, which is run by the Pittsburgh Supercomputing Center. They also worked with researchers at Humboldt University in Berlin who specialize in recording how charged particles flow across biological membranes.
A report of the experiments will be published in the Jan. 3 issue of Neuron.
To develop their model of how glutamate might connect to brain cell receptors, Lau and Johns Hopkins research fellow Alvin Yu used a computing technique called molecular dynamics simulation, which was developed by Martin Karplus, Michael Levitt and Arieh Warshel and earned them a Nobel Prize in 2013.
The simulations use Sir Isaac Newton’s laws of motion and a set of mathematical rules, or algorithms, to assign energy functions to atoms and the substances made from those atoms.
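As a rough illustration of what such a simulation does - this is a minimal sketch with an invented toy system, not the Anton software or the force fields used in the study - forces derived from an energy function are fed into Newton’s equations of motion and integrated forward in tiny time steps:

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise forces from a toy Lennard-Jones energy function."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # F = -dU/dr for U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f_mag = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
            forces[i] += f_mag * r_vec / r
            forces[j] -= f_mag * r_vec / r
    return forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, n_steps=1000):
    """Integrate Newton's laws of motion with the velocity Verlet scheme."""
    acc = lj_forces(pos) / mass
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * acc * dt**2   # update positions
        new_acc = lj_forces(pos) / mass            # forces at the new positions
        vel = vel + 0.5 * (acc + new_acc) * dt     # update velocities
        acc = new_acc
    return pos, vel

# Three toy "atoms" placed near the potential minimum; a real simulation of a
# receptor in solution tracks tens of thousands of atoms in the same way.
positions = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.2, 0.0]])
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities)
```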
“It takes an enormous amount of computer processing power to do these types of simulations,” says Lau.
In their experiment, Yu and Lau immersed glutamate molecules
and a truncated version of the glutamate receptor in a water and sodium
chloride solution. The supercomputer recorded dynamics and interactions
among nearly 50,000 atoms in the solution.
“There are many ways glutamate can connect with a receptor,”
says Lau. But some pathways are more direct than others. “The difference
is like taking the faster highway route versus local roads to get to a
destination.”
Yu and Lau counted how frequently they saw glutamate in every
position on the receptor. It turns out that glutamate spends most of its
time gliding into three distinct pathways.
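A plausible way to do that kind of counting - sketched here under the assumption that the ligand’s coordinates have already been extracted from the trajectory; the study’s actual analysis code is not described - is to bin the glutamate positions from every simulation frame into a spatial grid and look for the most-visited bins:

```python
import numpy as np

def occupancy_grid(ligand_xyz, bin_width=1.0):
    """Count how many trajectory frames place the ligand in each spatial bin."""
    # ligand_xyz: (n_frames, 3) array of ligand coordinates, e.g. in angstroms
    edges = [
        np.arange(ligand_xyz[:, d].min(), ligand_xyz[:, d].max() + bin_width, bin_width)
        for d in range(3)
    ]
    counts, _ = np.histogramdd(ligand_xyz, bins=edges)
    return counts

# Toy trajectory standing in for real simulation output.
rng = np.random.default_rng(0)
trajectory = rng.normal(scale=5.0, size=(10_000, 3))

counts = occupancy_grid(trajectory)
# The most-visited bins correspond to regions the ligand passes through most often,
# e.g. recurring approach pathways toward the binding pocket.
flat_top = np.argsort(counts, axis=None)[-3:]
print("Three most-visited bins:", np.unravel_index(flat_top, counts.shape))
```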
Zooming in more closely on those pathways, the scientists found that the chemical’s negatively charged atoms are guided by positively charged atoms on the neuron’s glutamate receptors.
“What we see is an electrostatic connection, and the path
glutamate follows is determined by where the charges are,” says Lau. In
the world of physics, when two objects near each other have opposing
electrical charges, they attract each other.
Lau says that the positively charged residues on the glutamate
receptor may have evolved to shorten the time that glutamate takes to
find its binding pocket.
To test this idea, Lau teamed up with scientists at Humboldt
University to introduce mutations into the gene that codes for the
glutamate receptor to change positively charged residues into either
negatively charged or uncharged ones.
Then, they measured the resulting electrical currents to
determine if there was a change in the rate of the receptor’s activation
in the presence of glutamate.
The results of that experiment showed that mutated glutamate
receptors activated at half the speed of the normal version of the
receptor.
“If, as we think is the case, communication between neurons has
to happen at a particular rate for effective brain activity, then
slowing down that rate means that the brain won’t work as well,” says
Lau. “We believe that these glutamate receptors have evolved a way to
speed up the binding process.”
The scientists add that, in some cases, glutamate seems to be
able to bind to the receptor upside down. When this happens, the
glutamate receptor’s pocket can’t close entirely, possibly making it
unable to fully open its channels to allow ions into the neuron.
Lau says that further research is needed to determine if other
compounds that target the glutamate receptor, such as quisqualic acid,
which is found in the seeds of some flowering plants, tread the same
three pathways that glutamate tends to follow.
So far, Lau’s team has focused its computer simulations only on
the main binding region of the glutamate receptor. The researchers plan
to study other areas of glutamate receptors exposed to glutamate.
When your attention shifts from one place to another, your brain blinks.
The blinks are momentary unconscious gaps in visual perception and came
as a surprise to the team of Vanderbilt psychologists who discovered
the phenomenon while studying the benefits of attention.
“Attention is beneficial because it increases our ability to detect
visual signals even when we are looking in a different direction,” said Assistant Professor of Psychology Alex Maier,
who directed the study. “The ‘mind’s eye blinks’ that occur every time
your attention shifts are the sensory processing costs that we pay for
this capability.”
Details of their study are described in a paper titled “Spiking suppression precedes cued attentional enhancement of neural responses in primary visual cortex” published Nov. 23 in the journal Cerebral Cortex.
“There have been several behavior studies in the past that have
suggested there is a cost to paying attention. But our study is the
first to demonstrate a sensory brain mechanism underlying this
phenomenon,” said first author Michele Cox, who is a psychology doctoral student at Vanderbilt.
The research was conducted with macaque monkeys that were trained to
shift their attention among different objects on a display screen while
the researchers monitored the pattern of neuron activity taking place in
their brains. Primates are particularly suited for the study because
they can shift their attention without moving their eyes. Most animals
do not have this ability.
“We trained macaques to play a video game that rewarded them with
apple juice when they paid attention to certain visual objects. Once
they became expert at the game, we measured the activity in their visual
cortex when they played,” said Maier.
By combining advanced recording techniques that simultaneously track large numbers of neurons with sophisticated computational analyses, the researchers discovered that the activity of neurons in the visual cortex was momentarily disrupted when the game required the animals to shift their attention. They also traced the source of the disruptions to parts of the brain involved in guiding attention, not back to the eyes.
The mind’s eye blink is closely related to the “attentional blink” that has been studied by Cornelius Vanderbilt Professor of Psychology David Zald and Professor of Psychology René Marois.
Attentional blink is a phenomenon that occurs when a person is
presented with a rapid series of images. If the spacing between two
images is too short, the observer doesn’t detect the second image. In
2005, Zald determined that the time of temporary blindness following violent or erotic images was significantly longer than it is for emotionally neutral images.
NIMH researchers have produced the first direct evidence that parts
of our brains implicated in mental disorders may be shaped by a
“residual echo” from our ancient past. The more a person’s genome
carries genetic vestiges of Neanderthals, the more certain parts of his
or her brain and skull resemble those of humans’ evolutionary cousins
that went extinct 40,000 years ago, says NIMH’s Karen Berman, M.D.
(Image caption: MRI data shows (left) areas of the skull preferentially affected by the amount of Neanderthal-derived DNA and (right) areas of the brain’s visual system in which Neanderthal gene variants influenced cortex folding (red) and gray matter volume (yellow). Credit: Michael Gregory, M.D., NIMH Section on Integrative Neuroimaging)
In particular, the parts of our brains that enable us to
use tools and visualize and locate objects owe some of their lineage to
Neanderthal-derived gene variants that are part of our genomes and
affect the shape of those structures – to the extent that an individual
harbors the ancient variants. But this may involve trade-offs with our
social brain. The evidence from MRI scans suggests that such
Neanderthal-derived genetic variation may affect the way our brains work
today – and may hold clues to understanding deficits seen in
schizophrenia and autism-related disorders, say the researchers.
Berman, Michael Gregory, M.D., of the NIMH Section on
Integrative Neuroimaging, and colleagues, report on their magnetic
resonance imaging (MRI) study online July 24, 2017, in the journal Scientific Reports.
During their primordial migration out of Africa, ancestors
of present-day humans are thought to have interbred with Neanderthals,
whose brain characteristics can be inferred from their fossilized
skulls. For example, these indicate that Neanderthals had more prominent
visual systems than modern humans.
“It’s been proposed that Neanderthals depended on
visual-spatial abilities and toolmaking, for survival, more so than on
the social affiliation and group activities that typify the success of
modern humans – and that Neanderthal brains evolved to preferentially
support these visuospatial functions,” Berman explained. “Now we have
direct neuroimaging evidence that such trade-offs may still be operative
in our brains.”
Might some of us, more than others, harbor
Neanderthal-derived gene variants that may bias our brains toward
trading sociability for visuospatial prowess – or vice versa?
The new study adds support to this possibility by showing how these gene
variants influence the structure of brain regions underlying those
abilities.
To test this possibility, Gregory and Berman measured the
amount of Neanderthal variants in a sample of 221 participants of
European ancestry drawn from the NIMH Sibling Study of schizophrenia
risk and related it to MRI measures of brain structure.
The new MRI evidence points to a shared gene variant that is likely involved in the development of the brain’s visual system. Similarly, Neanderthal variants affecting the development of a particular suspect brain area may help explain the cognitive disability seen in certain brain disorders, say the researchers.
For example, in 2012, Berman and colleagues reported on how genetic variation shapes the structure and function of a brain area called the insula in the autism-related disorder Williams syndrome. People with this rare genetic disorder are overly sociable and visuo-spatially impaired – conspicuously opposite to the hypothesized Neanderthal propensities and to more typical cases on the autism spectrum.
Mice in which a gene affected by Williams syndrome is experimentally deleted show increased separation anxiety. And just last week, researchers showed that the same genetic variability also appears to explain why dogs are friendlier than wolves.
Berman and Gregory’s team is currently working on further
studies documenting the pivotal role of this suspect genetic variation
in shaping the insula and related social brain circuitry.
Most people know from their own experience that just a single
sleepless night can lead to difficulty in mastering mental tasks the
next day. Researchers assume that deep sleep is essential for
maintaining the learning efficiency of the human brain in the long term.
While we are awake, we constantly receive impressions from our environment, and numerous connections between nerve cells – so-called synapses – are excited and at times intensified. The excitation of the synapses does not normalize again until we fall asleep. Without a recovery phase, many synapses remain maximally excited, which means that changes in the system are no longer possible: learning efficiency is blocked.
Causal connection between deep sleep and learning efficiency
The connection between deep sleep and learning efficiency has long
been known and proven. Now, researchers at the University of Zurich
(UZH) and the Swiss Federal Institute of Technology (ETH) in Zurich have
been able to demonstrate a causal connection within the human brain for
the first time. Reto Huber, professor at the University Children’s
Hospital Zurich and of Child and Adolescent Psychiatry at UZH, and
Nicole Wenderoth, professor in the Department of Health Sciences and
Technology at the ETH Zurich, have succeeded in manipulating the deep
sleep of test subjects in targeted areas. “We have developed a method
that lets us reduce the sleep depth in a certain part of the brain and
therefore prove the causal connection between deep sleep and learning
efficiency,” says Reto Huber.
Subjective sleep quality was not impaired
In the two-part experiment with six women and seven men, the test subjects had to master three different motor tasks. The specific assignment was to learn various sequences of finger movements over the course of the day. At night, the brain activity of the test subjects during sleep was monitored by EEG. While the test subjects were able to sleep undisturbed after the learning phase on the first day, their sleep was manipulated in a targeted manner on the second day of the experiment – using acoustic stimulation during the deep sleep phase. To do so, the researchers localized precisely the part of the brain responsible for learning the abovementioned finger movements, i.e., for the control of motor skills (the motor cortex). The test subjects were not aware of this manipulation; to them, the sleep quality of the two experimental phases seemed comparable the following day.
Deep sleep disturbances impair learning efficiency
In a second step, the researchers tested how the manipulation of deep sleep affected the motor learning task on the following day. Here, they observed how the learning and performance curves of the test subjects changed over the course of the experiment. As expected, the participants learned the motor task particularly well in the morning. As the day went on, however, the rate of mistakes rose. After sleep, learning efficiency improved considerably again. This was not the case after the night with the manipulated sleep phase. Here, there were clear performance losses and difficulties in learning the finger movements. Learning efficiency was as weak as it had been on the evening of the first day of the experiment. Because of the manipulation of the motor cortex, the excitation of the corresponding synapses was not reduced during sleep. “In the strongly excited region of the brain, learning efficiency was saturated and could no longer be changed, which inhibited the learning of motor skills,” Nicole Wenderoth explains.
In a control experiment with the same task assignment, the researchers manipulated another region of the brain during sleep. In this case, the manipulation had no effect on the learning efficiency of the test subjects.
Use in clinical studies planned
The newly gained knowledge is an important step in researching human
sleep. The objective of the scientists is to use this knowledge in
clinical studies. “Many diseases manifest in sleep as well, such as
epilepsy,” Reto Huber explains. “Using the new method, we hope to be
able to manipulate those specific brain regions that are directly
connected with the disease.” This could help improve the condition of
affected patients.