Showing posts with label a moment in NeuroHistory. Show all posts

Tuesday, July 30, 2013

Treatise on the Diseases of Females: Pregnancy in the 1800s

While looking through some seriously old books, I came across a medical treatise from 1853. Now this would be fascinating on its own, but even better, it's a treatise specifically about the "diseases of females" written by William P. Dewees, M.D.

William Dewees (from Wikipedia)
Having recently been pregnant, I was particularly interested in the 1800s recommendations for pregnancy.

Dewees starts out his chapter on pregnancy by explaining why it is important to scientifically determine whether a woman is pregnant or not. The reasons are essentially as follows:

1. So if the woman needs to be treated for some other disease, she doesn't get prescribed something that would hurt her or the baby if pregnant.
2. Because if she is under trial or awaiting execution, pregnancy might forestall it.
3. If the predicted date of birth might influence the 'character or property' of someone else.

So yes, clearly it is important to know if a woman is pregnant.

So how do you tell in the 1800s, when no pee-sticks with plus signs were available? Not surprisingly, the first way is 'she doesn't have her period.' However, there is clearly some debate in the field at this time.

Other things can 'suppress the menses' and sometimes a woman can bleed while pregnant.

Dewees spends excessive words and semi-colons defending his position on the subject:

"In declaring that women may menstruate after impregnation, I have no favourite hypothesis to support; nor am I influenced by any affectation or vanity to differ from others; neither do I believe I am more than ordinarily prone to be captivated or misled by the marvellous; for I soberly and honestly believe what I say, and pledge myself for the fidelity of the relation of the cases I adduce in support of my position." *

So you need some signs of pregnancy other than just not menstruating. Next up: nausea and vomiting. Though "far from certain" as a sign of pregnancy, in conjunction with other signs it is 'added proof.'

Another sign is the enlargement of the sebaceous glands (which are on the areolae around the nipple), and the formation of milk. But milk coming in is also not certain:

"I once new a considerable quantity of milk form in the breasts of a lady, who though she had been married a number of years had never been pregnant; but who at this time had been two years separated from her husband. She mentioned the fact of her having milk to a female friend, who from an impression that it augured pregnancy, told it to another friend, as a great secret; and thus, after having enlisted fifteen or twenty to help them keep the secret, it got to the ears of the lady's brother. Her surprise was only equaled by his rage; and, in a paroxysm, he accused his sister, in the most violent and indelicate terms, of incontinency, and menaced her with the most direful vengeance." *

It turns out the lady was not pregnant, but was sick with 'phthisis pulmonalis.'

So finally, the surest signs of pregnancy are the enlargement of the uterus and abdomen, and feeling the baby move ("quickening").

(also mentioned are the 'pouting of the navel' and the 'spitting of frothy saliva')

*All quotes from Treatise on the Diseases of Females by William P. Dewees

© TheCellularScale

For more on historical pregnancy medicine, see some great posts from Tea in a Teacup

Monday, March 25, 2013

Guest Post: AMPA Receptors Are Not Necessary for Long-Term Potentiation

Today's post is brought to you by @BabyAttachMode, who is an electrophysiologist and blogger. Today we are blog swapping! I have a post over at her blog and her post about AMPA receptors and LTP is here. So enjoy, and when you're done reading about the newest advances in synaptic plasticity here, you can head over to InBabyAttachMode and read about my personal life.
 
AMPA Receptors Are Not Necessary for Long-Term Potentiation

Science is most interesting to me when you’re testing a hypothesis, and not only do you prove the hypothesis false, but you discover something unexpected. I think that happened to Granger et al. They were trying to find which part of the AMPA receptor is necessary for long-term potentiation (LTP), the process that strengthens the connection between two brain cells when that connection is used often. Indeed, they found that AMPA receptors are not necessary for LTP at all, which is very surprising given the large body of literature describing how the GluA1 subunits of the AMPA receptor are inserted into the synapse to induce LTP, through interactions with other synaptic molecules that bind to the intracellular C-tail (the end of the receptor that is located inside the cell).
LTP (source)
The authors made an inducible triple knock-out, which means that they could switch off the genes for the three different AMPA receptor subunits GluA1, GluA2 and GluA3. This way, they ended up with mice that had no AMPA receptors at all. The authors were then able to selectively put back one of the AMPA receptors, either the entire receptor or a mutated receptor. By inserting mutated receptors, for example a receptor that lacks its intracellular C-tail that was thought to be important for insertion of the AMPA receptor into the synapse, they could then study whether this mutated receptor was still sufficient for induction of LTP.

Surprisingly, they found that deleting the C-tail of the GluA1 subunit does not change the cell’s ability to induce LTP. Even more strikingly, they showed that you don’t need any AMPA receptors at all to induce LTP; the kainate receptor (another type of glutamate receptor that had never been implicated in LTP) can take over the job.

Figure 6C from Granger et al. (2013). Kainate receptor overexpression can lead to LTP expression, without the presence of AMPA receptors.

About this surprising discovery the authors say the following:
"These results demonstrate the synapse's remarkable flexibility to potentiate with a variety of glutamate receptor subtypes, requiring a fundamental change in our thinking with regard to the core molecular events underlying synaptic plasticity."
Of course, if you say something like that, the main players in the LTP field will have something to say about it, and they did. Three giants in the field of synaptic physiology commented in the journal Nature, but their opinions differed. Morgan Sheng called it "a step forward", whereas Roberto Malinow and Richard Huganir called it "two steps back", saying that LTP without AMPA receptors can only happen in the artificial system the authors of the paper used to study it. They expect that cells lacking all three AMPA receptor subunits will look so different from normal cells that the results are difficult to interpret.

Either way, this paper opens new views and questions to how LTP works, and whether AMPA receptors are as important as we thought.


ResearchBlogging.org
Granger AJ, Shi Y, Lu W, Cerpas M, & Nicoll RA (2013). LTP requires a reserve pool of glutamate receptors independent of subunit type. Nature, 493 (7433), 495-500. PMID: 23235828
 
Sheng M, Malinow R, & Huganir R (2013). Neuroscience: Strength in numbers. Nature, 493 (7433), 482-3 PMID: 23344353

Saturday, March 9, 2013

Dopamine and Reward Prediction Error

I am back from the IBAGS conference and full of new information! I plan to blog about tons of amazing things over the next month or so, but today we'll start with some foundation building.

Dopamine nails (source)
The IBAGS (International Basal Ganglia Society) meeting is all about the basal ganglia (which includes the striatum), and as you may know, dopamine is a super important molecule for the proper function of the striatum (it is the dopamine cells that die in Parkinson's Disease).

There were many fantastic talks during the IBAGS meeting and almost a third of them showed the exact same figure on one of their slides. So much so that everyone would start to laugh when someone showed it. And as you may have guessed, it is about dopamine. Here it is:

Schultz 1998 Figure 2
This figure is the basis for the belief that dopamine represents a 'reward prediction error'.  Let me explain. The scattered dots on the lower half of each panel represent action potentials from individual dopaminergic neurons. The x axis is time in seconds. The black columns above them are a histogram showing how much firing is going on at each point in time. When the black columns are tall, there was more dopamine neuron firing. You can see that the height of the black columns matches up with the density of the scattered dots below them.

During 'reward learning', an animal is trained to associate a stimulus (like a tone or a flash of light) with getting a reward (like a drop of water or juice). These three panels show how dopamine responds to this whole process. The first panel shows that when there is no stimulus (CS) and the reward is a surprise, the dopamine neurons respond very strongly to it. The second panel shows that when there is a stimulus that tells the animal that a reward is soon to come, the dopamine neurons respond to the stimulus, but not to the reward. Finally the third panel shows that when there is a stimulus the dopamine neurons respond to it, but if the reward (R) never comes, the dopamine neurons actually stop firing when the reward should have happened.

What is so fascinating about this is that it shows dopamine neurons do not just fire in response to reward, they encode the actual reward with respect to the expected reward. In the author's words:
"Dopamine neurons report rewards relative to their prediction rather than signaling primary rewards unconditionally (Fig. 2). The dopamine response is positive (activation) when primary rewards occur without being predicted. The response is nil when rewards occur as predicted. The response is negative (depression) when predicted rewards are omitted. Thus dopamine neurons report primary rewards according to the difference between the occurrence and the prediction of reward, which can be termed an error in the prediction of reward..." Schultz 1998
This finding is so important to researchers now because it shows that dopamine neurons can encode learning rules. Dopamine neurons constantly and dynamically tell the rest of the brain which stimuli lead to reward, and which stimuli don't. The implications here for pathological learning are huge as well. Mis-signalling in dopamine neurons could lead to an inability to tell what is rewarding and what is not.
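The learning rule this implies can be captured by a classic temporal-difference sketch. The code below is my own illustration of the idea in Schultz (1998), not his code or analysis; the learning rate `alpha` and trial structure are illustrative assumptions.

```python
# Minimal sketch of a reward-prediction-error learner.
# The 'dopamine signal' is the error: actual reward minus predicted reward.

def rpe_learning(rewards, alpha=0.3):
    """Return the prediction error on each trial for a cue-reward association."""
    V = 0.0                   # learned value of the predictive cue
    deltas = []
    for r in rewards:
        delta = r - V         # prediction error (the dopamine-like signal)
        V += alpha * delta    # move the prediction toward the outcome
        deltas.append(delta)
    return deltas

errors = rpe_learning([1.0] * 20)
# First panel: an unpredicted reward gives a large positive error (errors[0] is 1.0).
# Second panel: once the cue predicts the reward, the error at reward time is near 0.
# Third panel: omitting the reward after training gives a negative error (the 'dip').
omission = rpe_learning([1.0] * 20 + [0.0])
```

The three panels of the Schultz figure correspond to the first trial, the last rewarded trial, and the final omission trial of this toy simulation.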

© TheCellularScale


ResearchBlogging.org
Schultz W (1998). Predictive reward signal of dopamine neurons. Journal of neurophysiology, 80 (1), 1-27 PMID: 9658025

Monday, March 4, 2013

Honoring a Legend

The Cellular Scale is at the International Basal Ganglia Society meeting this week (#IBAGS2013), and finally has internet!

Sunrise over the Gulf of Aqaba (I took this picture)

It's already been two days of conferencing, and I plan to mainly write some follow up posts when I get back. But I will just briefly mention the "Lifetime Member" lecture that was given on the first evening of the conference.

Mahlon Delong (source)
This year's lifetime member is Mahlon DeLong.
I've written before about deep brain stimulation (DBS) as a treatment for Parkinson's Disease, and DeLong has done some fascinating work that led up to DBS in the subthalamic nucleus (STN).

One particular treat was to see a video during the talk of the very first attempt at alleviating Parkinson's symptoms through a subthalamotomy, the lesion of the subthalamic nucleus.

A Parkinson's Disease monkey was given the subthalamotomy on only one side of the brain, and the video shows Mahlon DeLong interacting with the monkey and noting that its treated side is less stiff than the untreated side. A second video shows the monkey later able to move its arm with no problems.

It was exciting to see this sort of 'moment of discovery' from 1989. There were no cries of "Eureka!" or anything; it was more of a 'hm, interesting' tone. You actually hear his post-doc on the video saying (paraphrasing from memory) "the right side has better tone, at least Mahlon thinks so" and then starting to laugh.

(source)


One other cool thing about Dr. DeLong is that he is Muhammad Ali's physician.

© TheCellularScale

Tuesday, January 15, 2013

How big is the GIANT Squid Giant Axon?

With all the hubbub about the first ever video of an attacking giant squid in the wild about to be unveiled, I started wondering about the giant axon of the giant squid... I mean, it would be huge, right?



Giant Squid, Giant Axon? (source)
Squid are special creatures to neuroscientists. Specifically to neurophysiologists, who study the electrical activity of neurons.
Squid Axon location

Atlantic squid have this huge (1 mm) amazing axon running down each side of their mantle, which allowed for the first recordings of action potentials in the 1930s.

Here is a really nice 5 minute video showing how with (by today's standards) very crude techniques, the electrical signal could be recorded from these axons.


So the squid giant axon is neat, and modern neurophysiology would probably not exist without it. But what about the GIANT squid giant axon? Wouldn't that be an electrophysiologist's dream?

If it scaled proportionally to, say, mantle length, the 1-foot-long Atlantic squid with a 1 mm diameter axon would become a 16-foot-long GIANT squid with a 16 mm giant axon.
Let's think about this for a minute: 16 mm is about 5/8 of an inch.

US coins for size reference
That is like the diameter of a dime! For those not familiar with US coins, it's like the size of a bead on a necklace... a big bead, like a nice-sized pearl. Basically HUGE considering that most axons in vertebrates are not even visible without a microscope.

However, before you all start running out to hunt the giant squid for its precious, precious axon... the truth is that the giant squid does not have a super-giant dime-sized axon. The giant squid axon actually has a smaller diameter than the 'normal' squid axon. Surprising, right?
Do the giant squid just have more axons there, so they don't need one gigantic one? Or is this axon somehow magically myelinated (probably not)? Or does the giant squid just not need one?

First, let me explain that this information was pretty hard to come by and is basically anecdotal. I watched a few dissections of giant squid. And while these were really amazing (look at the hooks on the colossal squid's tentacles!), they said very little about the giant axon or how it was modified in these larger animals.

hooks of the colossal squid tentacles, yikes! (source)
This information comes from a comment quoting JZ Young at a 1977 symposium describing his dissection of a 125 cm (about 4 feet) long giant squid. I could not get access to this manuscript, so I have to trust the commenter and this quote:
“Everyone wants to know whether giant squids have giant giant fibres. We have no material of the central nervous system but some years ago I was able to dissect the stellate ganglion of an animal washed up at Scarborough in 1933 and sent to the British Museum. The mantle length was 125 cm. The nerves of the mantle muscles are arranged in this genus differently from any other I have seen. Those in the front part of the mantle arise from a relatively small stellate ganglion, in the usual way. The hinder part of the mantle, perhaps more than half of the whole, is suspended from a distinct median nerve, running with the fin nerve and giving off a series of branches to the mantle.
Each of the nerves arising from the ganglion contains one or two large fibres, ranging in diameter from about 80 micrometers in the more anterior ones to a maximum of 250 micrometers further back. The median nerve was further preserved but one fibre of about 250 micrometers could be seen. Two of the more posterior branches contained fibres of about 200 micrometers each. None of the nerves examined contained the exceptionally large fibres reported by Aldrich & Brown (1967). We may conclude that Architeuthis is not an especially fast-moving animal. This would agree with evidence that it is neutrally buoyant with a high concentration of ammonium ions in the mantle and arms (Denton, 1974).”
Young explains that the axon network is set up differently in the giant squid (Architeuthis). He reasons that because the axon is not especially large, it could only conduct so fast, and therefore the fast escape reflex it drives in the normal squid is just not that fast in the giant squid. This sort of makes sense, in that the giant squid might not benefit from escape as much as the normal squid does. The giant squid might be better served by having razor-sharp teeth on its suckers or terrifying pain-causing hooks so it could fight off a predator.
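As a back-of-the-envelope check on both the scaling fantasy and Young's conduction-speed reasoning: for an unmyelinated axon, conduction velocity scales roughly with the square root of diameter (a standard cable-theory approximation). The numbers below are the rough figures from this post, not precise measurements.

```python
import math

def scaled_diameter_mm(small_mantle_ft, small_axon_mm, big_mantle_ft):
    """Axon diameter if it scaled linearly with mantle length."""
    return small_axon_mm * big_mantle_ft / small_mantle_ft

def velocity_ratio(d_big_um, d_small_um):
    """Conduction velocity ratio, assuming v proportional to sqrt(diameter)."""
    return math.sqrt(d_big_um / d_small_um)

# The naive extrapolation: a 1 ft squid with a 1 mm axon -> a 16 ft giant squid
print(scaled_diameter_mm(1.0, 1.0, 16.0))   # 16.0 mm: the 'dime-sized' axon

# What Young actually found: ~250 um fibres, vs ~1000 um (1 mm) in common squid
print(velocity_ratio(250.0, 1000.0))        # 0.5: roughly half the speed
```

So even ignoring everything else, a 250 micrometer fibre would conduct at only about half the speed of the common squid's giant axon, consistent with Young's "not an especially fast-moving animal" conclusion.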

The biggest-axon award goes to the Humboldt squid, which has an axon the 'size of spaghetti.'

And while the first ever video of a giant squid just came out, the first ever photographs from the wild were published in 2005.


© TheCellularScale


ResearchBlogging.org
Kubodera T, & Mori K (2005). First-ever observations of a live giant squid in the wild. Proceedings of the Royal Society B: Biological Sciences, 272 (1581), 2583-6. PMID: 16321779


JZ Young (1977). The Biology of Cephalopods. Symposia of the Zoological Society of London, No. 38.

Wednesday, July 25, 2012

Fire Together Wire Together

Fire together Wire together (source)
Synaptic plasticity, the strengthening and weakening of neuronal connections, is thought to be the cellular correlate of learning and memory. Neurons that are active around the same time (fire together) will generally strengthen the connection between them (wire together). 

One theory is that neurons strengthen their synapses based on the specific timing between receiving a signal and firing an action potential. This is called Spike Timing Dependent Plasticity (STDP). The basic premise is that if neuron A fires first, and then neuron B fires, neuron A is probably partly responsible for neuron B firing. If this is the case then the signal from A to B probably contains meaningful information.
Neurons Connecting (modified from here)
The action potential in B coming right after a signal from A is a trigger for the cell to strengthen that synapse. The most common hypothesis is that it does this by causing a calcium surge inside the dendrite.

And the opposite? What if neuron B fires before neuron A? It could be a meaningful signal or it could just be random noise. Maybe neuron A fired for no good reason (or even just a little vesicle of glutamate popping out untriggered). Many studies show that when B fires before A, the connection was actually weakened.

So the saying perhaps should be 'neurons that fire one right after the other wire together'... but that just doesn't roll off the tongue the way 'fire together, wire together' does.

STDP is incredibly complex, and even though this 'A before B = strengthening, B before A = weakening' explanation makes intuitive sense, there is an exception for every rule. Some studies find just the opposite! And some studies find that both directions strengthen the synapse (A-B and B-A).
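The basic pairing rule described above is often modeled with an exponentially decaying timing window. Here is a minimal sketch of that textbook form; the parameter values (`A_plus`, `A_minus`, `tau_ms`) are hypothetical illustrations, not taken from any particular study.

```python
import math

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_ms=20.0):
    """Weight change for one spike pair; dt_ms = t_post - t_pre in milliseconds."""
    if dt_ms > 0:      # A (pre) fired just before B (post): strengthen
        return A_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:      # B fired before A: weaken
        return -A_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# A firing 10 ms before B gives a positive weight change; reversing the order
# gives a negative one, and widely separated pairs barely change the weight.
```

The exceptions mentioned above amount to flipping the signs of `A_plus` and `A_minus`, or making both positive, depending on the synapse and brain region.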

Dan and Poo (2006) have a nice table explaining these exceptions and the parts of the brain where they are found.
© TheCellularScale

ResearchBlogging.org
Bi G, & Poo M (1999). Distributed synaptic modification in neural networks induced by patterned stimulation. Nature, 401 (6755), 792-6 PMID: 10548104

Dan Y, & Poo MM (2006). Spike timing-dependent plasticity: from synapse to perception. Physiological reviews, 86 (3), 1033-48 PMID: 16816145

Sunday, June 3, 2012

A Tale of Two Huxleys

Andrew Huxley is one of the founders of both modern electrophysiology and computational neuroscience, and is consequently a personal hero of mine. His recent (May 30, 2012) death inspired me to learn more about his life.

Andrew Huxley (1917-2012)
Andrew Huxley, along with Alan Hodgkin, discovered the mechanisms which govern the action potential in nerve cells. They inserted micro-electrodes into the squid giant axon and recorded the sodium and potassium currents which generate and propagate the action potential. They shared the Nobel Prize in Physiology or Medicine (with John Eccles) in 1963.

(squid giant axon)
Andrew Huxley is a hero of neuroscience because he (along with Alan Hodgkin) was not only able to develop the equipment and techniques necessary for the complex electrophysiological recordings of the squid axon, but he was also able to understand and mathematically interpret the results of their experiments. Hodgkin and Huxley's mathematical interpretation of their experimental results is basically the beginning of modern computational neuroscience. Their equations describing the flow of ions based on voltage and on concentration are still used in computational models of neurons today. Their famous series of papers (1952) in the Journal of Physiology culminates in their mathematical model of the action potential.
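Their model is compact enough to sketch in a few dozen lines. The following is a minimal forward-Euler simulation using the standard published rate constants and conductances from the 1952 paper (shifted so rest sits near -65 mV); the stimulus current and time step are my own illustrative choices.

```python
import math

def hh_simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Return the membrane voltage trace (mV) for a constant current step."""
    # maximal conductances (mS/cm^2) and reversal potentials (mV)
    gNa, gK, gL = 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    Cm = 1.0  # membrane capacitance (uF/cm^2)

    # voltage-dependent rate constants for the gating variables n, m, h
    an = lambda V: 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = lambda V: 0.125 * math.exp(-(V + 65.0) / 80.0)
    am = lambda V: 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = lambda V: 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = lambda V: 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = lambda V: 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

    V = -65.0                        # start at rest
    m = am(V) / (am(V) + bm(V))      # gates start at their steady states
    h = ah(V) / (ah(V) + bh(V))
    n = an(V) / (an(V) + bn(V))

    trace = []
    for _ in range(int(t_max / dt)):
        INa = gNa * m**3 * h * (V - ENa)   # sodium current
        IK = gK * n**4 * (V - EK)          # potassium current
        IL = gL * (V - EL)                 # leak current
        V += dt * (I_ext - INa - IK - IL) / Cm
        m += dt * (am(V) * (1.0 - m) - bm(V) * m)
        h += dt * (ah(V) * (1.0 - h) - bh(V) * h)
        n += dt * (an(V) * (1.0 - n) - bn(V) * n)
        trace.append(V)
    return trace

# a 10 uA/cm^2 current step drives repetitive spiking: V repeatedly crosses 0 mV
```

The m, h, and n here are exactly the activation and inactivation variables whose time constants and steady-state curves appear in the figure below.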

Time constants and steady state curves for activation and inactivation of sodium (Na) and potassium (K) channels (source)
This paper is fascinating to read because of the meticulous thought process that can be traced through it, and because of how much was not known about neurons at the time. The simple composition of the cell membrane was not clear and the fact that sodium and potassium ions actually flow in and out of channels formed by proteins was unknown.

"The next question to consider is how changes in the distribution of a charged particle might affect the ease with which sodium ions cross the membrane. Here we can do little more than reject a suggestion which formed the original basis of our experiments (Hodgkin, Huxley & Katz, 1949). According to this view, sodium ions do not cross the membrane in ionic form, but in combination with a lipoid soluble carrier which bears a large negative charge and which can combine with one sodium ion but no more. Since both combined and uncombined carrier molecules bear a negative charge they are attracted to the outside of the membrane in the resting state. Depolarization allows the carrier molecules to move, so that the sodium current increases and membrane potential is reduced. The steady state relation between sodium current and voltage could be calculated for this system and was found to agree reasonable with the observed curve at 0.2msec after the onset of a sudden depolarization. This was encouraging, but the analogy breaks down if it is pursued further. In the model the first effect of depolarization is a movement of negatively charged molecules from the outside to the inside of the membrane. This gives an initial outward current, and an inward current does not occur until combined carriers lose sodium to the internal solution and return to the outside of the membrane. In our original treatment the initial outward current was reduced to vanishingly small proportions by assuming a low density of carriers and a high rate of movement and combination. Since we now know that sodium current takes an appreciable time to reach its maximum, it is necessary to suppose that there are more carriers and that they react or move more slowly. This means that any inward current should be preceded by a large outward current. Our experiments show no sign of a component large enough to be consistent with the model. 
This invalidates the detailed mechanism assumed for the permeability change but it does not exclude the more general possibility that sodium ions cross the membrane in combination with the lipoid soluble carrier. " (Hodgkin &Huxley 1952) (emphasis mine)
They describe the ions being bound on one side of the membrane, carried through and released on the other side. If you did not have any idea about membrane channels, this would make sense. What is so beautiful about this is that their experiments and model constrain the vague theory.  However the ions get across the membrane, it must be this fast, this strong, and this dependent on temperature.
They continue:
                "A different form of hypothesis is to suppose that sodium movement depends on the distribution of charged particles which do not act as carriers in the usual sense, but which allow sodium to pass through the membrane when they occupy particular sites on the membrane. On this view the rate of movement of the activating particles determines the rate at which the sodium conductance approaches its maximum but has little effect on the magnitude of conductance. It is therefore reasonable to find that temperature has a large effect on the rate of rise of sodium conductance but a relatively small effect on its maximum value. In terms of this hypothesis one might explain the transient nature of the rise in sodium conductance by supposing that the activating particles undergo a chemical change after moving from the position which they occupy when the membrane potential is high. An alternative is to attribute the decline of sodium conductance to the relatively slow movement of another particle which blocks the flow of sodium ions when it reaches a certain position in the membrane." (Hodgkin &Huxley 1952) (emphasis mine)
Without any structural or molecular analysis of the membrane, Hodgkin and Huxley speculate that there might be sodium channels. They also discuss whether potassium has an entirely separate mechanism of membrane transport, or whether it is the same one as sodium, but switched in affinity and time course in response to membrane depolarization. Rather than quoting the entire paper here, I urge you to read it as an example of a truly beautiful train of scientific thought.

Aldous Huxley (1894-1963)

Speaking of truly beautiful trains of thought, a different Huxley, half-brother to Andrew and 23 years his senior, was a world-famous novelist. Aldous Huxley is best known for writing Brave New World, a dystopian novel about a 'perfect' future in which everyone has a place and likes it.
"Till at last the child's mind is these suggestions, and the sum of the suggestions is the child's mind. And not the child's mind only. The adult's mind too-all his life long. The mind that judges and desire and decides-made up of these suggestions. But all these suggestions are our suggestions... Suggestions from the State."
- Aldous Huxley, Brave New World, Ch. 2
Aldous Huxley was on track to become a scientist or doctor, but was struck by an illness which rendered him functionally blind for 3 years, preventing him from maintaining this course of study.

I am not sure which delights me more, that Aldous Huxley is a novelist with a scientist brother, or that Andrew Huxley is a scientist with a novelist brother.

© TheCellularScale

ResearchBlogging.org
Hodgkin AL, & Huxley AF (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117 (4), 500-44. PMID: 12991237

Tuesday, March 27, 2012

Seeing Cells: Nissl and Golgi together at last


Seeing through a glass darkly (Bob Jacobs, Colorado College)
The quest to visualize cells is a long and continuously evolving one. 

We have previously discussed how neuroscientists use calcium to watch cells in action, but a surprising amount of information can be gleaned from simply staining fixed (inactive) cells.

There are so many ways to stain fixed cells that I have to write this in parts.  First we'll discuss two of the oldest techniques still commonly used, the Nissl stain and the Golgi stain. 

One of the earliest techniques to allow for the visualization of neurons is the Nissl Stain.

Nissl stain of visual cortex (source)
The Nissl stain colors the cell (purple if you are using cresyl violet) because it reacts with nucleic acids (which make up DNA and RNA) in the nucleus of the cell and in the endoplasmic reticulum. 

The Nissl stain reacts with most of the cells in a brain slice (both neurons and glial cells), so it is not great for seeing the detailed morphology of a single neuron. However, it is great for seeing the cellular patterns of a particular brain area.  The image above clearly shows the different layers of the visual cortex.  Layer 1 has almost no cells, but layer 4 has tons. 

This technique can be used to visualize the results of a certain mutation or drug treatment on the brain. It is often also used as a control experiment to confirm that a treatment did not kill cells or damage the brain. 
(side note: If you want to do this stain yourself, numerous protocols are available online)

In contrast to the Nissl stain which stains almost all the cells, the Golgi stain impregnates only a few of them.
Golgi stain of Hippocampal neuron (Bob Jacobs, Colorado College)
The Golgi stain works by starting a silver chromate reaction in random cells.  It is not known why a certain cell would undergo the reaction while a cell right next to it would not. The result is that the morphology of the cells can be clearly seen without contamination from nearby dendrites from other cells. 

This technique can be used to test whether a mutation or drug treatment alters the growth of cell dendrites. 

(There are protocols online for this stain as well)

A group at Leicester University in the UK has developed a protocol that combines the Nissl and Golgi stains, so you can have the best of both worlds.
Pilati et al., 2008 Figure 3

Using this technique, they were able to more accurately characterize the morphology of neurons in specific areas of the brain.  They also found that the Nissl stain underestimates the soma size compared to the Golgi stain.

© TheCellularScale

ResearchBlogging.org
Pilati N, Barker M, Panteleimonitis S, Donga R, & Hamann M (2008). A rapid method combining Golgi and Nissl staining to study neuronal morphology and cytoarchitecture. Journal of Histochemistry and Cytochemistry, 56 (6), 539-50. PMID: 18285350

Tuesday, February 28, 2012

If you give a mouse a placebo...

...It might ask for some cocaine.  Or it might feel the effects of cocaine anyway. 
Just say no, Rat (source)

The "Placebo Effect" occurs when someone takes a functionally ineffectual drug but feels the effects anyway. There are many examples of this: someone in pain who takes a sugar pill but is told that it is a painkiller might report 'feeling much less pain'. A Parkinson's patient who takes a sugar pill having been told it was their 'L-dopa' medication can suddenly move more fluidly. The "Placebo Effect" is so strong that most experiments testing the effectiveness of a drug use a placebo control. The researchers want to make sure that the drug has an actual effect that is greater than the placebo effect. (It has been proposed that the effects of homeopathic remedies are entirely due to the placebo effect.)

One problem with deeply understanding the physical mechanisms which underlie the placebo effect is that all the experiments must be on humans.  You can't simply tell a mouse it's getting a 'cure' and give it a fake pill.  However, scientists at the National Institute on Drug Abuse (NIDA) have conducted an ingenious experiment that involves giving a mouse what is essentially a placebo.  Better still, they published it in PLoS One, so everyone can read the paper for free!

Before we dive into the placebo aspect of this paper, we need to back up and learn a little about how the addiction and reward system works in the brain.

In 1954, James Olds and Peter Milner published a paper showing that a rat would press a lever to receive an electrical stimulation in certain areas of its brain.
Olds and Milner, 1954 Figure 2, Xray of rat
This was a huge discovery showing that 'reward' could be activated directly. 

Later studies found that when this electrode stimulates the dopamine system of a rat, the rat will press and press and press this lever, even forgoing food when it is famished.  Incidentally, a mouse/rat will also compulsively press a lever to get injections of cocaine (which acts by stimulating the dopamine system).  You can do all sorts of experiments on drug addiction using this cocaine self-injection system. You can test how long it takes the mouse to become addicted, you can test the effect of drug concentration, you can test how other drugs interact with self-injection of cocaine, and you can even test aspects of withdrawal and relapse.

Which brings us back to our placebo. Wise et al. (2008) investigated the mechanisms underlying this self-administration. What is happening in the mouse brain when it gets a dose of cocaine? They found that when the mouse pressed the lever and got the cocaine, there was a surge of dopamine almost immediately after. There is a center in the brain called the VTA (ventral tegmental area) that contains neurons which release dopamine. When these neurons are active, other areas of the brain are flushed with dopamine and the person/rat/mouse 'feels reward'. But what makes these neurons active?

This brings up a problem we discussed a while ago, about the never-ending cycle of neuronal firing. The dopamine neurons fire, but why? What neurons are firing onto them to make them fire? And then, what neurons are making those neurons fire, and which ones are firing before that... and so forth into forever.
To go one step up this firing chain, Wise et al. cleverly sampled the 'brain juice' (extracellular fluid) in the VTA and found that when the cocaine is administered, there is a surge of glutamate there. (Glutamate excites cells, so this surge would cause the dopamine neurons of the VTA to fire and release dopamine onto other cells.)

So what does all this mean, and how does it get us to a placebo for a rat?

Here's the thing: the surge of glutamate that stimulates the VTA only shows up in mice that have already learned that a lever press gives them cocaine. (That is, the glutamate surge doesn't occur the very first time the mouse gets cocaine.)

Wise et al., 2008 (figure1B)
Here is the figure showing this glutamate surge in the VTA. The vertical dotted gray line marks when the mouse presses the lever for the cocaine. The red and yellow traces are the condition where the mouse actually gets cocaine in response to the lever press. The blue and green traces are the condition where the mouse gets saline instead (a control), and the gray trace is the first time the rat gets cocaine in response to the lever press.

Another peculiar aspect of this glutamate surge is that it is probably too fast to be a result of the cocaine acting in the brain. So how can the injection of cocaine cause a glutamate surge if the drug is not even acting on the brain yet? This is quite the puzzle. Wise et al. wanted to make sure that this glutamate surge was absolutely not due to the cocaine reaching the brain, so they invented the rat placebo! They altered the cocaine molecule so that it kept mostly the same shape as normal cocaine, except it couldn't cross the blood-brain barrier. That is, when it is injected into the mouse, it can be detected in the blood and in the peripheral body, but it won't be detected in the brain, and it cannot act directly on the neurons there.

When they run the test with this molecule injected instead of cocaine, the figure looks like this:

Wise et al., 2008 (figure1E)
Pretty similar! This altered cocaine molecule can't reach the brain, but it causes the same glutamate surge as the real stuff! This shows that the glutamate surge is somehow due to the cocaine being felt by the body, not by the brain. Although they don't call it a placebo in the paper, that is essentially what it is: it tricks the brain into thinking it has just received cocaine (in a stronger way than context alone can, as evidenced by the lack of response to saline).

So there you have it: a way to trick a rat into thinking it has just received a 'real drug' when it has actually received an ineffectual one. I think this technique could be adapted to study the physiological mechanisms governing the placebo effect.

© TheCellularScale

Wise RA, Wang B, & You ZB (2008). Cocaine serves as a peripheral interoceptive conditioned stimulus for central glutamate and dopamine release. PLoS ONE, 3(8). PMID: 18682722

Monday, February 6, 2012

the synapse: where the magic happens

What is a synapse?
The synapse is the junction between two neurons, usually between an axon, which gives the signal, and a dendrite, which receives the signal.   

This meeting of neurons is absolutely essential to how the brain works.  It is where the information gets passed on from one neuron to the next. 

The 'magic' at the synapse
When someone talks about neuronal pathways being strengthened, they usually mean a strengthening of this synaptic connection. This strengthening (or weakening) is referred to as "synaptic plasticity." Specifically, when the connection between two neurons is strengthened, it is often referred to as Long Term Potentiation (LTP), and when it is weakened, it is often called Long Term Depression (LTD). Synaptic plasticity is so exciting because it is a feasible biological mechanism for memory formation and storage.

How this 'magic' was discovered
The first paper to show that the connections between neurons could be strengthened was Bliss and Lomo (1973). They were studying the hippocampus, the region that underlies episodic memory and spatial learning.
Bliss and Lomo, 1973 Fig1a

They found that when the nerve fibers were stimulated at certain frequencies (100 Hz is now a commonly used frequency for this), the signal from the group of neurons grew and stayed large for hours. (They tracked at least one experiment for 10 hours!)

Bliss and Lomo, 1973 Fig4c

In this figure, the dots represent the size of the signal at each point in time. The arrows mark the high-frequency stimulations (here they stimulated 4 times). After each stimulation, the signal grows.
The black dots are the pathway that was stimulated, and the open circles are an unstimulated pathway that they used as a control.

The concept that activity patterns between cells could strengthen the connection between them fundamentally changed the way people thought about information processing in the brain. Now there is a huge branch of neuroscience devoted to connecting LTP and LTD to behavior and investigating the mechanisms which underlie synaptic plasticity.

In a retrospective paper, Lomo describes how the discovery came about.  I found this quote particularly interesting:
"Why did I not pursue and publish a fuller account of my findings in 1966? Because I was overcome by the complexity of the system and my lack of understanding of what was behind the findings. There was also no sense of urgency. Thus, when Tim and I published a full account in 1973 (Bliss & Lømo 1973), it still took years for the significance of the findings to be generally appreciated. "
It's hard to imagine there being 'no rush' to publish something like this, and it is refreshing to see a scientist who was hesitant to publish something he did not fully understand.


Bliss TV, & Lomo T (1973). Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. The Journal of Physiology, 232(2), 331-56. PMID: 4727084

Lømo T (2003). The discovery of long-term potentiation. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 358(1432), 617-20. PMID: 12740104