The synapse is the connection between two neurons. The pre-synaptic part is from the neuron sending a signal and the post-synaptic part is from the neuron receiving the signal.
If you want to learn about the connection between the two neurons, you want to know what is happening on both sides of the synapse. It's relatively easy to record signals from the post-synaptic side using patch clamp or sharp electrode recording, but it is much much harder (basically impossible until now) to record from the pre-synaptic side.
Synapses, the connections between neurons, can strengthen and weaken depending on the specific activity at that synapse. This is called synaptic plasticity, and we've talked about it a lot on this blog (here, here, here and here).
The strengthening and weakening of synaptic connections corresponds to the spine growing or shrinking (Matsuzaki 2007).
However, there is another kind of plasticity that can occur at synapses, called homeostatic plasticity. Instead of a synapse strengthening or weakening depending on the specific activity at that synapse, in homeostatic plasticity synapses strengthen and weaken depending on the activity of the whole cell.
To drastically simplify, each cell 'wants' to fire about a certain amount. If it suddenly starts to fire a lot less, it will take steps to strengthen its connections or make itself more 'excitable' so it can get back to its preferred amount of firing. Similarly, if the cell starts to fire a lot more than normal, it will take steps to make itself less excitable and to weaken its connections until it reaches the right amount of firing.
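The logic here is a simple negative-feedback loop, and it can be sketched in a few lines of code. Everything in this sketch (the target rate, the gain, the assumption that firing scales linearly with synaptic weight) is a toy illustration, not a model from any paper:

```python
def homeostatic_step(weight, rate, target_rate=5.0, gain=0.01):
    """Scale synaptic weight up when the cell fires below its target
    rate and down when it fires above it (negative feedback)."""
    return weight * (1 + gain * (target_rate - rate))

drive = 1.0    # input drive, suddenly reduced (e.g. inputs silenced)
weight = 1.0   # synaptic strength, free to scale up or down

for _ in range(200):
    rate = drive * weight             # toy assumption: rate ~ weight
    weight = homeostatic_step(weight, rate)

print(round(drive * weight, 1))  # 5.0 -- back at the target firing rate
```

The cell slowly scales its synapses until its firing rate returns to the set point, even though the input drive never recovers.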
A recent paper from the Pak lab explains how in some specific neurons in the hippocampus (CA3 pyramidal cells), the activity of the whole cell is strongly controlled by some very peculiar synapses. These synapses are close to the cell body, and are on these HUGE weirdly shaped spines (see above) called "Thorny Excrescences". For comparison, 'normal' spines look more like this:
The Thorny Excrescences (TEs) are massive spines that contain many separate synapses but connect to the dendrite through a single neck. 'Normal' spines, on the other hand, usually have one synapse at the spine head and also connect to the dendrite through a single neck.
The size of the TEs, and their proximity to the soma, makes them an extremely powerful way to control the signals that the soma receives. Lee et al. (2013) show that when you drastically reduce activity by blocking action potentials (using TTX), you get massive growth of these TEs, but the normal spines further away from the soma stay the same.
They test 3 things to determine whether the TEs have undergone homeostatic plasticity. They look at the morphology (they are bigger), the activity (the electrical signals from them are bigger) and the molecular signatures (the molecules indicative of new synapses are more plentiful). The paper is a really nice complete story showing that these TEs have a lot of control over the general activity of the cell.
It also solves an important problem with homeostatic plasticity. That is, how can the general activity of the cell be modulated without the specific differences between synapses being erased, and consequently the memories or pieces of information they encode? If homeostatic plasticity occurs at spines dedicated to it, then the other spines can still encode specific signals while the activity of the cell as a whole changes.
I would like to thank my good friend Anonymous for asking me a great question on a previous post.
Anonymous asks:
"Are there any known transmitters in the NS that activate both inhibitory receptor subtypes AND excitatory receptor subtypes? Or does every known transmitter activate EITHER a bunch of excitatory subtypes OR a bunch of inhibitory subtypes?"
(btw. This doesn't qualify as a LMAYQ post because it's a real true question that someone directly asked, not a search term)
While I don't know of any instances of glutamate (excitatory) activating GABA (inhibitory) receptors or of GABA activating glutamate receptors, there is an interesting little way that GABA can activate an inhibitory receptor, but actually help excite the cell.
Here's how that works: GABA(A) receptors are permeable to chloride ions, and as the picture above shows, chloride ions (Cl-) are negatively charged. When GABA binds to the receptor, the channel opens and chloride ions rush in, bringing their negative charge with them. This hyperpolarizes the cell, making its membrane potential more and more negative, which takes it further and further away from the threshold at which it will fire an action potential.
BUT... if there is a lot of chloride inside the cell already (or if the cell is resting more negatively than the chloride reversal potential), chloride will actually flow out of the cell, taking its negative charge with it. Negative ions flowing out of the cell depolarize the neuron, making its membrane potential less negative, which brings it closer and closer to the threshold at which it will fire an action potential.
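The tipping point here is the chloride reversal potential, given by the Nernst equation. A quick sketch using typical textbook concentrations (the exact values are illustrative, not taken from the paper):

```python
import math

def nernst_cl(cl_in_mM, cl_out_mM, temp_C=37.0):
    """Nernst (reversal) potential for chloride, in mV.
    Because Cl- has valence z = -1, the usual out/in ratio flips."""
    R, F = 8.314, 96485.0              # J/(mol*K), C/mol
    T = temp_C + 273.15
    return 1000 * (R * T / F) * math.log(cl_in_mM / cl_out_mM)

# Typical mature neuron: low internal chloride, so E_Cl sits below
# rest and opening GABA(A) channels lets Cl- IN (hyperpolarizing).
print(round(nernst_cl(10, 120)))   # -66 (mV)

# With elevated internal chloride, E_Cl sits above a ~-70 mV resting
# potential, so the same channels let Cl- OUT (depolarizing).
print(round(nernst_cl(30, 120)))   # -37 (mV)
```

Whether GABA(A) activation hyperpolarizes or depolarizes the cell depends entirely on where this reversal potential sits relative to the resting membrane potential.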
A paper published last year in the Journal of Neuroscience shows that in a model of a hippocampal neuron, when a strong excitatory (glutamate) stimulation happens right after a GABA stimulation close by on the dendrite, the cell is actually more likely to fire than when the glutamate stimulation occurs on its own. This effect is dependent on the location of the GABA stimulation along the dendrite.
This figure shows that a GABA stimulation (first dotted line, blue trace) can push the glutamate (excitatory) stimulation (second dotted line, red trace) up to the point of firing an action potential (green trace). This paper also showed that GABA can still inhibit the action potential in these cells; it just has to be at the soma and at almost the same time as the glutamatergic input.
Chiang et al., 2012 Figure 4G (GPSP in the soma)
So there you have it, GABA enhancing the likelihood of an action potential and acting excitatory sometimes, and acting inhibitory other times.
Chiang PH, Wu PY, Kuo TW, Liu YC, Chan CF, Chien TC, Cheng JK, Huang YY, Chiu CD, & Lien CC (2012). GABA is depolarizing in hippocampal dentate granule cells of the adolescent and adult rats. The Journal of Neuroscience, 32 (1), 62-7. PMID: 22219270
But how, when and why these neurons grow is currently under investigation. A 2008 paper attempts to answer the 'when' of neurogenesis. The authors labeled cells in the mouse hippocampus (the dentate gyrus, to be specific) with PH3, a marker of mitosis, and counted how many cells were going through mitosis at different times of day. They found that during the dark phase, more cells were PH3-positive, indicating that more cells were growing at night.
They also tested whether neurogenesis was modulated by exercise. And it was. Mice who had access to a running wheel in their cage grew about the same number of cells during the night, but grew more cells during the day. So much so that the difference between night and day disappeared.
This figure shows the light-dark cycle (Zeitgeber time) and the number of 'growing' cells. B shows the pattern for control mice, and D shows the pattern for the running mice. Notice that the y axes are scaled differently.
So exercise helped new cells grow, but without exercise more cells grew during the night time. Now all this use of the phrase 'night time' might make you think that this neural growth is happening during sleep.
After a long night of wheel running, Jasper succumbs to a restful day's sleep. (source)
But it's not. Mice are nocturnal. They sleep during the day and are wide awake at night. The paper shows that almost all the running on the running wheel happens at night, so the enhanced cell growth is happening when the mice are active. Why exercising at night would cause cells to grow during the day is an interesting question, but the authors offer no mechanism for why that might be happening.
Sometimes reading the textbook is just too hard. And sometimes it's much easier just to type your exact homework question into a search engine and find the answer. Before we get started you might want to take a look at Smith and Wren (2010) "What is Plagiarism and how can I avoid it?"
This edition of Let Me Answer Your Questions will address 'homework questions.' As always, you can find previous LMAYQ questions here.
Tough Homework Questions are for the Internet (source)
1. "cells that fire together a) wire together. b) definitely don’t wire together. c) become overheated and die. d) wire with inactive cells."
Ok, student. If you have to turn to the Internet for this multiple-choice question, you need a serious lesson in test-taking skills. Here's a tip: if you don't know the answer, eliminate the options that are obviously wrong. A good rule to follow is that if an answer says 'definitely' or 'all' or 'always,' it is unlikely to be the right one. So we can eliminate B. If you know anything about neurons from your class, you should at least know that neurons fire; that is how they 'work.' And if all the cells that fired together overheated and died, you would basically have no neurons left in your brain after reading this sentence. So you can eliminate C. Now you have a 50-50 chance of guessing correctly. Not too bad. But seriously, if one answer creates a cute little rhyme with the question... it's probably going to be that one. So yes, neurons that fire together wire together.
2. "explain simply where the hippocampus is"
The hippocampus is one of the most famous brain structures because it is involved in encoding new memories. Its name comes from the Greek hippos (horse) and kampos (sea monster), because it looks like a seahorse:
(By the way, potamus=river, so a hippopotamus is a 'river-horse'). Back to the hippocampus: It is a structure in the brain and it is located subcortically meaning under the cortex. Specifically it's located under the temporal lobe of the cortex. There are two of them, one on each side of brain.
3. "Why are neurons and blood cells structured and shaped differently from each other?"
Smith N Jr, & Wren KR (2010). Ethical and legal aspects part 2: plagiarism--"what is it and how do I avoid it?". Journal of PeriAnesthesia Nursing, 25 (5), 327-30. PMID: 20875892
1. Ketamine and the neurogenesis theory of depression. At a nice poster (324.28) R.M. Carter explained that ketamine has fast anti-depressant effects. Some people think that depression is caused by neurodegeneration and cured by neurogenesis (the growth of new neurons). So this group wanted to test whether ketamine could affect the growth of new neurons on the same timescale that ketamine affects depression (a few hours). And it does! Ketamine increased the rate that newly generated neurons in the dentate gyrus became synaptically mature. It will be interesting to see where this theory of depression goes. I would love to see a possible mechanism of action by which an NMDA antagonist could speed up neuron maturity. (Intuitively I would guess it would do the opposite.)
2. Fat and dopamine. In a nanosymposium (420.06) J. Carlin explained that rats fed a high fat diet from birth had lower dopamine than controls. This goes along with the idea that obesity can be related to a lower sensitivity to reward. The good news is that when the rats were put on a normal diet, the dopamine went back to normal. HOWEVER, this was only true for the males! The female rats did not go back to normal dopamine levels. Yikes, right? Carlin explained that maybe the females do go back to normal dopamine, but just not within the time frame that they tested. As always, more research is necessary.
Last post, we talked about the fallibility of flashbulb memories. Today we're going to discuss a new paper in which scientists claim to have created a fake memory in a mouse.
Garner et al. (2012) use the same kind of genetic trickery that Han et al. (2009) used to erase memories. They genetically modified mice to express a foreign receptor that mice don't normally express. These kinds of receptors are called DREADDs, which stands for "Designer Receptor Exclusively Activated by a Designer Drug." A DREADD can activate the cell, inactivate the cell, or even kill the cell. (Han et al. added a receptor that killed the cells, but Garner et al. add a receptor that activates the cells.)
But here's the real genetic trickery: the DREADD is expressed only in the cells that are active at a certain time. When something happens, the cells that are active during the event will express the DREADD. So later, when the designer drug is applied, only the cells that were active during the event will respond.
Using this DREADD system, Garner et al. try to trick mice into thinking that they were shocked in one room, when really they were shocked in another room. They call this 'generating a synthetic memory trace,' and this is how they do it:
Garner et al., 2012 Figure 2A
First of all, the kind of memory the authors are synthesizing is the association between a room (or context) and an electric shock. If you put a mouse in a room and then give it an electric shock, the next time it is in that same room it will 'remember' that the room is scary and will show freezing behavior. The strength of this memory is measured simply as the percentage of time the mouse spends freezing in the room.
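The measurement itself is about as simple as behavioral scoring gets. A toy sketch (the binary freezing scores below are made up, not data from the paper):

```python
def percent_freezing(samples):
    """Score each time bin as frozen (1) or moving (0) and return
    the percent of time spent freezing."""
    return 100 * sum(samples) / len(samples)

# Made-up example: 10 time bins scored from a video of the test room
samples = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(percent_freezing(samples))  # 60.0
```

A higher percentage is read as a stronger fear memory for that context.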
They have two rooms, context A (Ctx A) and context B (Ctx B). First they take the mouse and put it in context A (but don't give it an electric shock). This activates a certain subset of neurons, and so the DREADD will get expressed in those neurons. Let's call them the "Context A neurons." Then they stop the creation of new DREADDs by adding doxycycline, which turns off DREADD gene expression. This makes it so (in theory) the only cells that have DREADDs are the "Context A neurons."
Then they put the mouse in context B, but at the same time they apply the "designer drug" to activate the DREADDs. Since the DREADDs are (supposedly) only in the "Context A neurons," the neurons that the drug activates should trick the mouse into thinking it is actually in context A, when really it is in context B. Then they apply the shock to the mouse.
To see if they have 'generated a synthetic memory trace' the authors test whether the mouse freezes in context A (where it thinks it was shocked) or context B (where it was actually shocked).
Garner et al., 2012 Figure 2B&C
Unfortunately the authors don't find something simple. First of all, they find that the mice with the DREADDs (the filled black circles above) almost always freeze less than the normal control mice (grey triangles), and they don't really explain why that might be. Second of all, they find that the application of the designer drug (+CNO) increases freezing for the DREADD mice in both context A and context B.
The mouse didn't learn that Context A is where it got shocked. Instead it learned that Context B with the "Context A neurons" is where it got shocked. It's like the "Context A neurons" become part of context B.
The authors call this a 'hybrid memory trace' where the mouse learns to associate a combination of the "Context A neurons" and the actual context B environment with the shock.
So what if just adding this drug is enough to create a hybrid memory? The authors did a nice control experiment to test this. They did the exact same protocol, but put the mouse in context B every single time (never in context A). That way the neurons expressing the DREADD are the "Context B neurons" and should basically be the same set of neurons that are active anyway when the mouse is shocked in Context B. In this case, the mouse froze a lot to context B without the drug, and it froze the same amount to context B with the drug. The drug caused no enhancement when it was activating the "Context B neurons." This is strong evidence that the hybrid memory trace has to involve the activation of a new set of neurons.
This is a really nice experimental design, but I think that the authors oversold their result a little bit in the title "Generation of a Synthetic Memory Trace." They didn't create a totally fake memory, they created a hybrid memory by adding in new neurons to the 'context' that the animal associated with the shock. There is no evidence that the mouse thought it was in context A or even that having a context A is important. If they had just stimulated a random, but new, set of neurons in context B and then stimulated that same random set of neurons when testing the mouse for freezing behavior, they might have seen the same results.
Spiral stairs at the Vatican (I took this picture)
In a study out last year, Hayman et al. (2011) investigate whether the classic place cells and grid cells of the rat brain also encode vertical height.
We've discussed place cells before, so read this if you want to get back to the basics. Grid cells are a sort of extension of place cells. They are cells that fire in a regular pattern over an area while you move around in it.
The red dots represent when the neuron fires and the black line represents the path that the animal (probably a rat) was traversing. As you can see the neuron fires when the rat reaches any of the points that make up a regular grid.
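This regular firing pattern is often idealized as the sum of three cosine gratings offset by 60 degrees, which peaks on a hexagonal lattice. Here's a toy sketch of such a firing-rate map; the spacing and phase parameters are arbitrary choices for illustration, not values from the paper:

```python
import math

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Toy grid-cell firing-rate map: three cosine gratings 60 degrees
    apart sum to a function that peaks on a hexagonal lattice."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # grating wavenumber
    total = sum(
        math.cos(k * (math.cos(theta) * (x - phase[0])
                      + math.sin(theta) * (y - phase[1])))
        for theta in (0.0, math.pi / 3, 2 * math.pi / 3)
    )
    return max(0.0, total / 3)  # rectified: firing rates are non-negative

print(grid_rate(0, 0))  # 1.0 -- a vertex of the hexagonal grid
```

Evaluating this function over the whole arena reproduces the hexagonal pattern of red dots in the figure above.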
But this is just the rat crawling around on a flat surface. What happens if you have the rat move vertically? Does a vertical grid show up? Hayman et al. tested exactly that by introducing the rats to the exciting world of rock climbing.
While the rats were climbing around on this rat-sized rock wall, the cells that had fired in a grid pattern on a flat surface actually fired in a striped pattern on the pegboard.
Figure 2A Hayman et al., 2011
On the left is the cell firing like a normal grid cell on a flat surface, but on the right a grid cell (not the same one) is firing in a striped pattern on the vertical climbing wall.
The authors suggest that this might be just the normal grid showing up but extending along the vertical plane. In other words, each point of the grid includes the space directly above and directly below it and basically forms a grid of columns.
This finding could mean a number of things:
1. The brain does not encode vertical space very specifically.
2. Vertical space is encoded, just not in the hippocampus and entorhinal cortex (where place cells and grid cells reside).
3. A rat's brain doesn't encode vertical space, but maybe brains in other animals (flying animals for example) do.
In a mini review of this paper, Savelli and Knierim (2011) suggest that future experiments on flying mammals known to have grid cells (such as bats) would shed light on the third point.
Vertical grid cells in 'the flying squirrel'? (source)
I agree and I also wonder if the entorhinal cortex of humans could develop three dimensional grid cells under certain conditions. Could people who really need to know where they are in vertical space, such as trapeze artists or gymnasts, develop a more specific sense of height?
Hayman R, Verriotis MA, Jovalekic A, Fenton AA, & Jeffery KJ (2011). Anisotropic encoding of three-dimensional space by place cells and grid cells. Nature neuroscience, 14 (9), 1182-8 PMID: 21822271
Savelli F, & Knierim JJ (2011). Coming up: in search of the vertical dimension in the brain. Nature neuroscience, 14 (9), 1102-3 PMID: 21878925
Pain is usually a helpful sign that something is wrong with a part of your body. Heat-pain will cause you to pull your hand back from something hot before it burns you. The pain of a cut will draw your attention to it, so you can clean it.
However, damage to the central or peripheral nervous system can result in chronic neuropathic pain, which is not a helpful form of pain. Neuropathic pain is basically mis-firing or mis-connected pain neurons sending meaningless but persistent pain signals to the brain. And as bad as that sounds, chronic pain can apparently also wreak havoc on your brain.
A recent study by Mutso et al., (2012) shows that in both humans and experimental animals, the brain is actually re-organized in response to chronic pain. Specifically, they look at pain-related changes in the hippocampus, the part of the brain most strongly implicated in memory encoding.
They compared human patients with chronic back pain, complex regional pain syndrome, and osteoarthritis to people with no pain-related condition, and found that the patients with chronic back pain and those with complex regional pain syndrome both had reduced hippocampal volume compared with the normal control group. The osteoarthritis patients showed a trend toward reduced hippocampal volume, but the result was not statistically significant.
So what does this mean? If you have chronic pain, you have a smaller hippocampus? We've covered this kind of study before (basketball players had larger striatums than non-basketball players), but it is never really clear what the volume of a brain region tells us.
It is very difficult to draw any conclusions about the effect of pain on the hippocampus simply by learning that the hippocampi of people with chronic pain are smaller than the hippocampi of normal people.
Luckily the study did not end there. Mutso et al. also investigated the effects of chronic pain on the cellular level.
They found that in mice with chronic pain, the hippocampus has fewer 'new' cells. By staining for two specific markers DCX and BrdU, you can tell which neurons are new. The hippocampi of control (normal) mice had around 40 new cells, while the chronic pain mice had only 14. This is an indication that neurogenesis is much reduced in response to chronic pain, and suggests that the reduction in hippocampal volume could be related to fewer new neurons being generated (though it does not show this conclusively).
Unfortunately, chronic pain is bad for your hippocampus, and cures for both the pain and the collateral brain re-organization are still elusive.
Mutso AA, Radzicki D, Baliki MN, Huang L, Banisadr G, Centeno MV, Radulovic J, Martina M, Miller RJ, & Apkarian AV (2012). Abnormalities in hippocampal functioning with persistent pain. The Journal of Neuroscience, 32 (17), 5747-56. PMID: 22539837
They can use all sorts of specialty inks. So if you want a set of shirts for your lab with a gfp-style glow in the dark pyramidal neuron on it, they can do it. If you want a shirt with a brain on it that says "neuroscience"...done! They made me one (see above), and you could contact them to order one yourself if you want. They can even do high-resolution color blending, so if you've always wanted some brainbow shirts, they could do that too.
A few readers were kind enough to take the online typing tests that I linked to and report their results. Unfortunately there are too few Dvorak users out there, so no new results from them. However, the Qwerty users had some seriously fast fingers, so I had to change the scale of the graph!
This piqued my curiosity. I wanted to know how fast the FASTEST typists could type, and I also wanted to see them in action. So for your viewing pleasure, here is Sean Wrona winning the 2010 SXSW typing championship at 163 wpm. (I gather from the Internet that he can actually type as fast as 237 wpm.)
He types in Qwerty, and since he could apparently type 80 wpm at age 6, I imagine the keyboard format is pretty ingrained in his brain.
Which brings me to another point: I would like to look at this guy's brain.
But what exactly would I be looking for?
Consistently 'exercising' a part of the brain can result in visible structural changes there. A classic example of this is the taxi drivers who show navigation based changes in hippocampal structure. (Maguire et al., 2000)
The hippocampus is cool and all, but I wouldn't expect to see typing-dependent changes there. It traditionally has much more to do with episodic memories (yes, Proust), and spatial navigation (yes, Place Cells).
striatum is the striped area
The brain structure that I might expect to be affected by extreme typing expertise is called the striatum (a part of the basal ganglia). While it receives less attention than the hippocampus and amygdala, the striatum is a fascinating structure crucial to forming habits, addiction, and learning motor skills. Playing the piano, kicking the soccer ball, typing, and almost anything that people refer to as 'muscle memory' is a motor sequence learned with the help of the striatum.
A recent study from Korea compared the size of the striatum in basketball players (6 hours of practice a day) to the size of the striatum in non-athletes matched for height and weight. Park et al. (2011) found that both the absolute and relative sizes of the striatums (in both hemispheres) were larger for the basketball group than for the non-athlete group. They give a few reasons why this might be so (more cells, more blood flow to that region, etc.), but nothing conclusive.
While this is an interesting study, it is very limited. The question remains: What aspect of basketball (if any) is causing this structural difference? And there are many possibilities:
1. People who have bigger striatums to begin with are more likely to play basketball, and the 'structural change' is not due to the basketball playing at all.
2. Exercise itself causes striatal enlargement.
3. The teamwork and interactions causes striatal enlargement.
4. Hand-eye coordination causes it.
5. Learning the game of basketball causes it.
And so on...
I would like to see a more thorough experiment, using something as simple as this foursquare design:
If both the basketball players and the runners have larger striatums, exercise would be implicated. If both the basketball players and the piano players have larger striatums, then the skill learning would be implicated. If all three groups have larger striatums than the couch potatoes, that could be a sign that being a couch potato is pretty bad for your brain.
So getting back to the expert typist scenario, which box does the typist fit in? It is clearly not exercise, but it is not exactly equivalent to piano playing either. The piano player learns new songs on a regular basis, while the typist doesn't learn new paragraphs or new sentences in the same way. If it is the skill learning that 'grows' the striatum, then typing practice might not do anything past a certain point. It might take learning a new keyboard style to stimulate 'skill-learning' based growth.
In conclusion, the striatum (and possibly the cerebellum) might be an interesting place to look for brain changes in typing experts. However, the particular skill of typing fast is not necessarily the most likely skill to cause changes in the striatum.
Park IS, Lee KJ, Han JW, Lee NJ, Lee WT, Park KA, & Rhyu IJ (2011). Basketball training increases striatum volume. Human movement science, 30 (1), 56-62 PMID: 21030099
Maguire EA, Gadian DG, Johnsrude IS, Good CD, Ashburner J, Frackowiak RS, & Frith CD (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America, 97 (8), 4398-403 PMID: 10716738
A place cell fires in one particular spot
(source)
Place cells are neurons in the hippocampus that fire when an animal is in a particular location. Like many other cases where a neuron activates in response to something specific, the question everyone wants to answer is 'why does the neuron fire at that particular spot?' A study published 1 year ago today used a quite difficult technique and a combination of patience and extreme persistence to look more deeply into the intracellular properties of individual place cells.
Previously people have studied place cells using a technique called 'extracellular recording.' This technique involves implanting a recording electrode into the hippocampus of a rat, mouse, or bat (sometimes a human, if the electrode is being implanted for health reasons). This recording electrode can tell when a neuron close to it spikes (i.e. fires an action potential), and the time of the spike can be matched to a video recording of the animal moving around in space. The above image represents a top-down view of a square box where the rat was allowed to run around freely. The black line is where the rat moved during the recording and the red dots indicate where the rat was each time a specific neuron fired. You can see that this particular neuron fired only when the rat was in a certain area.
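The core of this analysis is just matching each spike time against the tracked trajectory. A minimal sketch, with a made-up trajectory and made-up spike times (not data from any of the papers discussed here):

```python
import bisect

def spike_positions(track_times, track_xy, spike_times):
    """For each spike, look up the most recent tracked position
    at or before the spike time."""
    positions = []
    for t in spike_times:
        i = bisect.bisect_right(track_times, t) - 1
        if i >= 0:
            positions.append(track_xy[i])
    return positions

track_times = [0.0, 0.5, 1.0, 1.5, 2.0]              # seconds
track_xy = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]  # tracked path
spikes = [0.6, 1.2, 1.3]                             # spike times

print(spike_positions(track_times, track_xy, spikes))
# [(1, 0), (1, 1), (1, 1)] -> this cell fires near (1, 0)-(1, 1)
```

Plotting these positions as red dots on the animal's path gives exactly the kind of place-field map shown above.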
Place cell recording set up (Rotenberg et al., 1996)
Extracellular recording has been used extensively to investigate how place cells develop, adapt to new environments, and even how they are remembered. However, this technique can only show when a neuron spikes. It can't reveal any information about intracellular characteristics.
Epsztein et al. (2011) use a new technique to investigate what is happening inside a place cell. The technique they use is called whole cell patch clamp. In whole cell patch clamp, a glass micro-electrode, which is filled with a salt solution similar to that found inside actual neurons, is lowered so that it is right next to the surface of the cell (the opening of the glass micro-electrode is smaller than the cell body). The cell membrane forms a seal around the tip of the micro-electrode, and then brief suction is applied to break a hole into the cell. Once the hole is made, the electrical signal of the neuron can be measured through the micro-electrode.
This is a difficult technique because any slight movement of either the cell or the glass micro-electrode could break the seal and sever the connection. This technique is commonly used in slices of brain or in cultured brain cells and is done on a vibration isolation table to prevent jostling of the cell and micro-electrode. I am very familiar with this technique and its difficulties, so I am beyond impressed that Epsztein et al. were able to use this technique in a moving rat!
Epsztein et al., 2011 Fig 5
While the use of this technique in freely moving rats is difficult, the findings are certainly interesting enough to justify the effort.
The authors found that before the rat was put in the maze, the cells that turned out to be place cells were physiologically different than the cells that turned out not to be place cells (so called silent cells). Specifically the future place cells spiked in a more 'bursty' pattern (see image), while the future silent cells spiked in a more 'regular' pattern.
Previous theories about how place cells are generated mostly focused on what inputs the cells receive, not their intrinsic properties. What makes this finding so fascinating is that the intrinsic cellular properties which govern the spiking pattern of a cell actually predict whether it will be a place cell or not. The inputs onto these cells may be important for organizing which cells fire at each particular place, but a cell must have certain intrinsic qualities to become a place cell in the first place. In the authors' words:
"Therefore what intrinsic factors may predetermine is the restricted subset of cells that could potentially have place fields. Moreover, among the set of possible place cells, the relative locations of their place fields also appear to be predetermined."
One big issue that the authors bring up in their discussion is that of 're-mapping.' Place cells are specific to the environment that the rat is in. When the rat is moved to a new environment, it forms new place fields with new cells (though some overlap). The important thing is that sometimes cells will be silent in one environment and have place fields in a different environment. It's really not clear whether these cells can modulate their intrinsic properties fast enough to 'become' place cells from silent cells, or whether there are some cells that are never going to be place cells no matter what environment they are put in. Because this technique is so difficult, these questions are not likely to be clarified very soon. But, at least now we know that we should be asking them.
Epsztein J, Brecht M, & Lee AK (2011). Intracellular determinants of hippocampal CA1 place and silent cell activity in a novel environment. Neuron, 70 (1), 109-20 PMID: 21482360