When my advisor told me that it takes a semester just to graduate, she seriously wasn't kidding.
Here's the math:
To graduate this December, I had to have everything signed and turned in on December 6th. Doesn't sound so bad, right?
But that basically means I have to defend in November.
And my college/department has a mandatory pre-defense which must be a MONTH before the real defense.
/*Note on the pre-defense: I am not sure how many universities require a pre-defense. It has some pros and cons.
Pros: you have everything in a state of readiness a month before you really need to, and if anything is glaringly horrible and you might not graduate because of it, you find that out before your defense and likely before you tell everyone you are going to graduate. The pre-defense is private, so your presentation is critiqued and likely better for the public real defense, which is to everyone's benefit.
Cons: you have to have everything ready a month earlier than you really need to. Your committee might use it as an excuse to tell you to do extra things because you have a month. And your committee has to sit through basically the same talk twice, which is sort of a waste of their time.
end of note*/
Thus the pre-defense must occur in October.
And your committee needs to read the dissertation before you pre-defend it, so you really need to give it to them two weeks before the pre-defense.
This means that to graduate in December, you basically need your dissertation in a state of readability and relative completeness by the end of September! And since the semester starts at the beginning of September, you have essentially 4 weeks of the fall semester to work on your dissertation.
And the same goes for a spring graduation.
If you want to graduate in May, you defend in April, pre-defend in March, and have everything turned in by the end of February.
Plan accordingly.
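The backwards arithmetic works out like this in a few lines of Python (the dates are examples, not anyone's official deadlines — check your own graduate school's calendar):

```python
from datetime import date, timedelta

# Example dates only -- your university's deadlines will differ.
paperwork_due = date(2013, 12, 6)                # signed paperwork due
defense = date(2013, 11, 8)                      # defend some Friday in November
pre_defense = defense - timedelta(weeks=4)       # mandatory pre-defense a month earlier
to_committee = pre_defense - timedelta(weeks=2)  # committee needs two weeks to read

print(to_committee)  # 2013-09-27: end of September, as promised
```

Plug in your own target graduation date and watch most of the semester disappear.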
© TheCellularScale
Monday, December 9, 2013
Thursday, November 7, 2013
Official SfN Neurobloggers 2013
Due to starting a major simulation this summer, I am not going to the annual Society for Neuroscience meeting in San Diego this year, and therefore I won't be neuroblogging it like I did last year.
I look forward to reading the posts and tweets from the official neurobloggers this year.
Here they are:
From Brains to Beyonce by @Spork15
House of Mind by @houseofmind
Neuron Physics by @Eric_Melonakos
NeuroscienceDC by @NeuroscienceDC
Neurolore by @TheMrsZam
NeuroCultureBlog by @LaSaks87
Churchland lab by @anne_churchland
Dormivigilia by @beastlyvaulter
Neurorexia by @ShellyFan
On Psychology and Neuroscience by @astroglia
Más Ciencia por México by @mrenteria_
Imagining Science by @DrImmySmith
Corona Radiata by @JohnKubie
Follow the #SfN13 hashtag on Twitter to find all the unofficial coverage of the conference.
© TheCellularScale
Thursday, September 12, 2013
Use Imposter Syndrome to become an excellent grad student
Let's talk about Aristotle for a minute.
School of Athens. Aristotle is the one in blue.
Many people mis-attribute this quote to him:
"We are what we repeatedly do. Excellence therefore is not an act, but a habit." -Will Durant
But really this quote is from someone summarizing Aristotle. It's a great summary and it seems to say what Aristotle means, just more concisely.
Aristotle does say:
"For these reasons the virtues are not capacities either; for we are neither called good nor called bad, nor are we praised or blamed, insofar as we are simply capable of feelings. Further, while we have capacities by nature, we do not become good or bad by nature." -Nicomachean Ethics, Book II, 5.5
Ok, so what does this have to do with grad school?
Well lots of people are starting grad school right now with lots of potential. Tons of potential probably, it's what got them into grad school in the first place.
But here's the thing, your potential doesn't mean anything unless you live up to it (or at least come close). Basically Aristotle says that your feelings and intentions and capabilities do not make you excellent, your actions do.
The real lesson here is that you ARE what you DO. If you want to be a good person, think 'what would a good person do in this situation?' and then do that thing. Simple, really. So in grad school this translates to:
Make Imposter Syndrome work in your favor.
Imposter Syndrome is when someone thinks 'I'm not good enough to be where I am, and I'm just minutes away from the moment my colleagues find out' and it is apparently a plague of many grad students and there are plenty of blog posts around on how to combat it.
But guess what? Playing dress-up can make you smarter. People wearing a white coat described as a lab coat did better on attention tasks than people wearing the same white coat described as a painter's coat (Adam and Galinsky 2012). These are the same researchers who did the perspective-taking experiments showing that when you pretend to be something, you become more like it. (See item number 4 on this post.)
Pretending to be what you want to be is actually a completely valid and useful way to become what you want to be. This doesn't mean go into class and pretend you are the professor (that's not a good idea). It means go into class and pretend you are the BEST student in that class.
So go put on those 'smart person clothes' and make believe that you are the best student that school has ever seen. If you run into a dilemma think to yourself 'what would an excellent grad student do in this situation?' or better yet think 'what would an excellent scientist do in this situation?' and then do that thing.
© TheCellularScale
Adam H, & Galinsky AD (2012). Enclothed cognition. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2012.02.008
Tuesday, August 27, 2013
Philosophy of Computational Neuroscience
Just like experimental neuroscience, computational neuroscience can be done well or poorly.
This post was motivated by Janet Stemwedel's recent post in Adventures in Ethics and Science about the philosophy of computational neuroscience. There seem to be three views of the use of computational models in biology and neuroscience:
1. All models are bullshit.
2. Models rely on MATH, so of course they are right.
3. Some models are good and some are bad.
Obviously the first two are extremes and usually posited by people who don't know anything about computational neuroscience, and I am clearly advocating the third view. The only problem is that it is hard to tell if a model is good or bad unless you know a lot about it.
So here are some general principles that can help you divide the good and the bad in computational neuroscience.
1. The authors use the correct level of detail.
If you are trying to test how brain regions interact with each other, you don't need to model every single cell in each region, but you need to have enough detail to differentiate the brain regions from one another. Similarly, if you are trying to test how molecules diffuse within a dendrite, you don't need to model a whole cell, but you need to have enough detail to differentiate one molecule type from another. If you are trying to test how a cell processes information, you need to have a cell, as you may have learned in how to build a neuron. Basically a model can be bad simply because it is applied to the wrong question.
2. The authors tune and validate their model using separate data.
When you are making a model you tune it to fit data. For example, in a computational model of a neuron you want to make sure your particular composition of channels produces the right spiking pattern. However, you also want to validate it against data. So how is tuning different from validating? Tuning is when you change the parameters of the model to make it match data. Validating is when you check the tuned model to see if it matches data. Good practice in computational neuroscience is to tune your model to one set of data, but to validate it against a different set of data.
For example, if a cell does X and Y, you can tune your model to effect X, but then check to see that the parameters that make it do X also make it do Y. Sometimes this is not possible. Maybe there is not enough experimental data out there. But if it is not possible, you should at least test the robustness of your model (see point 3).
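Here is a toy sketch of the tune-then-validate split, using a made-up linear current-to-firing-rate model; every name and number here is illustrative, not from any real neuron:

```python
# Toy "neuron model": predicted firing rate (Hz) for an injected current.
def firing_rate(current, gain, threshold):
    return max(0.0, gain * (current - threshold))

# Tuning data: (injected current, observed rate) pairs -- made-up numbers.
tune_data = [(100, 5.0), (150, 15.0), (200, 25.0)]
# Validation data: a *different* set of recordings, never used for tuning.
validate_data = [(125, 10.0), (175, 20.0)]

def error(data, gain, threshold):
    return sum((firing_rate(i, gain, threshold) - r) ** 2 for i, r in data)

# Crude grid-search tuning on the tuning set only.
best = min(((g / 100, t) for g in range(1, 50) for t in range(0, 100, 5)),
           key=lambda p: error(tune_data, *p))

# Validation: check the tuned parameters against the held-out data.
print(best, error(validate_data, *best))
```

The tuned parameters fit the tuning set, and the validation error then tells you whether they generalize to recordings the tuning never saw.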
3. The authors test the robustness of their model.
One problem with computational models is that the specific set of parameters you've found by tuning the model might not be the 'right ones.' In fact, they probably aren't the right ones. There are many different sets of parameters that can make a neuron spike slowly, for example, and the chance that you hit on exactly the correct combination is very low. But that doesn't mean the model is not useful. You can still use the model to test effects that are not strongly altered by small changes in these parameters. So you need to test whether the specific effect you are testing is robust to parameter variation. If you are testing effect Q, you can increase the sodium channel density by 10%, or the network size by 20%, and see if you still get effect Q. In other words, is 'effect Q' robust to changes in sodium channels or network size? If it is, then great! Your effect is not some weird fluke due to the exact combination of parameters that you have used.
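A minimal sketch of such a robustness check, with a stand-in function in place of a real simulation (the parameters and the effect are entirely made up):

```python
import random

# Toy "model": does it produce the effect of interest (here: sustained
# firing above 10 Hz) for a given parameter set? Entirely illustrative.
def shows_effect(na_density, network_size):
    rate = 0.05 * na_density + 0.002 * network_size  # stand-in for a simulation
    return rate > 10.0

baseline = {"na_density": 300.0, "network_size": 1000.0}
assert shows_effect(**baseline)  # effect Q present at the tuned point

# Robustness check: jitter each parameter by up to +/-10% and count how
# often the effect survives the perturbation.
random.seed(0)
trials = 100
survived = 0
for _ in range(trials):
    perturbed = {k: v * random.uniform(0.9, 1.1) for k, v in baseline.items()}
    survived += shows_effect(**perturbed)

print(f"effect Q survived in {survived}/{trials} perturbed models")
```

If the effect vanishes in a large fraction of the perturbed models, it was probably a fluke of the exact tuned parameters rather than a real property of the model.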
These are the main things I try to pay attention to, but I am sure there are other important things to keep in mind when making models and reading about them. What are your thoughts?
© TheCellularScale
Tuesday, July 30, 2013
Treatise on the Diseases of Females: Pregnancy in the 1800s
While looking through some seriously old books, I came across a medical treatise from 1853. Now this would be fascinating on its own, but even better, it's a treatise specifically about the "diseases of females" written by William P. Dewees, M.D.
Having recently been pregnant, I was particularly interested in the 1800s recommendations for pregnancy.
William Dewees (from Wikipedia).
Dewees starts out his chapter on pregnancy by explaining why it is important to scientifically determine whether a woman is pregnant or not. The reasons are essentially as follows:
1. So if the woman needs to be treated for some other disease, she doesn't get prescribed something that would hurt her or the baby if pregnant.
2. Because if she is under trial or awaiting execution, pregnancy might forestall it.
3. If the predicted date of birth might influence the 'character or property' of someone else.
So yes, clearly it is important to know if a woman is pregnant.
So how do you tell in the 1800s, when no pee-sticks with plus signs were available? Not surprisingly, the first way is 'she doesn't have her period.' However, there is clearly some debate in the field at this time.
Other things can 'suppress the menses' and sometimes a woman can bleed while pregnant.
Dewees spends excessive words and semi-colons defending his position on the subject:
"In declaring that women may menstruate after impregnation, I have no favourite hypothesis to support; nor am I influenced by any affectation or vanity to differ from others; neither do I believe I am more than ordinarily prone to be captivated or misled by the marvellous; for I soberly and honestly believe what I say, and pledge myself for the fidelity of the relation of the cases I adduce in support of my position." *
So you need some other signs of pregnancy besides just not menstruating. Next up: nausea and vomiting. Though "far from certain" as a sign of pregnancy, in conjunction with other signs, it is 'added proof.'
Another sign is the enlargement of the sebaceous glands (which are on the areolae around the nipple), and the formation of milk. But milk coming in is also not certain:
"I once knew a considerable quantity of milk form in the breasts of a lady, who though she had been married a number of years had never been pregnant; but who at this time had been two years separated from her husband. She mentioned the fact of her having milk to a female friend, who from an impression that it augured pregnancy, told it to another friend, as a great secret; and thus, after having enlisted fifteen or twenty to help them keep the secret, it got to the ears of the lady's brother. Her surprise was only equaled by his rage; and, in a paroxysm, he accused his sister, in the most violent and indelicate terms, of incontinency, and menaced her with the most direful vengeance." *
It turns out the lady was not pregnant, but was sick with 'phthisis pulmonalis.'
So finally, the surest signs of pregnancy are the enlargement of the uterus and abdomen, and feeling the baby move ("quickening").
(also mentioned are the 'pouting of the navel' and the 'spitting of frothy saliva')
*All quotes from Treatise on the Diseases of Females by William P. Dewees
© TheCellularScale
For more on historical pregnancy medicine, see some great posts from Tea in a Teacup.
Sunday, July 7, 2013
Male DNA in the Female Brain
When you are pregnant, people like to tell you all sorts of things about yourself.
"you are going to have a boy/girl"
"you are carrying high/low"
"you look like an olive on a toothpick/beached whale"
"you probably have some of your husband's DNA/baby's cells in your brain now."
huh?
That last one requires a little more explanation. How could foreign cells get into my brain? First of all, there is the blood-brain barrier, which keeps your own blood cells from mixing in with your neurons; second, there is the placental barrier, which keeps your blood from mixing with the baby's blood.
Neither of these barriers is perfect. Certain drugs and chemicals can cross the blood-brain barrier, and drugs and chemicals that a pregnant woman ingests can cross the placental barrier to get to the baby. But are these barriers so leaky that whole cells can get through?
Apparently they are. Dawe et al., 2007 explains possible ways that this can happen.
The placenta develops with the fetus, and so it is a hotbed of new growing cells early in pregnancy. It is made up of a combination of cells that contain the mother's DNA and cells that contain the new baby's DNA. However it is not clear exactly how baby cells get transferred to the mom. In the author's words:
"The mechanism by which cells are exchanged across the placental barrier is unclear. Possible explanations include deportation of trophoblasts, microtraumatic rupture of the placental blood channels or that specific cell types are capable of adhesion to the trophoblasts of the walls of the fetal blood channels and migration through the placental barrier created by the trophoblasts." (Dawe et al., 2007)
The placenta, up close (Dawe et al., 2007, Figure 1).
It is also not clear how these baby cells, once in the mother, could cross the blood-brain barrier. In fact, it is not perfectly clear (as of this 2007 paper) that these cells do get into the mother's brain in humans, though studies have shown fetal DNA-containing cells in the brains of mice.
So in conclusion, if you have ever been pregnant, you probably still have some of that baby's DNA (and consequently some of the baby's father's DNA) in your body. If you were pregnant with a boy, then you probably have Y chromosomes in some of your cells! It even seems that mothers can transfer cells from previous babies into future babies. This means that if you have an older brother or sister, you might have some of their DNA in your body as well.
The next question is: Do these foreign DNA cells have a meaningful impact on your body?
© TheCellularScale
Dawe GS, Tan XW, & Xiao ZC (2007). Cell migration from baby to mother. Cell Adhesion & Migration, 1 (1), 19-27. PMID: 19262088
Monday, June 10, 2013
The Ultimate Simulation
You may have noticed that things have slowed down here at The Cellular Scale.
The reason is that I have been really busy making the ultimate simulation of a human brain. I've worked on this day and night for the past 8 months. It is exhausting work and is starting to take all my energy now that it is nearly complete.
Pretty soon I will have to push this simulation out my uterus, and then all my effort will be spent on simulation support and maintenance. I may write some posts, but they won't be appearing regularly for a while.
© TheCellularScale
Monday, May 27, 2013
What is an Experiment?
What is an experiment?
People use the term 'experiment' to mean a lot of things. One may say "she experimented with drugs" or "she performs experimental music". Someone might 'experiment with one's hair' or 'perform a thought experiment'.
All of these uses of the word experiment have distinct connotations, but most of them essentially mean 'to try something and see what happens'. In the examples above, most of the phrases also imply that the experiment is something new. If she experiments with her hair, she's probably trying some new style and seeing if she likes it. If she performs experimental music, she's probably not following the conventional rules for the music she is playing.
Are these kinds of 'experiments' different from the experiments that scientists do? Well, yes and no. The basic definition of an experiment as 'trying something [often new] and seeing what happens' is pretty much what scientists do. So what's different? Why isn't someone's 'hair experiment' publishable in a scientific journal?
Mythbusters would have you believe that the only difference between science and screwing around is writing it down:
And that is sort of true.
But what really makes a scientific experiment scientific is controls. In our hair example, you can experiment with your hair by dyeing it black and seeing if you like it. But that's not a scientific experiment. To be scientific, you would have to decide how to measure how much you like your new hair color. You could do this by filling out a survey each day, asking how many times you thought you were pretty, or rating your confidence on a scale of 1-10. You could fill this survey out for a week, then dye your hair and fill the survey out for another week. You could then compare the scores and decide if the new black hair had a 'significant' impact on your self-image.
Let's say it does impact your self-image and you report higher self-confidence that second week. But what if you feel different just because you have a new hair color, not because you have black hair?
Well, you would want to do a control experiment, which controls for the newness of the hair color. You could control for novelty by dyeing your hair yet another color and taking the survey for another week. Or you could take the survey two months after you dyed your hair black to see if you still report higher confidence, or if your confidence has dropped back down to normal.
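With made-up survey numbers, the before/after/control comparison might be sketched like this:

```python
from statistics import mean

# Made-up daily confidence scores (1-10) for the hair-dye "experiment".
baseline_week   = [5, 6, 5, 6, 5, 6, 5]   # before dyeing
black_hair_week = [7, 8, 7, 8, 7, 8, 7]   # week after dyeing black
control_week    = [7, 8, 7, 7, 8, 7, 8]   # week after dyeing a *different* new color

effect_of_change  = mean(black_hair_week) - mean(baseline_week)
effect_vs_control = mean(black_hair_week) - mean(control_week)

print(effect_of_change)    # big: *something* about new hair raised confidence
print(effect_vs_control)   # near zero: it was novelty, not blackness specifically
```

In these (invented) numbers, confidence goes up after any new color, so the control condition reveals that the effect is novelty, not black hair itself.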
This is not a perfect experiment by any means, it's not even a clever or well-designed one, but it is somewhat scientific. And it illustrates what I think is the most important difference between experimenting as in trying something new and experimenting as in trying to find something out:
The control group
In addition, here is a great example of how important the control group is in science. (See the epilogue)
© TheCellularScale
Saturday, May 18, 2013
Homeostatic plasticity in a thorny situation
Synapses, the connections between neurons, can strengthen and weaken depending on the specific activity at each synapse. This is called synaptic plasticity, and we've talked about it a lot on this blog (here, here, here and here).
The strengthening and weakening of synaptic connections corresponds to the spine growing or shrinking (Matsuzaki 2007).
However, there is another kind of plasticity that can occur at synapses, called homeostatic plasticity. Instead of a synapse strengthening or weakening depending on the specific activity at that synapse, in homeostatic plasticity synapses strengthen and weaken depending on the activity of the whole cell.
To drastically simplify: each cell 'wants' to fire about a certain amount. If it suddenly starts to fire a lot less, it will take steps to strengthen its connections or make itself more 'excitable' so it can get back to its preferred amount of firing. Similarly, if the cell starts to fire a lot more than normal, it will take steps to make itself less excitable and to weaken its connections until it reaches the right amount of firing.
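A minimal sketch of that idea, assuming a toy cell whose firing rate is just a scaled sum of its synaptic weights (nothing here comes from the paper):

```python
# Toy homeostatic scaling: the cell nudges all its synaptic weights to
# pull its firing rate back toward a target. Numbers are illustrative.
target_rate = 5.0                   # Hz the cell "wants" to fire
weights = [0.8, 1.2, 0.5, 1.0]     # synaptic weights

def firing_rate(weights, drive=2.0):
    return drive * sum(weights)     # stand-in for the cell's activity

for step in range(200):
    rate = firing_rate(weights)
    # Multiplicative scaling preserves the *relative* strengths of the
    # synapses, so synapse-specific differences are not erased.
    scale = 1 + 0.05 * (target_rate - rate) / target_rate
    weights = [w * scale for w in weights]

print(round(firing_rate(weights), 2))   # settles near the target rate
```

Because every weight is multiplied by the same factor, the cell's overall rate drifts back to target while the ratios between synapses stay fixed, which is exactly the tension homeostatic plasticity has to resolve.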
Thorny Excrescences from Lee et al. (2013)
Spines from Lee et al. (2013)
The size of these TEs (thorny excrescences, the giant spine-like structures shown above) and their proximity to the soma make them an extremely powerful way to control the signals that the soma receives. Lee et al. (2013) show that when you drastically reduce activity by blocking action potentials (using TTX), you get massive growth of these TEs, but the normal spines further away from the soma stay the same.
They test 3 things to determine whether the TEs have undergone homeostatic plasticity. They look at the morphology (they are bigger), the activity (the electrical signals from them are bigger) and the molecular signatures (the molecules indicative of new synapses are more plentiful). The paper is a really nice complete story showing that these TEs have a lot of control over the general activity of the cell.
It also solves an important problem with homeostatic plasticity: how can the general activity of the cell be modulated without erasing the specific differences between synapses, and consequently the memories or pieces of information they encode? If homeostatic plasticity occurs at spines dedicated to it, then the other spines can still encode specific signals while the activity of the cell as a whole changes.
© TheCellularScale
Lee KJ, Queenan BN, Rozeboom AM, Bellmore R, Lim ST, Vicini S, & Pak DT (2013). Mossy fiber-CA3 synapses mediate homeostatic plasticity in mature hippocampal neurons. Neuron, 77 (1), 99-114 PMID: 23312519
Sunday, May 12, 2013
The Inadvertent Psychological Experiment
Escape from Camp 14 is deeply disturbing, and I highly recommend it.
Escape from Camp 14 by Blaine Harden
Escape from Camp 14 reveals the obscene violations of human rights that occur in North Korean prison camps, and it was especially poignant because I am a similar age to Shin Dong-hyuk and could directly compare my memories of the specified years to his. For example, he escaped on January 2nd, 2005, and I couldn't help but think of the New Year's party I was at that year and how absurdly different my life has been from his.
This book struck me in a way that reading about the horrors of the Holocaust never could. Those atrocities happened long before I was born. But the atrocities in North Korea are happening right now. I mean right this minute in a prison camp, a child is likely being beaten, a woman is likely being raped by a guard (later to be killed if she happens to become pregnant), someone may be picking undigested corn kernels from cow dung to ease hir starving belly, and maybe two lucky prisoners are getting to have 'reward breeding' time. Right now. This minute. That is just nuts.
The other thing that struck me about this whole situation is that having children born into a hostile prison environment is an inadvertent psychological experiment. These children are raised without love and without trust. One of the sharpest points in the book is the reveal that Shin Dong-hyuk turned his own mother and brother in to the guards for planning an escape. He watched his mother's execution shortly thereafter and felt nothing but anger at her for planning an escape.
When he finally escaped, it was shocking to him to see people talking and laughing together without guards coming over to (violently) stop it. In Camp 14, gathering of more than 2 people was forbidden. These prison children are being raised on fear of the guards and suspicion of each other. One of the easiest ways to be rewarded is to tattle on another prisoner for something (stealing food, for example), and the children learn this quickly.
If something drastic happens and North Korea dissolves, these children raised in prison camps will have a near-impossible time adjusting to a life of freedom, forming attachments, and trusting others (as seen in Shin Dong-hyuk and other refugees from North Korea). Their personalities and psychological profiles could be fundamentally different from any other group on earth. These atrocities should be stopped, and these people should be studied and rehabilitated.
© TheCellularScale
Lee YM, Shin OJ, & Lim MH (2012). The psychological problems of north korean adolescent refugees living in South Korea. Psychiatry investigation, 9 (3), 217-22 PMID: 22993519
Monday, May 6, 2013
Everyone should learn everything.
Today I am getting on a bit of a soapbox about things. Specifically about things scientists should learn.
In an ideal world everyone would be good at everything, but as you have probably noticed this is NOT the case. Some people are good at lots of things and some people are really good at specific things, but terrible at others, and some unfortunate people are generally bad at a lot of things and mediocre at a few.
Scientists should learn everything (source)
Recently, I've been hearing increasing noise for scientists (or scientists-in-training) to learn X, whatever X is. "Scientists should learn art"; "scientists should learn creative writing"; "scientists should learn how to communicate to the public more clearly"; "scientists should learn managerial skills"; and so forth.
This bothers me for a couple of reasons.
1. Why should the scientists learn all this stuff? Why aren't people clamoring for artists to learn microbiology, or for novelists to brush up on their molecular genetics?
and
2. What is wrong with some people being good at science and NOT being good at much else?
Yes, if waving a magic wand could suddenly make scientists good communicators, artists, and managers, I wouldn't object. But these things (like science itself) take training. And god knows, graduate students already get a lot of training.
And yes, running a lab takes managerial skills, and grant writing requires clear communication and storytelling skills. But instead of requiring one person to be good at all these things, why not divide up the labor a little and have a 'lab manager' help run the lab and a 'departmental grants guru' help polish the grants?
It is really easy to say 'scientists should learn X' because...
1. there is a perception that scientists are smart and can learn things easily
and
2. it is impossible to argue that things wouldn't be better if scientists were good at X. (Wouldn't it be great if all scientists were excellent public speakers? Yes, of course.)
The problem is implementing the extensive training in X that a scientist should have, and deciding what current training it should replace. Therefore I propose that all 'scientists should learn X' statements be adjusted to 'scientists should get extensive training in X rather than Y'.
© TheCellularScale
Thursday, May 2, 2013
a STORM inside a cell
We've been talking about some of the most cutting edge intracellular visualization techniques lately. Array tomography and Serial block-face electron microscopy have been featured. Today we'll talk about STORM imaging.
STORM imaging (Xu et al., 2013)
STORM stands for Stochastic Optical Reconstruction Microscopy. While Array tomography and Serial block-face EM are both revolutionary in that they can combine very high resolution imaging with relatively large volumes of tissue, STORM is an advancement that lets you see tiny little molecules within the cell.
The problem with 'normal' imaging is that molecules are smaller than the diffraction limit of light.
Example of the STORM resolution (from the Zhuang lab's webpage)
In the figure above, imaging some tiny molecules next to each other is impossible with traditional fluorescence microscopy, but with STORM, you can resolve tens of nanometers (nm).
To do this, STORM uses photoswitchable dyes, which means that the dye can be turned on or off. This allows researchers to turn on tiny little areas and then turn them off. If all the dye is turned on at once, the image will look like a big mess because the signals will all overlap each other. But turning on only a few at a time allows you to estimate where the actual protein or molecule is.
"The imaging process consists of many cycles during which fluorophores are activated, imaged, and deactivated. In each cycle only a subset of the fluorescent labels are switched on, such that each of the active fluorophores is optically resolvable from the rest. This allows the position of these fluorophores to be determined with nanometer precision." -Zhuang lab webpage
So what amazing things can they do with this STORM? A recent paper by Xu et al. (2013) found that actin, which plays a huge role in the intracellular structure of a neuron, has a specific ring-like structure along axons.
Xu et al., 2013 Figure 4F
This is the kind of research that will immediately go into neuroscience and cell biology textbooks. Xu et al. discovered how actin was structured along the axon simply by being able to 'see it'.
Not only did they discover the structure of actin and spectrin (magenta above) in the axon, but they also found some other interesting molecular patterns that appear to relate to the actin ring structure. The sodium channels, which control action potential propagation down the axon, are concentrated about halfway between the ends of the spectrin tetramers. The potential for super-resolution microscopy like STORM is huge. The location of molecules relative to one another probably plays a huge role in the function of cells, and now we have the tools to map them.
© TheCellularScale
Xu K, Zhong G, & Zhuang X (2013). Actin, spectrin, and associated proteins form a periodic cytoskeletal structure in axons. Science (New York, N.Y.), 339 (6118), 452-6 PMID: 23239625
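The localization step described in the Zhuang lab quote above can be illustrated with a toy calculation (all numbers invented): a single active fluorophore produces a diffraction-blurred spot several pixels wide, yet its intensity-weighted centroid recovers the molecule's position to a small fraction of a pixel.

```python
import math

TRUE_POSITION = 3.7  # fluorophore location along one axis, in pixel units
SPOT_WIDTH = 1.5     # Gaussian sigma of the diffraction blur, in pixels

# Intensity profile of the blurred spot sampled across 8 pixels.
profile = [math.exp(-((x - TRUE_POSITION) ** 2) / (2 * SPOT_WIDTH ** 2))
           for x in range(8)]

# The spot spans several pixels, but the intensity-weighted centroid
# localizes the single emitter far below the diffraction limit.
centroid = sum(x * i for x, i in enumerate(profile)) / sum(profile)
print(round(centroid, 2))  # recovers ~3.7 despite the wide blur
```

This only works because one fluorophore is on at a time; if two nearby molecules glowed at once, their blurs would merge and the centroid would land uselessly between them, which is exactly why STORM switches on only a sparse subset per cycle.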
Saturday, April 27, 2013
LMAYQ: Mirror Neurons
Mirror neurons really excite people. They've been hyped as the root of empathy and essential to human nature. I've addressed some of this hype, but questions remain. So for this edition of Let Me Answer Your Questions, we will focus on mirror neurons. As always, the LMAYQ series can be found here.
Escher's Mirror (source)
1. "What do mirror neurons look like?"
Good question, and guess what? I have addressed this directly.
2. "Do mirror neurons fire when you die?"
Another good question. Ultimately, all neurons stop firing when you die, including mirror neurons. But this doesn't happen immediately. In fact, if the death is due to something traumatic such as decapitation, the neurons might fire more when the nerves are severed between the spinal cord and the brain. But this just brings up questions about the moment of death: is it when the heart stops or the head is severed? Or is it when the neurons stop firing? Can a 'person' be dead when some of their cells are still alive?
In a lot of cellular-level research, cells are kept alive after the animal that they came from has died. Electrophysiologists keep slices of brain alive for hours to record electrical signals from their neurons. Still other projects involve culturing neurons that have been extracted from an animal. These neurons are carefully tended for days, weeks, and even months. These neurons not only stay alive in little dishes, but they can also grow and even control robots.
There are living neurons in there (source)
3. "What does it mean to have a mirrored brain?"
Well, nothing really. I have never heard the term 'mirrored brain' before, and it sounds like something that might be in a pseudo-scientific quiz along the lines of Are you left brained or right brained? "Do you have a mirrored brain? Take our quiz and find out."
4. "Is love nothing but mirror cells?"
I love and hate these kinds of questions. The idea that love is nothing if it can be explained by a biological mechanism really gets me. If love is just neurons firing (mirror or otherwise), so what? Why would that make LOVE any less meaningful?
Heart Mirror (source)
On the other hand, this is a really interesting question if it is asking whether mirror neurons have anything to do with love. Again, mirror neurons are neurons that fire when you do something and also when you see someone else do that thing. Specifically, they were discovered in monkeys, which fired both when reaching for something and when seeing other hands reach for something. Then the concept got hyped up. It's easy to imagine that neurons that fire both when you do something and when you see someone else do that same thing might have something to do with 'feeling another's pain' and thus empathy. So it's not a huge step from there to think that maybe mirror neurons could have something to do with building relationships and love.
But the speculation here is WAY beyond the science. There isn't good solid evidence for mirror neurons controlling empathy, and certainly not for being the basis of love.
© TheCellularScale
Monday, April 22, 2013
Connecting Form and Function: Serial Block-face EM
The retina is a beautiful and wondrous structure, and it has some really weird cells.
Retinal Ganglion Cells (RGC) have all sorts of differentiating characteristics. Some are directly sensitive to brightness (like rods and cones), while some are sensitive to the specific direction that a bar is traveling.
Retina by Cajal (source)
I am discussing really amazing new techniques to see inside cells this month, and have already posted about the magic that is Array Tomography. Today we'll look at another amazing new technique that (like array tomography) combines nano-scale detail with a scale large enough to see many neurons at once. This technique is called Serial Block-face Electron Microscopy (SBEM), and it was recently used to investigate how starburst amacrine cells control the direction-sensitivity of retinal ganglion cells.
Serial Block-face EM (source)
SBEM images are acquired by embedding a piece of tissue (like a retina) in some firm substance and slicing it superthin (tens of nanometers thick) with a diamond blade. The whole slicing apparatus is set up directly under a scanning electron microscope, so as soon as the blade cuts, an image is taken of the surface remaining. Then another thin slice is shaved off and the next image is taken, and so on.
Using this technique, Briggman et al. (2011) were able to trace individual neurons and their connections for a (relatively) large section of retina. What is so great about this paper is that before they sliced up the retina, they moved bars around in front of it and measured the directional selectivity of a bunch of neurons. Then, using blood vessels and landmarks to orient themselves, they were able to find the exact same cells in the SBEM data and trace them.
Briggman et al. (2011) Fig1C: Landmark blood vessels
The colored circles above represent the cell bodies, and the black 'tree' shapes are the blood vessel landmarks.
Once they found the cell bodies, they could trace the cells through the stacks of SBEM data. What is really neat is that you can try your hand at this yourself: this exact data set has been turned into a game called EYEWIRE by the Seung lab at MIT.
Reconstructing the cells, they could not only tell which cells connected to which other cells, but also see exactly where on the dendrites the cells connected. This is the really amazing part. They found that specific dendritic areas made synapses with specific cells.
Briggman et al. (2011) Fig4: dendrites as the computational unit
This starburst amacrine cell overlaps with many retinal ganglion cells (dotted lines represent the dendritic spread of individual RGCs)... BUT its specific dendrites (left, right, up, down, etc.) synapse selectively onto RGCs sensitive to a particular direction. Each color represents synapses onto a specific direction-sensitivity; e.g., yellow dots are synapses from the amacrine cell onto RGCs which are sensitive to downward motion.
This suggests that each individual dendritic area of these starburst amacrine cells probably inhibits a specific type of RGC, and that these dendrites act relatively independently of one another.
"The specificity of each SAC dendritic branch for selecting a postsynaptic target goes well beyond the notion that neuron A selectively wires to neuron B, which is all that electrophysiological measurements can test. Instead the dendrite angle has an additional, perhaps dominant, role, which is consistent with SAC dendrites acting as independent computational units." -Briggman et al. (2011), Discussion
These cells are weird for so many reasons, but the ability of the dendrites to act so independently of one another is a new and exciting development that I hope to see more research on soon.
© TheCellularScale
Briggman KL, Helmstaedter M, & Denk W (2011). Wiring specificity in the direction-selectivity circuit of the retina. Nature, 471 (7337), 183-8 PMID: 21390125
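For readers wondering how 'direction selectivity' gets quantified before the slicing even starts, a common index (shown here as a sketch with invented spike counts, not necessarily the paper's exact analysis) compares the response to the preferred direction against the response to the opposite, 'null' direction:

```python
# Invented spike counts for bars moving in 8 directions (degrees -> spikes).
responses = {0: 42, 45: 30, 90: 12, 135: 5, 180: 3, 225: 6, 270: 14, 315: 28}

preferred_dir = max(responses, key=responses.get)  # direction with most spikes
null_dir = (preferred_dir + 180) % 360             # the opposite direction

preferred = responses[preferred_dir]
null = responses[null_dir]
dsi = (preferred - null) / (preferred + null)  # 0 = untuned, 1 = one-way only

print(preferred_dir, round(dsi, 2))  # 0 0.87: strongly direction selective
```

A cell like this fires hard for motion in one direction and barely at all for the reverse, which is exactly the property the starburst amacrine dendrites turn out to be selectively wiring up.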
Wednesday, April 17, 2013
Van Gogh was afraid of the moon and other lies
I remember the first time I realized just how easily false information gets spread about.
I was in French class in high school. Our homework had been to find out 1 interesting fact about Van Gogh and tell it to the class. When it was my turn, I said some boring small fact that I no longer remember. My friend sitting behind me, however, had a fascinating fact: When Van Gogh was a young child, he was actually afraid of the moon.
A terrifying starry night
The teacher and the class were all quite impressed and thought about how interesting that was and how that fact might be reflected in the way he painted The Starry Night. Though this fact was new to everyone, including the teacher, no one even thought to question its truth.
In fact, the teacher was so enthralled by this idea that she passed the information on to all the other French classes that day.
When talking to my friend later that day, he admitted that he had not done the assignment and had just made the 'fact' up. I was completely surprised, not only that someone had not done their homework *gasp*, but that I hadn't even thought to question whether this was true or not.
The best lies have an element of truth (source)
Misinformation like this spreads like wildfire and is exceptionally difficult to undo. The more things you can link a piece of information to in your brain, the more true it might seem, and even after you learn that it's not true, you still might inadvertently believe it or fit new ideas into the context it creates. Myths like 'the corpus callosum is bigger in women than in men' are just the sort of thing that is easy to believe.
An interesting paper by Lewandowsky et al. (2012) explains how this kind of persistent misinformation is detrimental to individuals and to society, using the example of the claim that vaccines cause autism. This particular piece of misinformation is widely believed despite numerous attempts to publicize the correct information and the most recent scientific findings showing no evidence for a link between the two.
The authors of this paper give some recommendations for making the truth more vivid and effectively replacing the misinformation with new, true information. For example:
"Providing an alternative causal explanation of the event can fill the gap left behind by retracting misinformation. Studies have shown that the continued influence of misinformation can be eliminated through the provision of an alternative account that explains why the information was incorrect." -Lewandowsky et al. (2012)
Misinformation can be replaced with information, but it takes more work to replace a 'false fact' than to just have the truth out there in the first place. It is much better when misinformation is not spread around in the first place than when it has to be retroactively corrected.
This paper is also covered over at The Jury Room.
© TheCellularScale
Lewandowsky, S., Ecker, U., Seifert, C., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13 (3), 106-131 DOI: 10.1177/1529100612451018
Saturday, April 13, 2013
Seeing Inside Cells: Array Tomography
I wrote a lot about dopamine and its complicated nature last month after coming back from the IBAGs conference, so for a change of pace, I'll talk about some truly amazing new techniques that allow us to see inside cells with unprecedented resolution and at unprecedented volumes.
I've previously discussed some traditional techniques for visualizing specific details in neurons, and this month I'm going to talk about some of the newest fanciest ways to look at cellular scale information.
First off, Array Tomography!
Array Tomography combines the enhanced location information of the electron microscopy with the scale and context of immunohistochemistry or in situ hybridization. Not only that, but Array Tomography is done in such a way that the same preparation can be stained for 100s of different proteins. This is a priceless gift to those who want to study protein co-localization. Do certain receptors 'flock together', and if so does a mutation, or drug treatment alter their abundance or proximity to one another?
And just how do they accomplish this feat?
The trick is in the slicing. Using an ultramicrotome these guys can slice a section of brain 70 nm thin. That's 70 NANOmeters, which is really really thin. (Compare it to 'thick section staining' which works on sections 350,000 nanometers thin). The smallest cellular features, the necks of spines can be as thin as 50-100nm, so 70nm can really capture a lot of detail.
Here is a 'fly through' video of the cortical layers in a cortical column. The red dots are identified synapses, and around 2:11 of the video you get to the pyramidal cell bodies (green) which is pretty stunning.
While "Array Tomography" doesn't quite capture the public imagination like "neurons activated by light", it is huge leap forward in the domain of cellular neuroscience. With array tomography, it becomes possible to investigate co-localization of many proteins in a relatively large section of brain tissue.
© TheCellularScale
Micheva KD, Busse B, Weiler NC, O'Rourke N, & Smith SJ (2010). Single-synapse analysis of a diverse synapse population: proteomic imaging methods and markers. Neuron, 68 (4), 639-53 PMID: 21092855
Sunday, April 7, 2013
LMAYQ: Scales
The word "scale" can mean many things, and The Internet can't yet use context to tell the difference. So for this issue of Let Me Answer Your Questions, here are questions about scales that The Internet thinks The Cellular Scale can answer. As always, these are real, true search terms, and all the posts in the LMAYQ series can be found here.
A Question of Scale (source)
1. "Can you give a rat scales?"
I have never thought to ask this question, but it is an interesting one. If you can grow weird things on mice, like ears, then why not scales? Well, here's the thing: the 'ear mouse' is growing skin like it normally does; the skin is just growing over an ear-shaped mold. It would actually be harder to make a rat grow scales. If it is possible, it would take some mastery in genetic manipulation...
Bee-Rat, the ultimate achievement in genetic manipulation (source)
Some sniffing around on Wikipedia taught me that scales have evolved several times (in fish, reptiles, arthropods, etc.). It might be possible to make a rat (or mouse) grow scales by isolating a scale gene from one of these other animals and inserting it into the rat genome. However, since rats already grow fur, teeth, and nails, which are related to scales, it might be possible to manipulate those features already in the rat to become more scale-like.
But to answer your question, no. I am pretty sure we can't give a rat scales yet.
2. "Does the giant squid have scales?"
Another interesting question. The quick answer is no: giant squid and colossal squid (like their normal-sized squid counterparts) have smooth skin that does not contain scales. This isn't too surprising, because squid aren't fish; they are cephalopods (like octopuses and cuttlefish). Cephalopods sometimes have shells, but not scales.
Zoomed in view of Squid Skin (source)
Instead of protective scales, cephalopods use pigment in their skin to camouflage themselves or confuse predators.
This octopus turning blue sure confused me.
Blue Octopus, Eilat Israel (I took this picture)
3. "How to turn your cell phone into a scale."
There are a couple of ways you might think a cell phone could be used as a scale. One is through the touch screen sensor. However, most smartphones now have capacitive touch screens, which respond to the electrical change your finger induces on the screen. That means the amount of pressure applied doesn't matter, so you couldn't use a smartphone as a scale that way.
Another way is through the accelerometer. Smartphones also have accelerometers, which you could possibly use to measure the force of something moving. But force alone wouldn't tell you the mass of the object unless you already knew the acceleration (force = mass × acceleration).
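To make the force = mass × acceleration point concrete, here is a hypothetical Python sketch: if the phone could somehow report the downward force of an object resting on it, the acceleration would just be gravity, and the mass falls out directly. (The force value here is invented for illustration.)

```python
g = 9.81  # gravitational acceleration, m/s^2

def mass_from_force(force_newtons, acceleration=g):
    """Rearrange F = m * a to get m = F / a (mass in kg)."""
    return force_newtons / acceleration

# A US nickel (5 g = 0.005 kg) resting on the phone would press
# down with a force of about 0.005 * 9.81 ≈ 0.049 newtons.
force = 0.005 * g
print(round(mass_from_force(force) * 1000, 1), "grams")  # 5.0 grams
```

The catch, as the post says, is that a phone has no sensor that reports this resting force, which is why the accelerometer alone can't weigh things.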
But really, the only way that seems to actually work (albeit slowly and with questionable accuracy) is using the 'tilt sensor' of the smart phone.
Honestly, you might just as well make your own balance scale if you are weighing out small amounts of something.
Most importantly, it's helpful to know what some typical objects around the house weigh, so you can use them to calibrate a phone or homemade scale. Here are some useful weights:
1. US penny: 2.5 g
2. US nickel: 5 g
3. 1 ml water: 1 g
4. 1 euro coin: 7.5 g
5. British pound coin: 9.5 g
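Those reference weights are exactly what you would use to calibrate a homemade or phone-based scale: record the raw sensor reading for each known mass, fit a line, then invert it for unknown objects. A minimal Python sketch; the sensor readings below are entirely made up for illustration, and only the masses come from the list above:

```python
import numpy as np

# Known reference masses from the list above, in grams:
# penny, nickel, 1 ml water, euro coin, pound coin.
masses = np.array([2.5, 5.0, 1.0, 7.5, 9.5])
# Hypothetical raw sensor readings for each reference (arbitrary units).
readings = np.array([130.0, 255.0, 55.0, 380.0, 480.0])

# Least-squares fit of the line: reading ≈ slope * mass + offset.
slope, offset = np.polyfit(masses, readings, 1)

def reading_to_grams(raw):
    """Invert the calibration line to convert a raw reading to grams."""
    return (raw - offset) / slope

# Weigh an unknown object that produces a reading of 300.
print(round(reading_to_grams(300.0), 1))  # 5.9
```

Real sensors drift and are rarely perfectly linear, so re-calibrating with a couple of coins right before each weighing would be wise.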
4. "What is the scale on the cellular level?"
Finally, a relevant question! Most cells are measured in microns, with a blood cell being about 6-8 microns in diameter.
Blood (source)
Neurons on the other hand can have somas (cell bodies) ranging from tiny (5 micron diameter) to large (50 micron diameter). But even for neurons with small somas, the dendritic or axonal arbors can be gigantic.
Some neurons in Aplysia (a sea slug) can get up to 1 mm (1,000 microns) in diameter, which is ridiculously huge for a neuron. For perspective, C. elegans, a nematode frequently used for neuroscience research, is about 1 mm in length. The whole animal! Including its 302 neurons!
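The size range in this answer is easier to appreciate side by side; a quick sketch using the numbers above, with everything expressed relative to a blood cell:

```python
# Sizes from the text, in microns. The blood cell uses the midpoint
# of the 6-8 micron range given above.
sizes_um = {
    "blood cell": 7,
    "small neuron soma": 5,
    "large neuron soma": 50,
    "Aplysia giant neuron": 1000,       # 1 mm soma
    "C. elegans (whole animal)": 1000,  # 1 mm, all 302 neurons included
}

for name, size in sizes_um.items():
    ratio = size / sizes_um["blood cell"]
    print(f"{name}: {size} um ({ratio:.1f}x a blood cell)")
```

The punchline of the comparison: a single Aplysia neuron is as long as an entire nematode.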
© TheCellularScale