
Saturday, February 15, 2014

A Hop, Skip, and a pre-synaptic Patch

 
This new technique is just too cool not to blog about. 


Novak et al. 2013 Figure 1A pre-synaptic patch clamp


The synapse is the connection between two neurons. The pre-synaptic side belongs to the neuron sending the signal, and the post-synaptic side belongs to the neuron receiving it.

If you want to learn about the connection between the two neurons, you want to know what is happening on both sides of the synapse. It's relatively easy to record signals from the post-synaptic side using patch clamp or sharp electrode recording, but it is much much harder (basically impossible until now) to record from the pre-synaptic side.

Thursday, May 2, 2013

a STORM inside a cell

We've been talking about some of the most cutting edge intracellular visualization techniques lately. Array tomography and Serial block-face electron microscopy have been featured. Today we'll talk about STORM imaging.

STORM imaging (Xu et al., 2013)

STORM stands for Stochastic Optical Reconstruction Microscopy. While Array tomography and Serial block-face EM are both revolutionary in that they can combine very high resolution imaging with relatively large volumes of tissue, STORM is an advancement that lets you see tiny tiny little molecules within the cell.

The problem with 'normal' imaging is that many molecules are smaller than the diffraction limit of light.
Example of the STORM resolution (from Zhuang lab's webpage)
In the figure above, imaging some tiny molecules next to each other is impossible with traditional fluorescence microscopy, but with STORM, you can resolve structures only tens of nanometers (nm) apart.

To do this, STORM uses photoswitchable dyes, meaning the dye can be turned on or off. This allows researchers to turn on tiny little areas and then turn them off again. If all the dye is turned on at once, the image will look like a big mess because the signals will all overlap each other. But turning on only a few fluorophores at a time allows you to estimate where the actual protein or molecule is.
"The imaging process consists of many cycles during which fluorophores are activated, imaged, and deactivated. In each cycle only a subset of the fluorescent labels are switched on, such that each of the active fluorophores is optically resolvable from the rest. This allows the position of these fluorophores to be determined with nanometer precision." -Zhuang lab webpage
So what amazing things can they do with this STORM?
A recent paper by Xu et al. (2013) found that actin, which plays a huge role in the intracellular structure of a neuron, forms a specific ring-like structure along axons.

Xu et al., 2013 Figure 4F

This is the kind of research that will immediately go into neuroscience and cell biology textbooks. Xu et al. discovered how actin was structured along the axon simply by being able to 'see it'.

Not only did they discover the structure of actin and spectrin (magenta above) in the axon, but they also found some other interesting molecular patterns that appear to relate to the actin ring structure. The sodium channels, which control action potential propagation down the axon, are concentrated about halfway between the ends of the spectrin tetramers. The potential for super-resolution microscopy like STORM is huge. The location of molecules in relation to one another probably plays a huge role in the function of cells, and now we have the tools to map them.

© TheCellularScale


Xu K, Zhong G, & Zhuang X (2013). Actin, spectrin, and associated proteins form a periodic cytoskeletal structure in axons. Science (New York, N.Y.), 339 (6118), 452-6 PMID: 23239625

Monday, April 22, 2013

Connecting Form and Function: Serial Block-face EM

The retina is a beautiful and wondrous structure, and it has some really weird cells.

Retina by Cajal (source)
Retinal ganglion cells (RGCs) have all sorts of differentiating characteristics. Some are directly sensitive to brightness (like rods and cones are), while others are sensitive to the specific direction a bar is traveling.

I am discussing really amazing new techniques to see inside cells this month, and have already posted about the magic that is Array Tomography. Today we'll look at another amazing new technique that (like array tomography) combines nano-scale detail with a scale large enough to see many neurons at once. This technique is called Serial Block-face Electron Microscopy (SBEM), and was recently used to investigate how starburst amacrine cells control the direction-sensitivity of  retinal ganglion cells.


Serial Block-face EM (source)

SBEM images are acquired by embedding a piece of tissue (like a retina) in some firm substance and slicing it super thin (tens of nanometers thick) with a diamond blade. The whole slicing apparatus is set up directly under a scanning electron microscope, so as soon as the blade cuts, an image is taken of the surface that remains. Then another thin slice is shaved off and the next image is taken, and so on.

Using this technique, Briggman et al. (2011) were able to trace individual neurons and their connections through a (relatively) large section of retina. What is so great about this paper is that before they sliced up the retina, they moved bars around in front of it and measured the directional selectivity of a bunch of neurons. Then, using blood vessels as landmarks to orient themselves, they were able to find the exact same cells in the SBEM data and trace them.

Briggman et al. (2011) Fig1C: Landmark blood vessels
The colored circles above represent the cell bodies and the black 'tree' shapes are the blood vessel landmarks.

Once they found the cell bodies, they could trace the cells through the stacks of SBEM data. What is really neat is that you can try your hand at this yourself. This exact data set has been turned into a game called EYEWIRE by the Seung lab at MIT.

Reconstructing the cells, they could not only tell which cells connected to which other cells, but they could also see exactly where on the dendrites the cells connected. This is the really amazing part. They found that specific dendritic areas made synapses with specific cells.

Briggman et al. (2011) Fig4: dendrites as the computational unit

This starburst amacrine cell overlaps with many retinal ganglion cells (dotted lines represent the dendritic spread of individual RGCs)...BUT its specific dendrites (left, right, up, down, etc.) synapse selectively onto RGCs sensitive to a particular direction. Each color represents synapses onto a specific direction-sensitivity, e.g. yellow dots are synapses from the amacrine cell onto RGCs which are sensitive to downward motion.

This suggests that each individual dendritic area of these starburst amacrine cells inhibits (probably) a specific type of RGC, and that these dendrites act relatively independently of one another.

"The specificity of each SAC dendritic branch for selecting a postsynaptic target goes well beyond the notion that neuron A selectively wires to neuron B, which is all that electrophysiological measurements can test. Instead the dendrite angle has an additional, perhaps dominant, role, which is consistent with SAC dendrites acting as independent computational units."  -Briggman et al (2011)(discussion)

These cells are weird for so many reasons, but the ability of the dendrites to act so independently of one another is a new and exciting development that I hope to see more research on soon.

© TheCellularScale


Briggman KL, Helmstaedter M, & Denk W (2011). Wiring specificity in the direction-selectivity circuit of the retina. Nature, 471 (7337), 183-8 PMID: 21390125

Saturday, April 13, 2013

Seeing Inside Cells: Array Tomography

I wrote a lot about dopamine and its complicated nature last month after coming back from the IBAGs conference, so for a change of pace, I'll talk about some truly amazing new techniques that allow us to see inside cells with unprecedented resolution and at unprecedented volumes.

I've previously discussed some traditional techniques for visualizing specific details in neurons, and this month I'm going to talk about some of the newest fanciest ways to look at cellular scale information. 

First off, Array Tomography! 

Micheva et al. 2010 Figure 1 Array Tomography

Array Tomography combines the enhanced location information of electron microscopy with the scale and context of immunohistochemistry or in situ hybridization. Not only that, but Array Tomography is done in such a way that the same preparation can be stained for hundreds of different proteins. This is a priceless gift to those who want to study protein co-localization. Do certain receptors 'flock together', and if so, does a mutation or drug treatment alter their abundance or proximity to one another?
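To make 'flocking together' concrete, here is a minimal Python sketch (mine, not from the paper) of one common co-localization measure, the Pearson correlation between two protein channels imaged in the same section; the arrays and numbers are hypothetical:

```python
import numpy as np

def pearson_colocalization(channel_a, channel_b, mask=None):
    """Pearson coefficient between two intensity images: 1 = perfectly co-localized,
    0 = unrelated, -1 = mutually exclusive. 'mask' optionally restricts the analysis
    to a region of interest (e.g. identified synapses)."""
    a = np.asarray(channel_a, dtype=float)
    b = np.asarray(channel_b, dtype=float)
    if mask is not None:
        a, b = a[mask], b[mask]
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

# Example with fake data: channel_b is a noisy copy of channel_a,
# so the coefficient comes out high.
rng = np.random.default_rng(1)
channel_a = rng.random((512, 512))
channel_b = 0.8 * channel_a + 0.2 * rng.random((512, 512))
print(pearson_colocalization(channel_a, channel_b))
```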

Micheva et al. 2010 Figure 4 spine head and neck locations of specific proteins

And just how do they accomplish this feat?

The trick is in the slicing. Using an ultramicrotome, these guys can slice a section of brain 70 nm thin. That's 70 NANOmeters, which is really really thin. (Compare it to 'thick section staining', which works on sections 350,000 nanometers thick.) The smallest cellular features, the necks of spines, can be as thin as 50-100 nm, so 70 nm sections can really capture a lot of detail.

Here is a 'fly through' video of the cortical layers in a cortical column. The red dots are identified synapses, and around 2:11 of the video you get to the pyramidal cell bodies (green) which is pretty stunning.




While "Array Tomography" doesn't quite capture the public imagination like "neurons activated by light", it is huge leap forward in the domain of cellular neuroscience. With array tomography, it becomes possible to investigate co-localization of many proteins in a relatively large section of brain tissue. 

© TheCellularScale


Micheva KD, Busse B, Weiler NC, O'Rourke N, & Smith SJ (2010). Single-synapse analysis of a diverse synapse population: proteomic imaging methods and markers. Neuron, 68 (4), 639-53 PMID: 21092855

Thursday, April 4, 2013

Neurons and New Newt Legs

Salamanders are amazing and mystical creatures.
Salamanders and their amazing leg-growing superpower (source)
Not because they can survive in fire (they can't), but because they can regrow amputated limbs.
A paper from 2007 investigated exactly which neural signals are required for this amazing superpower.

Newt Amputee (Kumar et al., 2007)
This paper brings together two interesting things about salamander (newt) leg growth.

1. The salamander arm 'knows' where it was cut. If it is cut at the wrist, it only grows a hand (paw?...foot?); if it is cut at the shoulder, it grows the whole leg/arm. So one question is HOW does the arm know?

The answer is surprisingly simple: there is a small protein called Prod 1 that is highly concentrated at the shoulder and progressively decreases along the arm. This protein could tell the new bud of growing cells where it is, and what it should grow into.

and

2. To regenerate, the arm needs intact nerve endings at the point of the cut. That is, the nerve fiber that goes down the arm has to be attached to the nervous system. If the nerve is cut further up than the arm cut, the arm will not regenerate.

Kumar et al., 2007 Author Summary Figure
Kumar et al. (2007) found a molecule that ties these two interesting things together, completing the newt leg regeneration story. They find a molecule, nAG (n for newt and AG for anterior gradient) which binds to Prod 1, and is secreted by the nerve sheath (the Schwann cells that surrounds nerve fibers).
Kumar et al. (2007) found a molecule that ties these two interesting things together, completing the newt leg regeneration story: nAG (n for newt and AG for anterior gradient), which binds to Prod 1 and is secreted by the nerve sheath (the Schwann cells that surround nerve fibers).
They show that when they cut the nerve further up the arm (denervation), they don't get nAG expression and they don't get limb regeneration. But when they artificially supply nAG (see D and E above), the amputated and denervated limb starts to grow.

This is a really neat 'rescue experiment' suggesting that the reason the nerve is necessary for regeneration is because it triggers nAG release which binds to Prod 1 and says "GROW".

One thing that they don't do (because genetically manipulating salamanders is not really a thing yet) is remove nAG while keeping the nerve intact. This would show that the only important thing the nerve fiber is doing is triggering nAG.

This study is also a small step towards limb regeneration in humans, not because injecting nAG into an amputated human limb could regenerate it (It couldn't), but because the more we understand about how the system works, the more likely we can figure out a way to engineer a similar system in humans. 

© TheCellularScale


Kumar A, Godwin JW, Gates PB, Garza-Garcia AA, & Brockes JP (2007). Molecular basis for the nerve dependence of limb regeneration in an adult vertebrate. Science (New York, N.Y.), 318 (5851), 772-7 PMID: 17975060

Monday, March 25, 2013

Guest Post: AMPA Receptors are not Necessary for long term potentiation

Today's post is brought to you by @BabyAttachMode, who is an electrophysiologist and blogger. Today we are blog swapping! I have a post over at her blog and her post about AMPA receptors and LTP is here. So enjoy, and when you're done reading about the newest advances in synaptic plasticity here, you can head over to InBabyAttachMode and read about my personal life.
 
AMPA Receptors are not Necessary for long term potentiation

Science is most interesting to me when you're testing a hypothesis, and not only do you prove the hypothesis to be false, but you discover something unexpected. I think that happened to Granger et al. They were trying to find which part of the AMPA receptor is necessary for long-term potentiation (LTP), the process that strengthens the connection between two brain cells when that connection is used often. Instead, they found that AMPA receptors are not necessary at all for LTP, which is very surprising given the large body of literature describing how the GluA1 subunits of the AMPA receptor are inserted into the synapse to induce LTP, through interactions with other synaptic molecules that bind to the intracellular C-tail (the end of the receptor that is located inside the cell).
LTP (source)
The authors made an inducible triple knock-out, which means that they could switch off the genes for the three different AMPA receptor subunits GluA1, GluA2 and GluA3. This way, they ended up with mice that had no AMPA receptors at all. The authors were then able to selectively put back one of the AMPA receptors, either the entire receptor or a mutated receptor. By inserting mutated receptors, for example a receptor that lacks its intracellular C-tail that was thought to be important for insertion of the AMPA receptor into the synapse, they could then study whether this mutated receptor was still sufficient for induction of LTP.

Surprisingly, they found that deleting the C-tail of the GluA1 subunit does not change the cell’s ability to induce LTP. Even more so, they showed that you don’t even need any AMPA receptor to still be able to induce LTP; the kainate receptor (another type of glutamate receptor that has never been implicated in LTP) can take over its job too.

Figure 6C from Granger et al. (2013). Kainate receptor overexpression can lead to LTP expression, without the presence of AMPA receptors.

About this surprising discovery the authors say the following:
"These results demonstrate the synapse's remarkable flexibility to potentiate with a variety of glutamate receptor subtypes, requiring a fundamental change in our thinking with regard to the core molecular events underlying synaptic plasticity."
Of course if you say something like that, the main players in the LTP field will have something to say about it, and they did. Three giants in the field of synaptic physiology commented in the journal Nature, but their opinions differed. Morgan Sheng called it "a step forward", whereas Roberto Malinow and Richard Huganir called it "two steps back", saying that LTP without AMPA receptors can only happen in the artificial system that the authors of the paper use to study this. They expect that cells lacking all three AMPA receptor subunits will look so different from normal cells that the results are difficult to interpret.

Either way, this paper opens new views and questions to how LTP works, and whether AMPA receptors are as important as we thought.


Granger AJ, Shi Y, Lu W, Cerpas M, & Nicoll RA (2013). LTP requires a reserve pool of glutamate receptors independent of subunit type. Nature, 493 (7433), 495-500 PMID: 23235828
 
Sheng M, Malinow R, & Huganir R (2013). Neuroscience: Strength in numbers. Nature, 493 (7433), 482-3 PMID: 23344353

Wednesday, February 6, 2013

JoVE: god of thunder, journal of techniques

If you don't know about the Journal of Visualized Experiments, now is the time to learn.

god of thunder, journal of techniques (source)
Methods sections of papers should contain enough detail that a scientist reading it could replicate the results of the paper. But this is rarely the case. Research in computational neuroscience has an advantage because the actual code used to run the simulations can be deposited and downloaded. But for experimental work, the nuances of exactly how to do each step in a process can get lost.

Protocol papers are often able to fully describe a process, but nothing can beat actually seeing the researchers performing the technique. For this there is JoVE.


You can get lost on their website watching fascinating 10 minute video after fascinating 10 minute video. For example, the very first video listed under 'neuroscience' when I checked was called "Optogenetic activation of zebrafish somatosensory neurons using ChEF-tdTomato", and it shows you how to stimulate zebrafish neurons with light. But it doesn't just show you someone doing it, it shows each step in detail: how to modify the optic cable, how to position the zebrafish embryo, and even how to be careful when using lasers. (Also, it taught me what the Pasteur pipette can be used for.)

I think this is a great addition to the scientific literature, and it will be useful to many people. However, I still have some doubts about how easy it would be to replicate these techniques from the video alone. Fortunately, the videos are accompanied by written protocols with more detail and equipment specs.

I'd be interested to know if anyone has used a JoVE article as their sole resource and been able to replicate a technique successfully.


Palanca, A., & Sagasti, A. (2013). Optogenetic Activation of Zebrafish Somatosensory Neurons using ChEF-tdTomato Journal of Visualized Experiments (71) DOI: 10.3791/50184

Tuesday, January 22, 2013

How to Build a Neuron: Step 5

And now, the final step in how to build your computational model of a neuron: Add Synaptic Channels. All the steps in this series can be found here.
Synapses connect neurons (source)
So you already have a neuron, and you've added intrinsic channels to it. The next thing you want to do is add synaptic channels so you can hook this neuron up to other cells.

The main synaptic channels you want to add are the excitatory channels, NMDA and AMPA, and the inhibitory channel, GABA. These channels don't have the same kind of activation and inactivation curves as the intrinsic channels do, because they aren't activated by voltage; they are activated by a neurotransmitter.

AMPA and NMDA receptors are activated primarily by glutamate, and cause an influx of sodium and calcium ions. Since both sodium and calcium ions are positively charged, this depolarizes the cell membrane and brings it closer to firing an action potential.

AMPA receptors (source)
GABA receptors, on the other hand, are primarily activated by GABA and cause an influx of chloride ions into the cell. Because chloride ions are negatively charged, this hyperpolarizes the cell membrane and brings it further away from firing an action potential.

So if you want to have a realistic model of a neuron, you need to add an approximation of these channels. This is easier than adding intrinsic channels, because activation is on/off style (binary) rather than analogue. So basically you just put in the parameters you want: how fast the channel opens and closes, how much current it allows through when activated, and where the channels are located on the neuron.
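To make that concrete, here is a minimal Python sketch of such an on/off synaptic channel: a dual-exponential conductance triggered at presynaptic spike times, with a reversal potential that makes it excitatory (AMPA-like) or inhibitory (GABA-like). The time constants, conductance, and spike times are illustrative placeholders, not values for any particular neuron:

```python
import numpy as np

def synaptic_current(t_ms, spike_times_ms, g_max_nS, tau_rise_ms, tau_decay_ms,
                     e_rev_mV, v_mV):
    """Total synaptic current (in nA) at a fixed membrane potential v_mV."""
    g = np.zeros_like(t_ms)
    for t_spike in spike_times_ms:
        dt = t_ms - t_spike
        active = dt > 0
        # Dual-exponential conductance waveform for each presynaptic spike.
        g[active] += g_max_nS * (np.exp(-dt[active] / tau_decay_ms)
                                 - np.exp(-dt[active] / tau_rise_ms))
    return g * (v_mV - e_rev_mV) * 1e-3   # nS * mV = pA; * 1e-3 -> nA

t = np.arange(0, 100, 0.1)                 # time axis (ms)
spikes = [10, 30, 32, 34]                  # presynaptic spike times (ms)
i_ampa = synaptic_current(t, spikes, g_max_nS=1.0, tau_rise_ms=0.5,
                          tau_decay_ms=3.0, e_rev_mV=0.0, v_mV=-65.0)
i_gaba = synaptic_current(t, spikes, g_max_nS=1.0, tau_rise_ms=1.0,
                          tau_decay_ms=8.0, e_rev_mV=-70.0, v_mV=-65.0)
print("peak AMPA-like current (nA):", i_ampa.min())  # negative = inward, depolarizing
print("peak GABA-like current (nA):", i_gaba.max())  # positive = outward at -65 mV
```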

Of course deciding these parameters is not always easy. A paper out this year in PLoS Computational Biology describes 4 different ways the NMDA receptor can be configured and analyzes the consequences during different stimulation patterns. 

Evans et al., (2012) Figure 3
The 4 NMDA configurations (based on the 4 different GluN2 subunits) vary in their sensitivity to a magnesium block, how fast they decay, and their maximal current. Above are their responses to the same stimulation patterns (an STDP protocol). Even though they were all receiving the same input pattern, they each show a very different response.
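For reference, the voltage dependence of the magnesium block usually enters models through something like the Jahr & Stevens (1990) formulation sketched below; the subunit-specific parameters Evans et al. actually use differ, so treat these numbers as a generic illustration only:

```python
import numpy as np

def mg_block(v_mV, mg_mM=1.0):
    """Fraction of NMDA conductance NOT blocked by Mg2+ at membrane potential v_mV."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mV))

for v in (-80, -60, -40, -20, 0):
    print(f"{v:>4} mV: {mg_block(v):.2f} of the NMDA conductance is available")
```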

So when considering adding synaptic channels to your model neuron, take the time to find out what the configuration of the receptors should actually be in the type of neuron you are building.


© TheCellularScale

If you are good at following clues, you will realize that I am very, very familiar with this paper.


Evans RC, Morera-Herreras T, Cui Y, Du K, Sheehan T, Kotaleski JH, Venance L, & Blackwell KT (2012). The effects of NMDA subunit composition on calcium influx and spike timing-dependent plasticity in striatal medium spiny neurons. PLoS computational biology, 8 (4) PMID: 22536151

Tuesday, December 18, 2012

How to Build a Neuron: step 4

And now, the next step in neuron building! You can see all the previous steps and shortcuts here. Step 4 is adding intrinsic channels to your neuron.
Potassium Channel (source)
Pretty much all neurons need sodium and potassium channels so they can fire action potentials, but other channels such as calcium channels are also commonly seen in computational models.

To add these channels you have to extract the parameters from known data. This means extracting Boltzmann curves and time constant information so you can tell the channel which voltages activate it and inactivate it and how fast to open and close.
Activation (Boltzmann) curve for fast sodium channel
This step is tricky and can take a long time, but there is some software that can help. The Engauge Digitizer is one tool I could not live without.

Engauge is basically a tool that allows you to manually trace curves from published figures to get the curve data as an Excel or .csv file. First you add axis points using the button at the top that has red plus signs on it. You tell the software what values each of the 3 corners of the graph are. Then you click the blue plus signs button and start to trace your graph, like so:

using Engauge Digitizer to extract channel data

Then you export the data as whichever type of file you want. Pretty nice!
I like to have the data this way because then I can overlay this figure trace with any other trace I want and can manually fit an equation to it.
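For example, once the traced points are exported as a .csv file, a short script can fit a Boltzmann equation to them; the file name and column layout below are just placeholders for whatever Engauge spits out:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Steady-state activation: m_inf(V) = 1 / (1 + exp((V_half - V) / k))."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Two columns assumed: voltage (mV), normalized conductance -- as traced from the figure.
data = np.loadtxt("digitized_activation_curve.csv", delimiter=",", skiprows=1)
voltage, activation = data[:, 0], data[:, 1]

(v_half, k), _ = curve_fit(boltzmann, voltage, activation, p0=(-30.0, 5.0))
print(f"V_half = {v_half:.1f} mV, slope factor k = {k:.1f} mV")
```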

Channels are a hugely important part of a computational model. A recent paper from Eve Marder's lab shows that even with a very simple morphological model (just a soma), interesting electrical characteristics can be seen simply by manipulating the channels.

Kispersky et al., 2012 from Figure 1
Kispersky et al. (2012) introduce an interesting paradox. They show that when you increase the sodium channel conductance, you see more action potentials with low current injections (like 200 pA). This is expected because the sodium channel is what causes the upswing of the action potential, and more sodium is thought to mean more excitability. However, the authors find that when a high current injection is given (like 10 nA), the increased sodium channel conductance actually decreases the firing rate. This is counter-intuitive because it goes against the more sodium = more excitability rule.
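If you want to poke at this kind of question yourself, a generic single-compartment Hodgkin-Huxley model is enough of a scaffold to start with. The sketch below uses the standard textbook squid-axon equations, not the Kispersky et al. model (their channels, parameters, and current ranges are different), so it may or may not show the same reversal; it just shows how you would set up the comparison:

```python
import numpy as np

def run_hh(g_na, i_inj_uA_cm2, t_stop_ms=500.0, dt=0.01):
    """Simulate a standard HH compartment and return the number of spikes."""
    g_k, g_l = 36.0, 0.3                  # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4   # mV
    c_m = 1.0                             # uF/cm^2
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # start near rest
    spikes, above = 0, False
    for _ in range(int(t_stop_ms / dt)):
        # Rate constants of the standard squid-axon channel gates.
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj_uA_cm2 - i_ion) / c_m
        # Count upward threshold crossings as spikes.
        if v > 0.0 and not above:
            spikes, above = spikes + 1, True
        elif v < -20.0:
            above = False
    return spikes

for g_na in (120.0, 200.0):        # baseline vs. increased sodium conductance
    for i_inj in (5.0, 100.0):     # 'low' vs. 'high' current injection
        print(f"gNa={g_na:>5} mS/cm^2, I={i_inj:>5} uA/cm^2 -> "
              f"{run_hh(g_na, i_inj)} spikes")
```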

This is a pretty cool finding published in the Journal of Neuroscience using only a simple one-compartment model. The finding is based entirely on channel manipulation, and demonstrates how important these intrinsic channels are to any computational model.


© TheCellularScale

Kispersky TJ, Caplan JS, & Marder E (2012). Increase in sodium conductance decreases firing rate and gain in model neurons. The Journal of neuroscience : the official journal of the Society for Neuroscience, 32 (32), 10995-1004 PMID: 22875933

Sunday, December 9, 2012

Cortical spine growth and learning how to eat pasta

There are two aspects to neuron shape. One is the pattern of dendritic or axonal branching, and the other is the pattern of spines. Spines are the little protrusions that come off of the dendrite and often receive synaptic inputs.
spines on a pyramidal neuron (source)
Because these spines are associated with excitatory synapses, and because synapse development is thought to be the cellular basis of learning, it makes sense that spines would grow when we learn.

But how would they grow exactly?

Using transcranial two-photon microscopy (a window into the brain of a living mouse), Fu et al. (2012) have caught images of neural learning in action.

A window into the mouse brain (source)
The authors used two learning tasks to investigate how spines grow during learning. In the "reaching" task, mice had to reach their paw into a slit and grab a seed. In the "capellini handling task" the mouse is given a 2.5 cm length of (I am not making this up) angel hair pasta and learns how to handle it for eating. Learning is measured by how fast the mouse eats the pasta.

learning how to eat pasta makes mouse cortical spines grow (source)


They found that spines grow during learning (not too surprising). But spines also grow when the mouse is exposed to a motor-enriched environment (like a mouse-sized playground).

Fu et al. 2012 (Figure 2C+D)

The interesting difference between learning a specific task rather than just playing is that the spines grow in distinct clusters when the mice are taught a learning task. C shows the total spine growth, while D shows the proportion of clustered spines to total spines. "Reach only" means the mice were only taught the reaching task, and "cross-training" means they were taught both the reaching task and the pasta handling task.

The authors explain two possible functions for these spine clusters:
"Positioning multiple synapses between a pair of neurons in close proximity allows nonlinear summation of synaptic strength, and potentially increases the dynamic range of synaptic transmission well beyond what can be achieved by random positioning of the same number of synapses."
Meaning spines that are clustered and receive inputs from the same neuron have more power to influence the cell than spines further apart.
"Alternatively, clustered new spines may synapse with distinct (but presumably functionally related) presynaptic partners. In this case, they could potentially integrate inputs from different neurons nonlinearly and increase the circuit’s computational power. "
Meaning that maybe the spines don't receive input from the same neuron, but are clustered so they can integrate signals across neurons more powerfully.

And of course...

"Distinguishing between these two possibilities would probably require circuit reconstruction by electron microscopy following in vivo imaging to reveal the identities of presynaptic partners of newly formed spines."
 More work is needed to figure out what is really going on.

 © TheCellularScale

Fu M, Yu X, Lu J, & Zuo Y (2012). Repetitive motor learning induces coordinated formation of clustered dendritic spines in vivo. Nature, 483 (7387), 92-5 PMID: 22343892

Thursday, November 29, 2012

Growing 3D Cells

Neurons don't grow in a vacuum. They have white matter fibers, other neurons, blood vessels, and all sorts of other obstacles to grow around.


Some NeuroArt (source)

A recent paper from France details the making of a 3D environment that can facilitate 'realistic' neural growth. Labour et al. (2012) created a collagen biomimetic matrix which contains nerve growth factor (NGF).

Labour et al., (2012) Figure 3
These scanning electron microscope images show the porous fibril texture of the collagen matrix. Most of the paper is spent explaining the methods for making this biomimetic matrix, but they also actually grow some pseudo-neurons (PC-12 cells) on the matrix.

They show that when cultured on top of this collagen surface, the cells extend neurites in three dimensions into the matrices and are affected by the NGF. (When there is no NGF, the neurites don't grow and the cells die.)

This paper is mostly about the methods, but I like the new possibilities that growing 3D cells opens up. With these biomimetic collagen matrices, the factors that cause specific dendritic arborizations in three dimensions can be analyzed. The environment can be completely controlled and the neurons easily visualized during growth. The authors suggest using these matrices to study neurodegeneration as well.

Another interesting thing this paper introduced me to is the 'graphical abstract.' I didn't know that that was a thing, but it seems like a good idea. However, trying to summarize an entire paper in one figure seems pretty difficult. Here is their attempt:


Labour et al. (2012) graphical abstract
I think it does actually get the feel of the paper across pretty well, though it's not really informative without the actual abstract next to it.


© TheCellularScale


Labour MN, Banc A, Tourrette A, Cunin F, Verdier JM, Devoisselle JM, Marcilhac A, & Belamie E (2012). Thick collagen-based 3D matrices including growth factors to induce neurite outgrowth. Acta biomaterialia, 8 (9), 3302-12 PMID: 22617741

Tuesday, November 20, 2012

Virtual reality for your robot cockroach

I have previously covered some interesting advances in the world of cyborg insects.

Biobot backpack (cockroach size) (source)
Latif and Bozkurt from North Carolina State University recently presented a paper (though I can't find a peer-reviewed publication on PubMed) explaining their Biobot. They use the Madagascar hissing cockroach...

Hissing Cockroach (source). Terrifying.
... and attach an electrically stimulating 'backpack' (see first picture). They then stimulate the antennae in a variety of ways to 'steer' the Biobot.

"In these studies, electrical pulses were applied to the insect to create biomechanical or sensory perturbations in the locomotory control system to steer it in desired directions, similar to steering a horse with bridle and reins." -Latif and Bozkurt

This is very similar to the Backyard Brains RoboRoach, but the system created by Latif and Bozkurt is extremely precise. Rather than just making the Biobot turn when stimulated, Latif and Bozkurt can make the cockroach walk a specified line.





Pretty cool. The authors note that generally the cockroaches want to walk straight until they encounter an obstacle (or stimulation). So, sure, this is sort of like steering a horse with reins, but the horse has to be trained to know what the bridle signals mean. This setup is more like creating a virtual reality for the cockroach, where it thinks that it has 'run into' something at certain points on the line. This is similar to creating a virtual reality for worms by stimulating specific neurons with light.

Of course the practical applications of this are a little iffy. People always seem to say that these little insect-bots could be of use in disaster settings where people need to get some ground-level surveillance of a rubble-littered area, but I think the scientific applications are what is really exciting. Being able to create a virtual reality of any shape or size could allow for tests of spatial navigation in the cockroach. You could even try to train the cockroach to find something or avoid something and then 'confuse it' by changing the virtual environment suddenly. Could it adapt?

© TheCellularScale

Friday, November 16, 2012

How to Build a Neuron: step 3

Steps 1 and 2 of neuron-building, as well as an important set of shortcuts can be found in the How to Build a Neuron index. Step 3 is deciding which simulation software or programming language you want to use.
Simulated Neuron in Genesis (source)
The big two are Genesis and Neuron. They are pretty similar in a lot of ways, but Genesis runs in Linux and Neuron runs in Windows. However, you can run Genesis in Windows if you install Cygwin, a Unix-like environment for Windows.

Both programs can read in morphological data, but they use different syntax and coding procedures. There are other types of neural simulators as well, and an ongoing problem in the field of computational neuroscience is compatibility between programs. If someone has done the work to make a beautiful Purkinje cell in Genesis like the one above, it will take a lot of time and effort to translate that neuron into a different simulator such as Neuron.

Gleeson et al., (2010) explains this problem and presents a possible solution in the form of the "Neuron Open Markup Language" or NeuroML.

"Computer modeling is becoming an increasingly valuable tool in the study of the complex interactions underlying the behavior of the brain. Software applications have been developed which make it easier to create models of neural networks as well as detailed models which replicate the electrical activity of individual neurons. The code formats used by each of these applications are generally incompatible however, making it difficult to exchange models and ideas between researchers....Creating a common, accessible model description format will expose more of the model details to the wider neuroscience community, thus increasing their quality and reliability, as for other Open Source software. NeuroML will also allow a greater “ecosystem” of tools to be developed for building, simulating and analyzing these complex neuronal systems." -Gleeson et al (2010) Author Summary

NeuroML is basically a "simulator-independent" neuronal description language. A neuron built with or converted to NeuroML should be able to run on Neuron, Genesis, and plenty of other platforms. Gleeson et al. validated NeuroML by using a simulated pyramidal neuron converted to NeuroML format and run with several different simulators.

Gleeson et al., (2010) Figure 7

Zooming in:

Neuron, Genesis, Moose, Psics comparison
All the simulators overlay so tightly that you can barely tell that they are separate lines.
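If you want to run the same kind of sanity check on your own model, comparing exported voltage traces from two simulators only takes a few lines; the file names and two-column format below are hypothetical placeholders:

```python
import numpy as np

def load_trace(path):
    """Expect two columns: time (ms), membrane potential (mV)."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1]

t_neuron, v_neuron = load_trace("pyramidal_neuron_sim.dat")
t_genesis, v_genesis = load_trace("pyramidal_genesis_sim.dat")

# Interpolate the GENESIS trace onto the NEURON time points and compare.
v_genesis_interp = np.interp(t_neuron, t_genesis, v_genesis)
diff = np.abs(v_neuron - v_genesis_interp)
print(f"max difference: {diff.max():.3f} mV, mean difference: {diff.mean():.3f} mV")
```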

So when building your neuron, take care to follow the NeuroML format, and then you and others can use it with any simulator you want.

© TheCellularScale

Gleeson P, Crook S, Cannon RC, Hines ML, Billings GO, Farinella M, Morse TM, Davison AP, Ray S, Bhalla US, Barnes SR, Dimitrova YD, & Silver RA (2010). NeuroML: a language for describing data driven models of neurons and networks with a high degree of biological detail. PLoS computational biology, 6 (6) PMID: 20585541


Sunday, November 4, 2012

Ketamine for depression via neurogenesis?

A lot of fuss has been made recently about the street drug "Special K" (ketamine). It's basically an anesthetic used in labs and veterinary offices to tranquilize mice, rats, cats, and (famously) horses, but recently it's been lauded as a newer, faster anti-depressant.

Ketamine: from the dealer or from the doctor? (image source)
The possibility that it might have near immediate anti-depressant effects on humans has been around for a little while, but the concept is picking up steam as new research finds mechanisms for how it might actually work in depressed patients. (I briefly mention one new study in an SfN neuroblogging post. )

An emerging theory is that depression is not so much a chemical imbalance as it is a loss of neurons. Thus the cure for depression is not restoring the balance of serotonin or dopamine, but restoring the growth of new neurons. Some suggest that this is how classic anti-depressants (like Zoloft) work, by fixing the neuron atrophy problem. This could also explain why these anti-depressants take so long to work, though I have expressed skepticism about this hypothesis.

So the question is: Does ketamine cause the growth of new neurons, help in their maturation, or prevent neuronal atrophy? Ketamine is an NMDA receptor antagonist, so it inhibits synaptic transmission. It doesn't inhibit all synaptic transmission like deadly poisons (tetrodotoxin, for example) do, but it blocks enough to change something in the brain. Knowing something about NMDA receptors, it was still hard for me to conceive of a connection between blocking them and neuronal growth.

A nice review by Duman and Li (2012) spells it out for me, explaining new research that links ketamine with the growth of new synapses.

Duman and Li 2012 figure 3
The idea is that ketamine blocks the NMDA receptors on the GABAergic (inhibitory) neurons, so there is less inhibition and more glutamate. When there is more glutamate, there is more BDNF (brain-derived neurotrophic factor). BDNF helps synapses grow by triggering a cascade of events (via mTOR) which causes more AMPA receptors to be inserted into the synapse, making the synapse stronger, more stable, and more mature.

The authors cite their previous Li et al., 2010 Science paper explaining that when they block mTOR with the drug rapamycin, the effects of ketamine on new spine growth disappear and its anti-depressant effects disappear. However, this is a study in rats and assessing the depressed state of a rat is as tricky as assessing a rat's post-traumatic stress. So the claim here isn't so much that ketamine causes neurogenesis, but that it could help new neurons become synaptically mature, and thus functionally useful. (Carter et al. is investigating this further)

As shiny and interesting as this is, I am not quite sold on it. I don't see how the NMDA antagonist is going to inhibit the inhibitory neurons more than the excitatory neurons, and I would love to see research showing how ketamine causes glutamate accumulation.

And as far as actually using it as a treatment for depression, there are some serious side-effects. Ketamine is a hallucinogenic street drug which can cause a schizophrenia-like state. Therefore, it seems unlikely that ketamine itself will ever be prescribed as an anti-depressant, but new research could reveal (or synthesize) other molecules that activate mTOR directly or somehow bypass the hallucinogenic aspect of ketamine.

For more, see some skeptical and critical analyses of human ketamine studies.

© TheCellularScale

Duman RS, & Li N (2012). A neurotrophic hypothesis of depression: role of synaptogenesis in the actions of NMDA receptor antagonists. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 367 (1601), 2475-84 PMID: 22826346

Li N, Lee B, Liu RJ, Banasr M, Dwyer JM, Iwata M, Li XY, Aghajanian G, & Duman RS (2010). mTOR-dependent synapse formation underlies the rapid antidepressant effects of NMDA antagonists. Science (New York, N.Y.), 329 (5994), 959-64 PMID: 20724638