
Tuesday, August 27, 2013

Philosophy of Computational Neuroscience

Just like experimental neuroscience, computational neuroscience can be done well or poorly.

computational models look beautiful (source)
This post was motivated by Janet Stemwedel's recent post in Adventures in Ethics and Science about the philosophy of computational neuroscience. There seem to be three views of the use of computational models in biology and neuroscience:

1. All models are bullshit.
2. Models rely on MATH, so of course they are right.
3. Some models are good and some are bad.

Obviously the first two are extreme views, usually posited by people who don't know anything about computational neuroscience, and I am clearly advocating the third. The only problem is that it is hard to tell whether a model is good or bad unless you know a lot about it.

So here are some general principles that can help you divide the good and the bad in computational neuroscience.

1. The authors use the correct level of detail.

devil's in the details (source)
If you are trying to test how brain regions interact with each other, you don't need to model every single cell in each region, but you do need enough detail to differentiate the brain regions from one another. Similarly, if you are trying to test how molecules diffuse within a dendrite, you don't need to model a whole cell, but you need enough detail to differentiate one molecule type from another. If you are trying to test how a cell processes information, you need to have a cell, as you may have learned in 'how to build a neuron.' Basically, a model can be bad simply because it is applied to the wrong question.

2. The authors tune and validate their model using separate data.

When you are making a model you tune it to fit data. For example, in a computational model of a neuron you want to make sure your particular composition of channels produces the right spiking pattern. However, you also want to validate it against data. So how is tuning different from validating? Tuning is when you change the parameters of the model to make it match data. Validating is when you check the tuned model to see if it matches data. Good practice in computational neuroscience is to tune your model to one set of data, but to validate it against a different set of data.
For example, if a cell does X and Y, you can tune your model to produce effect X, but then check that the parameters that make it do X also make it do Y. Sometimes this is not possible: maybe there is not enough experimental data out there. But if it is not possible, you should at least test the robustness of your model (see point 3).
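The tune-then-validate workflow can be sketched in a few lines of code. This is a minimal toy, not any published model: the threshold-linear firing-rate function, the parameter name `gain`, and all the data points are invented for illustration. The key point is structural: the held-out validation set is never touched during tuning.

```python
# Hypothetical sketch: tune a one-parameter firing-rate model on one
# data set, then validate the tuned parameter against a second one.
# Model, parameter, and numbers are all made up for illustration.

def firing_rate(current, gain):
    """Toy model: rate is a threshold-linear function of input current."""
    return max(0.0, gain * (current - 1.0))  # 1.0 nA threshold (assumed)

# Tuning data: (input current, measured rate) pairs (fabricated)
tuning_data = [(2.0, 5.1), (3.0, 10.2), (4.0, 14.8)]
# Validation data: held out, never used during tuning (also fabricated)
validation_data = [(2.5, 7.4), (3.5, 12.6)]

def total_error(data, gain):
    """Sum of squared differences between model and data."""
    return sum((firing_rate(i, gain) - r) ** 2 for i, r in data)

# Tuning: crude grid search over the gain parameter
best_gain = min((g / 100 for g in range(1, 1001)),
                key=lambda g: total_error(tuning_data, g))

# Validation: check the *tuned* model against data it has never seen
val_error = total_error(validation_data, best_gain)
print(best_gain, val_error)
```

If the validation error is small, the tuned parameters generalize beyond the data they were fit to; if it is large, the model was merely curve-fit.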

3. The authors test the robustness of their model.

A robust computational model can be delicious (source)
One problem with computational models is that the specific set of parameters you've found by tuning might not be the 'right ones.' In fact, they probably aren't. There are many different sets of parameters that can make a neuron spike slowly, for example, and the chance that you hit on exactly the correct combination is very low. But that doesn't mean the model is not useful. You can still use it to test effects that are not strongly altered by small changes in these parameters. So you need to test whether the specific effect you are studying is robust to parameter variation. If you are testing effect Q, you can increase the sodium channel conductance by 10%, or the network size by 20%, and see if you still get effect Q. In other words, is 'effect Q' robust to changes in sodium channels or network size? If it is, then great! Your effect is not some weird fluke due to the exact combination of parameters you have used.
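A robustness check like this is easy to automate: perturb each parameter one at a time and ask whether the qualitative effect survives. The sketch below is purely illustrative; the criterion for 'effect Q', the parameter names, and the values are all invented (the conductances are merely Hodgkin-Huxley-flavored numbers, not a real model).

```python
# Hypothetical sketch of a robustness check: scale each parameter by
# +/-10% and ask whether the qualitative effect ("effect Q") survives.
# The model and its criterion are invented for illustration.

def effect_q(params):
    """Toy 'effect': the model shows effect Q if sodium conductance
    sufficiently exceeds potassium conductance (made-up criterion)."""
    return params["g_na"] > 1.5 * params["g_k"]

baseline = {"g_na": 120.0, "g_k": 36.0}  # tuned point (assumed values)
assert effect_q(baseline)  # the effect holds at the tuned parameters

robust = True
for name in baseline:
    for factor in (0.9, 1.1):  # +/-10% perturbation of one parameter
        perturbed = dict(baseline)
        perturbed[name] *= factor
        if not effect_q(perturbed):
            robust = False
            print(f"effect Q lost when {name} is scaled by {factor}")

print("robust to +/-10% changes:", robust)
```

A real study would vary parameters jointly and over larger ranges, but even this one-at-a-time loop catches effects that exist only at a single knife-edge parameter setting.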

These are the main things I try to pay attention to, but I am sure there are other important things to keep in mind when making models and reading about them. What are your thoughts?

© TheCellularScale


Monday, May 27, 2013

What is an Experiment?

What is an experiment?
Experimenting with color (source)

People use the term 'experiment' to mean a lot of things. One may say "she experimented with drugs" or "she performs experimental music". Someone might 'experiment with one's hair' or 'perform a thought experiment'.

All of these uses of the word experiment have distinct connotations, but most of them essentially mean 'to try something and see what happens'. In the examples above, most of the phrases also imply that the experiment is something new. If she experiments with her hair, she's probably trying some new style and seeing if she likes it. If she performs experimental music, she's probably not following the conventional rules for the music she is playing.

Are these kinds of 'experiments' different from the experiments that scientists do? Well, yes and no. The basic definition of an experiment as 'trying something [often new] and seeing what happens' is pretty much what scientists do. So what's different? Why isn't someone's 'hair experiment' publishable in a scientific journal?

Mythbusters would have you believe that the only difference between science and screwing around is writing it down:


And that is sort of true.

But what really makes a scientific experiment scientific is controls. In our hair example, you can experiment with your hair by dying it black, and seeing if you like it. But that's not the scientific experiment. To be scientific you would have to decide how to measure how much you like your new hair color. You could do this by filling out a survey each day asking you how many times you thought you were pretty or rating your confidence on a scale of 1-10. You could fill this survey out for a week and then dye your hair and fill the survey out for another week. You could then compare the scores and decide if the new black hair had a 'significant' impact on your self-image. 

Let's say it does impact your self-image, and you report higher self-confidence that second week. But what if you feel different just because you have a new hair color, not because you have black hair?

Well, you would want to do a control experiment, which controls for the newness of the hair color. You could control for novelty by dying your hair yet a different color, and taking the survey for another week. Or you could take the survey two months after you dyed your hair black to see if you still report higher confidence or if your confidence has dropped back down to normal.


This is not a perfect experiment by any means; it's not even a clever or well-designed one. But it is somewhat scientific, and it illustrates what I think is the most important difference between experimenting as in trying something new and experimenting as in trying to find something out:

The control group


In addition, here is a great example of how important the control group is in science. (See the epilogue)

© TheCellularScale



Monday, April 22, 2013

Connecting Form and Function: Serial Block-face EM

The retina is a beautiful and wondrous structure, and it has some really weird cells.

Retina by Cajal (source)
Retinal Ganglion Cells (RGC) have all sorts of differentiating characteristics. Some are directly sensitive to brightness (like rods and cones), while some are sensitive to the specific direction that a bar is traveling.

I am discussing really amazing new techniques to see inside cells this month, and have already posted about the magic that is Array Tomography. Today we'll look at another amazing new technique that (like array tomography) combines nano-scale detail with a scale large enough to see many neurons at once. This technique is called Serial Block-face Electron Microscopy (SBEM), and was recently used to investigate how starburst amacrine cells control the direction-sensitivity of  retinal ganglion cells.


Serial Block-face EM (source)

SBEM images are acquired by embedding a piece of tissue (like a retina) in a firm substance and slicing it super-thin (tens of nanometers thick) with a diamond blade. The whole slicing apparatus sits directly under a scanning electron microscope, so as soon as the blade cuts, an image is taken of the remaining surface. Then another thin slice is shaved off, the next image is taken, and so on.

Using this technique, Briggman et al. (2011) are able to trace individual neurons and their connections for a (relatively) large section of retina. What is so great about this paper is that before they sliced up the retina, they moved bars around in front of it and measured the directional selectivity of a bunch of neurons. Then, using blood vessels and landmarks to orient themselves, they were able to find the exact same cells in the SBEM data and trace them.

Briggman et al. (2011) Fig1C: Landmark blood vessels
The colored circles above represent the cell bodies and the black 'tree' shape are the blood vessel landmarks.

Once they found the cell bodies, they could trace the cells through the stacks of SBEM data. What is really neat is that you can try your hand at this yourself: this exact data set has been turned into a game called EYEWIRE by the Seung lab at MIT.

Reconstructing the cells, they could not only tell which cells connected to which other cells, but they could also see exactly where on the dendrites the cells connected. This is the really amazing part. They found that specific dendritic areas made synapses with specific cells.

Briggman et al. (2011) Fig4: dendrites as the computational unit

This starburst amacrine cell overlaps with many retinal ganglion cells (dotted lines represent the dendritic spread of individual RGCs)... BUT its specific dendrites (left, right, up, down, etc.) synapse selectively onto RGCs sensitive to a particular direction. Each color represents synapses onto a specific direction-sensitivity; e.g., yellow dots are synapses from the amacrine cell onto RGCs that are sensitive to downward motion.

This suggests that each individual dendritic area of these starburst amacrine cells inhibits (probably) a specific type of RGC, and that these dendrites act relatively independently of one another.

"The specificity of each SAC dendritic branch for selecting a postsynaptic target goes well beyond the notion that neuron A selectively wires to neuron B, which is all that electrophysiological measurements can test. Instead the dendrite angle has an additional, perhaps dominant, role, which is consistent with SAC dendrites acting as independent computational units."  -Briggman et al (2011)(discussion)

These cells are weird for so many reasons, but the ability of the dendrites to act so independently of one another is a new and exciting development that I hope to see more research on soon.

© TheCellularScale


Briggman KL, Helmstaedter M, & Denk W (2011). Wiring specificity in the direction-selectivity circuit of the retina. Nature, 471 (7337), 183-8 PMID: 21390125

Wednesday, April 17, 2013

Van Gogh was afraid of the moon and other lies

I remember the first time I realized just how easily false information gets spread about.

A terrifying starry night
I was in French class in high school. Our homework had been to find out 1 interesting fact about Van Gogh and tell it to the class. When it was my turn, I said some boring small fact that I no longer remember. My friend sitting behind me, however, had a fascinating fact: When Van Gogh was a young child, he was actually afraid of the moon.

The teacher and the class were all quite impressed and thought about how interesting that was and how that fact might be reflected in the way that he paints the Starry Night. Though this fact was new to everyone, including the teacher, no one even thought to question its truth.

In fact, the teacher was so enthralled by this idea that she passed the information on to all the other French classes that day.

When talking to my friend later that day, he admitted that he had not done the assignment, and just made the 'fact' up. I was completely surprised, not only that someone had not done their homework *gasp*, but that I hadn't even thought to question whether this was true or not. 
The best lies have an element of truth (source)
Misinformation like this spreads like wildfire and is exceptionally difficult to undo. The more things you can link a piece of information to in your brain, the more true it may seem, and even after you learn that it's not true, you still might inadvertently believe it or fit new ideas into the context it creates. The myth that the corpus callosum is bigger in women than in men is just one of those things that is easy to believe.

An interesting paper by Lewandowsky et al. (2012) explains how this kind of persistent misinformation is detrimental to individuals and to society, using the example of vaccines causing autism. This particular piece of misinformation is widely believed despite numerous attempts to publicize the correct information and the most recent scientific findings showing no evidence for a link between the two.

The authors of this paper give some recommendations for making the truth more vivid and effectively replacing the misinformation with new, true information. For example:
"Providing an alternative causal explanation of the event can fill the gap left behind by retracting misinformation. Studies have shown that the continued influence of misinformation can be eliminated through the provision of an alternative account that explains why the information was incorrect." Lewandowsky et al. (2012)
Misinformation can be replaced with correct information, but it takes much more work to dislodge a 'false fact' after it has spread than to get the truth out there in the first place.

This paper is also covered over at The Jury Room.


© TheCellularScale


Lewandowsky, S., Ecker, U., Seifert, C., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing Psychological Science in the Public Interest, 13 (3), 106-131 DOI: 10.1177/1529100612451018

Sunday, April 7, 2013

LMAYQ: Scales

The word "scale" can mean many things, and The Internet can't yet use context to tell the difference. So for this issue of Let Me Answer Your Questions, here are questions about scales that The Internet thinks The Cellular Scale can answer. As always, these are real true search terms, and all the posts in the LMAYQ series can be found here.

A Question of Scale (source)


1. "Can you give a rat scales?"

I have never thought to ask this question, but it is an interesting one. If you can grow weird things on mice, like ears, then why not scales? Well, here's the thing: the 'ear mouse' is growing skin like it normally does; the skin is just growing over an ear-shaped mold. It would actually be harder to make a rat grow scales. If it is possible, it would take some mastery in genetic manipulation...

Bee-Rat, the ultimate achievement in genetic manipulation (source)

Some sniffing around on Wikipedia taught me that scales have evolved several times (in fish, reptiles, arthropods, etc.). It might be possible to make a rat (or mouse) grow scales by isolating the scale genes from these other animals and inserting them into the rat genome. However, since rats already grow fur, teeth, and nails, which are related to scales, it might be possible to manipulate those existing features to become more scale-like.

But to answer your question, no. I am pretty sure we can't give a rat scales yet.


2. "Does the giant squid have scales?"

Another interesting question. The quick answer is no, giant squid and colossal squid (like their normal squid counterparts) have smooth skin that does not contain scales. This isn't too surprising because squid aren't fish, they are cephalopods (like octopus and cuttlefish). Cephalopods sometimes have shells, but not scales. 

Zoomed in view of Squid Skin (source)
Instead of protective scales, cephalopods use pigment in their skin to camouflage themselves or confuse predators.

Blue Octopus, Eilat Israel (I took this picture)
This octopus turning blue sure confused me.


3. "How to turn your cell phone into a scale."

There are a couple of ways you might think a cell phone could be used as a scale. One is through the touch screen sensor. However, most smart phones now have capacitive touch screens, which respond to the electrical change your finger induces on the screen. That means the amount of pressure applied doesn't matter, so you couldn't use a smart phone as a scale that way.

Another way is through the accelerometer. Smart phones also have accelerometers, which you could possibly use to measure the force of something moving. But this wouldn't tell you the mass of the object unless you already knew the acceleration. (force = mass * acceleration). 

But really the only way that seems to actually work (albeit slowly and with questionable accuracy) is using the 'tilt sensor' of the smart phone.

But really, you might just as well make your own scale if you are weighing out small amounts of something.

Most importantly it's helpful to know what some typical objects around the house weigh, so you can use them to calibrate a phone or homemade scale.  Here are some useful weights:

1. US penny: 2.5 g
2. US nickel: 5 g
3. 1 ml of water: 1 g
4. 1 euro coin: 7.5 g
5. British £1 coin: 9.5 g
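Calibration with known weights boils down to fitting a conversion factor. Here is a minimal sketch of that idea; the raw sensor readings are fabricated (a real phone tilt sensor or homemade scale would supply them), and the no-offset linear fit is a simplifying assumption.

```python
# Hypothetical sketch: use coins of known mass to calibrate an
# uncalibrated scale that returns arbitrary raw readings.
# The raw readings below are made up for illustration.

known_masses = [2.5, 5.0, 7.5, 9.5]         # penny, nickel, euro, pound (g)
raw_readings = [51.0, 101.8, 152.1, 193.0]  # fabricated sensor values

# Least-squares fit of reading = slope * mass (assumes zero offset)
slope = (sum(r * m for r, m in zip(raw_readings, known_masses))
         / sum(m * m for m in known_masses))

def to_grams(reading):
    """Convert a raw sensor reading to grams via the calibration slope."""
    return reading / slope

print(round(to_grams(20.3), 2))  # weigh a small pinch of something
```

More calibration points, spread across the range you care about, give a better fit; a real scale would also need an offset term for the weight of whatever tray holds the sample.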



4. "What is the scale on the cellular level?" 

Finally, a relevant question! Most cells are measured in microns, with a red blood cell being about 6-8 microns in diameter.

blood (source)
Neurons on the other hand can have somas (cell bodies) ranging from tiny (5 micron diameter) to large (50 micron diameter). But even for neurons with small somas, the dendritic or axonal arbors can be gigantic. 

Some neurons in Aplysia (a sea slug) can get up to 1 mm (1,000 microns) in diameter, which is ridiculously huge for a neuron. For perspective, C. elegans, a nematode frequently used in neuroscience research, is about 1 mm in length. The whole animal! Including its 302 neurons!

© TheCellularScale



Monday, March 25, 2013

Guest Post: AMPA Receptors are not Necessary for long term potentiation

Today's post is brought to you by @BabyAttachMode, who is an electrophysiologist and blogger. Today we are blog swapping! I have a post over at her blog and her post about AMPA receptors and LTP is here. So enjoy, and when you're done reading about the newest advances in synaptic plasticity here, you can head over to InBabyAttachMode and read about my personal life.
 
AMPA Receptors are not Necessary for long term potentiation

Science is most interesting to me when you’re testing a hypothesis, and not only do you prove the hypothesis to be false, but you discover something unexpected. I think that happened to Granger et al. They were trying to find which part of the AMPA receptor is necessary for long-term potentiation (LTP), the process that strengthens the connection between two brain cells when that connection is used often. Indeed they find that AMPA receptors are not necessary at all for LTP, which is very surprising given the large body of literature describing how the GluA1 subunits of the AMPA receptor, through interactions with other synaptic molecules that bind to the intracellular C-tail (the end of the receptor that is located inside the cell), are inserted into the synapse to induce LTP.
LTP (source)
The authors made an inducible triple knock-out, which means that they could switch off the genes for the three different AMPA receptor subunits GluA1, GluA2 and GluA3. This way, they ended up with mice that had no AMPA receptors at all. The authors were then able to selectively put back one of the AMPA receptors, either the entire receptor or a mutated receptor. By inserting mutated receptors, for example a receptor that lacks its intracellular C-tail that was thought to be important for insertion of the AMPA receptor into the synapse, they could then study whether this mutated receptor was still sufficient for induction of LTP.

Surprisingly, they found that deleting the C-tail of the GluA1 subunit does not change the cell’s ability to induce LTP. Even more so, they showed that you don’t even need any AMPA receptor to still be able to induce LTP; the kainate receptor (another type of glutamate receptor that has never been implicated in LTP) can take over its job too.

Figure 6C from Granger et al. (2013). Kainate receptor overexpression can lead to LTP expression, without the presence of AMPA receptors.

About this surprising discovery the authors say the following:
"These results demonstrate the synapse's remarkable flexibility to potentiate with a variety of glutamate receptor subtypes, requiring a fundamental change in our thinking with regard to the core molecular events underlying synaptic plasticity."
Of course if you say something like that, the main players in the LTP field will have something to say about it, and they did. Three giants in the field of synaptic physiology commented in the journal Nature, but their opinions differed. Morgan Sheng called it "a step forward", whereas Roberto Malinow and Richard Huganir called it "two steps back", saying that LTP without AMPA receptors can only happen in the artificial system the authors used to study this. They expect that cells lacking all three AMPA receptor subunits will look so different from normal cells that the results are difficult to interpret.

Either way, this paper opens new views and questions to how LTP works, and whether AMPA receptors are as important as we thought.


Granger AJ, Shi Y, Lu W, Cerpas M, & Nicoll RA (2013). LTP requires a reserve pool of glutamate receptors independent of subunit type. Nature, 493 (7433), 495-500 PMID: 23235828
 
Sheng M, Malinow R, & Huganir R (2013). Neuroscience: Strength in numbers. Nature, 493 (7433), 482-3 PMID: 23344353

Friday, March 15, 2013

Is it 'Important' or is it 'valuable'?

We've recently discussed dopamine as a reward prediction signal. But that is really just the start of the complicated dopamine story.

Dopamine's role in reward and punishment (by the hiking artist)
Some research groups have also found that dopamine neurons respond to aversive stimuli, like an air puff to the face or an electric shock. This finding seems to be completely incompatible with the idea that dopamine is a signal for reward.

Luckily some scientists took the time to try to resolve this discrepancy. Bromberg-Martin, Matsumoto, and Hikosaka (2010) have written an excellent review paper explaining that some dopamine neurons do code for value (reward), but other dopamine neurons code for salience (importance).

Differential Dopamine Coding (Bromberg-Martin et al., 2010 Fig 4)

When researchers are recording from a value coding dopamine neuron, it looks like the neuron responds to reward and actually reduces its response to the air puff. This makes sense as a 'dopamine = good' signal.

However, when a researcher is recording from a salience coding dopamine neuron, it looks like the neuron is responding equally to the good thing (reward) and the bad thing (air puff). This is confusing if you think 'dopamine = good', but makes sense if you think 'dopamine = important'. When the cue comes on (a light or a tone that signifies a reward is coming next or an air puff is coming next), these dopamine neurons fire if that cue means something.


Instead of just being confused about why sometimes dopamine would code for value and sometimes it would code for salience, Hikosaka's group showed that these two types of neurons are actually separate populations, and even seem separated in space.
(Bromberg-Martin et al., 2010 Fig 7B)
The value dopamine neurons are more ventral in the (monkey) brain, while the salience dopamine neurons are more dorsal-lateral. Importantly these two populations of neurons go to slightly different parts of the striatum and receive signals from different parts of the brain. The review paper suggests that the salience coding neurons receive their input from the central nucleus of the amygdala, while the value coding neurons receive their input from the lateral habenula-RMTg pathway.

The important thing here is that dopamine does not do just one thing to the brain. It doesn't just tell the rest of the brain 'yay, you won!' or 'you want that' etc... It says different things depending on different specific conditions. 

Dopamine doesn't 'mean' anything; the cell it comes from and the cell it goes to determine what it does. It certainly can't be classified as the 'love molecule'.

 © TheCellularScale


Bromberg-Martin ES, Matsumoto M, & Hikosaka O (2010). Dopamine in motivational control: rewarding, aversive, and alerting. Neuron, 68 (5), 815-34 PMID: 21144997


Monday, March 4, 2013

Honoring a Legend

The Cellular Scale is at the International Basal Ganglia Society meeting this week (#IBAGS2013), and finally has internet!

Sunrise over the Gulf of Aqaba (I took this picture)

It's already been two days of conferencing, and I plan to mainly write some follow up posts when I get back. But I will just briefly mention the "Lifetime Member" lecture that was given on the first evening of the conference.

Mahlon Delong (source)
This year's lifetime member is Mahlon DeLong.
I've written before about deep brain stimulation (DBS) as a treatment for Parkinson's Disease, and DeLong has done some fascinating work that led up to DBS in the subthalamic nucleus (STN).

One particular treat was to see a video during the talk of the very first attempt at alleviating Parkinson's symptoms through a subthalamotomy, the lesion of the subthalamic nucleus.

A Parkinson's Disease monkey was given the subthalamotomy on only one side of the brain, and the video shows Mahlon DeLong interacting with the monkey and noting that its treated side is less stiff than the untreated side. A second video shows the monkey later able to move its arm with no problems.

It was exciting to see this sort of 'moment of discovery' from 1989. There were no cries of "Eureka!" or anything; it was more of a 'hm, interesting' tone. You can actually hear his post-doc on the video saying (paraphrasing from memory) "the right side has better tone, at least Mahlon thinks so" and then starting to laugh.

(source)


One other cool thing about Dr. DeLong is that he is Muhammad Ali's physician.

© TheCellularScale

Wednesday, February 27, 2013

GABA, how exciting!

I would like to thank my good friend Anonymous for asking me a great question on a previous post.

Anonymous asks:
"Are there any known transmitters in the NS that activate both inhibitory receptor subtypes AND excitatory receptor subtypes? Or does every known transmitter activate EITHER a bunch of excitatory subtypes OR a bunch of inhibitory subtypes?"
 (btw. This doesn't qualify as a LMAYQ post because it's a real true question that someone directly asked, not a search term)

While I don't know of any instances of glutamate (excitatory) activating GABA (inhibitory) receptors or of GABA activating glutamate receptors, there is an interesting little way that GABA can activate an inhibitory receptor, but actually help excite the cell. 

GABA receptor (source)

Here's how that works: GABA(A) receptors are permeable to chloride ions, and as the picture above shows, chloride ions (Cl-) are negatively charged. When GABA binds to the receptor, the receptor opens and chloride ions rush in, bringing their negative charge with them. This hyperpolarizes the cell, making its membrane potential more and more negative, which takes it further and further from the threshold where it will fire an action potential.

BUT... if there is a lot of chloride inside the cell already (or if the cell is resting more negatively than the chloride reversal potential), chloride will actually flow out of the cell, taking its negative charge with it. Negative ions flowing out depolarize the neuron, making its membrane potential less negative, which brings it closer and closer to the threshold where it will fire an action potential.
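Which way chloride flows is set by its reversal potential, which the Nernst equation gives from the inside and outside concentrations. Here is a small sketch of that calculation; the concentration values are illustrative round numbers, not measurements from any particular cell.

```python
# Hedged sketch: Nernst equation for the chloride reversal potential.
# Concentrations below are illustrative, not measured values.
import math

def nernst_mV(conc_out, conc_in, z=-1, temp_c=37.0):
    """Reversal potential (mV) for an ion of valence z (Cl- has z = -1)."""
    R, F = 8.314, 96485.0          # J/(mol*K), C/mol
    T = temp_c + 273.15            # kelvin
    return 1000.0 * (R * T / (z * F)) * math.log(conc_out / conc_in)

# With low internal chloride, E_Cl sits well below a typical resting
# potential, so opening GABA(A) channels hyperpolarizes the cell.
print(round(nernst_mV(120.0, 10.0), 1))

# If internal chloride accumulates, E_Cl rises above rest, and the very
# same GABA(A) current depolarizes the cell instead.
print(round(nernst_mV(120.0, 30.0), 1))
```

The sign flip falls out of the logarithm: raising internal chloride shrinks the concentration ratio and pulls E_Cl toward zero, so whether GABA hyperpolarizes or depolarizes depends on where E_Cl sits relative to the resting potential.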

GABA reversing at -62mV (source)

A paper published last year in the Journal of Neuroscience shows that in a model of a hippocampal neuron, when a strong excitatory (glutamate) stimulation happens right after a GABA stimulation close by on the dendrite, the cell is actually more likely to fire than when the glutamate stimulation occurs on its own. This effect is dependent on the location of the GABA stimulation along the dendrite.


Chiang et al., 2012 Figure 4E (GPSP in the dendrite)

This figure shows that a GABA stimulation (first dotted line, blue trace) can push the glutamate (excitatory) stimulation (second dotted line, red trace) up to the point of firing an action potential (green trace). This paper also showed that GABA can still inhibit the action potential in these cells; it just has to arrive at the soma at almost the same time as the glutamatergic input.

Chiang et al., 2012 Figure 4G (GPSP in the soma)

So there you have it: GABA enhancing the likelihood of an action potential and acting excitatory sometimes, and inhibitory other times.


 © TheCellularScale



Chiang PH, Wu PY, Kuo TW, Liu YC, Chan CF, Chien TC, Cheng JK, Huang YY, Chiu CD, & Lien CC (2012). GABA is depolarizing in hippocampal dentate granule cells of the adolescent and adult rats. The Journal of neuroscience : the official journal of the Society for Neuroscience, 32 (1), 62-7 PMID: 22219270

Sunday, February 24, 2013

Scientizing Art

I've always been fascinated with the way the eye moves around a piece of art.

Andrew Wyeth's "Christina's World" (or as I looked up "that painting of a girl in a field looking at a house")

This piece by Andrew Wyeth is an obvious example of an artist completely controlling your gaze. There are pretty much no options here. You look at the girl and then you follow her gaze to the house. You probably then take a quick glance at that other house/barn to the left, and then maybe follow the edge of the light circle around the houses. (It's my opinion that that is how the eye should go on this painting, but I have no eye tracking data to support it.)

A paper last year in PLoS One really tries to 'scientize' this process by testing what factors determine eye movements and the 'clusters' where the eye tends to fall. Massaro et al. (2012) compare dynamic and static images, and images that contain human subjects or nature subjects. Their cluster analysis overlaid on classic paintings makes for quite interesting images:

The next installment at MoMA

This one is a dynamic human image. Each patch of color shows a part of the painting where the eye lingers (face, hands, ...crotch...). The authors do all sorts of interesting analysis on this and other paintings, having participants rate each painting for 'movement' or for 'aesthetic value', and since the paper is open access, anyone can read the whole thing here, even without university access to journal publications.

One interesting thing that the authors find is that pictures containing humans have fewer clusters than pictures of nature. I expect this is because certain aspects of humans (faces, hands ...crotches...) are so salient and the brain focuses directly on them, while all the branches of a tree for example have about equal 'meaning' for a person.

science creates modern art
 Another great image from this paper. The authors show how much gazing was done at different parts of a painting through a heat map. This one is a human static image. The end result is actually quite haunting because the place that you want to look is blanked out (sort of like a Magritte painting).

So here are my questions: If someone looks at a blank page, where does their eye naturally go? Is there some sort of common pattern that most people use just to scan an area? Do chimpanzees use a similar pattern to scan a blank page? Does everyone have their own unique scanning pattern? Or is it just pretty much random? 

And here's an idea for artists: Buy yourself an eye tracker and have customers come use it and stare at a blank page. Trace their eye movements and then create a dynamic painting (or T-shirt, or napkin drawing) that follows the person's natural scanning patterns. This would be the ultimate in commissioned custom art! (Then give me one for free, because I think this sounds like fun.)

© TheCellularScale

ResearchBlogging.org
Massaro D, Savazzi F, Di Dio C, Freedberg D, Gallese V, Gilli G, & Marchetti A (2012). When art moves the eyes: a behavioral and eye-tracking study. PloS one, 7 (5) PMID: 22624007


Saturday, February 2, 2013

LMAYQ: Let me do your homework for you

Sometimes reading the textbook is just too hard. And sometimes it's much easier just to type your exact homework question into a search engine and find the answer. Before we get started you might want to take a look at Smith and Wren (2010) "What is Plagiarism and how can I avoid it?" 

This edition of Let Me Answer Your Questions will address 'homework questions.' As always, you can find previous LMAYQ questions here.

Tough Homework Questions are for the Internet (source)


1. "cells that fire together a) wire together. b) definitely don’t wire together. c) become overheated and die. d) wire with inactive cells."

Ok, student. If you have to turn to the Internet for this multiple choice question, you need a serious lesson in test-taking skills. Here's a tip: if you don't know the answer, eliminate the options that are obviously wrong. A good rule to follow is that if an answer says definitely or all or always, it is unlikely to be the right one. So we can eliminate B. If you know anything about neurons from your class, you should at least know that neurons fire; if they didn't fire, they wouldn't 'work.' And if all the cells that fired together overheated and died, you would basically have no neurons left in your brain after reading this sentence. So you can eliminate C. Now you have a 50-50 chance of guessing correctly. Not too bad. But seriously, if one answer creates a cute little rhyme with the question... it's probably going to be that one. So yes, neurons that fire together wire together.
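For the curious, the rhyme describes Hebbian plasticity, and the simplest version of the rule fits in a few lines of Python. This is a toy sketch of my own with a made-up learning rate and made-up spike trains, not anything from a real model:

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Increase the synaptic weight w only when the presynaptic and
    postsynaptic cells are active together (both equal 1)."""
    return w + eta * pre * post

w = 0.0
pre_spikes  = [1, 1, 0, 1, 0]  # presynaptic activity over 5 time steps
post_spikes = [1, 0, 0, 1, 1]  # postsynaptic activity over the same steps
for pre, post in zip(pre_spikes, post_spikes):
    w = hebbian_update(w, pre, post)

print(w)  # the weight grew only on the two steps where both cells fired
```

Cells that fired at different times (steps 2 and 5 here) contribute nothing, which is the whole point of the rhyme.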

2. "explain simply where the hippocampus is"

The hippocampus is one of the most famous brain structures because it has to do with encoding new memories. Its name comes from the Greek hippos (horse) and kampos (sea monster), because it looks like a seahorse:
Hippocampus = Seahorse (source)
(By the way, potamos = river, so a hippopotamus is a 'river-horse'.) Back to the hippocampus: it is a structure in the brain, and it is located subcortically, meaning under the cortex. Specifically, it's located under the temporal lobe of the cortex. There are two of them, one on each side of the brain.

3. "Why are neurons and blood cells structured and shaped differently from each other?"

Neuron and Blood cell Knitting (by Estonia76)
This is a great question, but it has the ring of 'I need help with my homework' about it. I talk a lot about the shapes of neurons in this blog, usually speculating on why different neurons would be shaped differently from each other. But this is a good question, why do neurons have dendrites and axons in the first place?
Well, basically, neurons need to receive and transfer information, while blood cells need to physically move to transfer oxygen. Neurons stay in place, while blood cells travel all over the body. Blood cells need to be small and hydrodynamic to flow through your blood vessels. Neurons need to 'cover space' to contact many other neurons, so they have branching dendrites and axons.


© TheCellularScale


ResearchBlogging.org
Smith N Jr, & Wren KR (2010). Ethical and legal aspects part 2: plagiarism--"what is it and how do I avoid it?". Journal of PeriAnesthesia Nursing, 25 (5), 327-30 PMID: 20875892

Wednesday, January 30, 2013

Intuition or a sense of Smell?

I've long been fascinated by the idea that those feelings often attributed to 'intuition' or 'following your gut' might occur physiologically in the form of odor cues that we don't consciously register.

Intuition or Olfactuation? (source)
An example of this might be when you can just 'tell something is wrong' in a situation and decide to leave, and later find out that something bad happened that evening. These sorts of stories are often used as evidence that people have psychic powers of some kind, and are equally often dismissed as just a coincidence.

But another possibility is that humans communicate through scents more than we realize. Maybe you could actually 'smell something is wrong' rather than supernaturally 'tell something is wrong' in the above hypothetical situation.

Researchers in the Netherlands tested whether the feelings of 'disgust' and 'fear' could be communicated through smell. They had guys watch scary parts of horror movies or disgusting graphic parts of MTV's Jackass while wearing 'sweat pads' in their armpits.

Who knew this would contribute to SCIENCE?

They then had female volunteers smell the sweat pads and measured their facial motions to see if the expressions they made were more like fear or disgust.

Importantly the protocol was double-blind, so neither the experimenters handing out the sweat pad vials, nor the participants had any idea what 'emotion' was sweated into those pads.

And they found what they thought they would find: the 'fear muscle' (medial frontalis) was most active in the women smelling the sweat of the horror-watching men, and the 'disgust muscle' (levator labii) was most active in the women smelling the sweat of the Jackass-watching men. In the authors' words (stats taken out for readability):
"Moreover, fear chemosignals generated an expression of fear and not disgust, disgust chemosignals induced a facial configuration of disgust rather than fear, and neither fear, nor disgust, were evoked in the control condition" de Groot et al. (2012)
So at very very close range (like nose in armpit), it seems that emotional signals can be transmitted through scent.
The smell of fear (source)

A quick side note: the scent in this study was created by men and smelled by women. I wonder if this specific gender combination is necessary for the scent-based communication. You would think men smelling men and women smelling women would have the same effect, but they did not investigate other combinations.

If you learn anything from this, let it be not to go see a disgusting movie on a first date; you might end up repulsing each other with your 'disgust sweat' later.

© TheCellularScale

ResearchBlogging.org
de Groot JH, Smeets MA, Kaldewaij A, Duijndam MJ, & Semin GR (2012). Chemosignals communicate human emotions. Psychological science, 23 (11), 1417-24 PMID: 23019141

Tuesday, January 22, 2013

How to Build a Neuron: Step 5

And now, the final step in how to build your computational model of a neuron: Add Synaptic Channels. All the steps in this series can be found here.
Synapses connect neurons (source)
So you already have a neuron, and you've added intrinsic channels to it. The next thing you want to do is add synaptic channels so you can hook this neuron up to other cells.

The main synaptic channels you want to add are the excitatory channels, AMPA and NMDA, and the inhibitory channel, GABA. These channels don't have the same kind of activation and inactivation curves as the intrinsic channels do, because they aren't activated by voltage; they are activated by a neurotransmitter.

AMPA and NMDA receptors are activated primarily by glutamate, and cause an influx of sodium and calcium ions. Since both sodium and calcium ions are positively charged, this depolarizes the cell membrane and brings it closer to firing an action potential.

AMPA receptors (source)
GABA receptors, on the other hand, are primarily activated by GABA, and cause an influx of chloride ions into the cell. Because chloride ions are negatively charged, this hyperpolarizes the cell membrane and moves it further away from firing an action potential.

So if you want to have a realistic model of a neuron, you need to add an approximation of these channels. This is easier than adding intrinsic channels, because the activation is on/off (binary) rather than analogue. So basically you just put in the parameters you want: how fast the channel opens and closes, how much current it allows through when activated, and where the channels sit on the neuron.
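As a rough illustration, here is what that binary-activation-plus-decay scheme can look like in a few lines of Python. This is my own toy sketch, not code from any particular simulator, and all the parameter values (conductances, time constants, reversal potentials) are just illustrative placeholders:

```python
import math

def syn_current(t_since_spike, g_max, tau, v_m, e_rev):
    """Current through a synaptic channel t_since_spike ms after
    transmitter release. The channel opens fully on release (binary
    activation) and its conductance decays exponentially with time
    constant tau. g_max in nS, voltages in mV -> current in pA."""
    g = g_max * math.exp(-t_since_spike / tau)
    return g * (v_m - e_rev)

# Excitatory (AMPA-like): reversal near 0 mV, so at a resting potential
# of -70 mV the current is inward (negative) and depolarizes the cell.
i_ampa = syn_current(t_since_spike=2.0, g_max=1.0, tau=5.0,
                     v_m=-70.0, e_rev=0.0)

# Inhibitory (GABA-like): chloride reversal near -70 mV, so in a
# depolarized cell the current pulls the membrane back down.
i_gaba = syn_current(t_since_spike=2.0, g_max=1.0, tau=10.0,
                     v_m=-50.0, e_rev=-70.0)

print(i_ampa)  # negative -> excitatory
print(i_gaba)  # positive -> inhibitory
```

Real simulators typically use a double-exponential (separate rise and decay) rather than the instant rise here, but the logic of "transmitter arrives, conductance turns on, conductance decays" is the same.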

Of course deciding these parameters is not always easy. A paper out this year in PLoS Computational Biology describes 4 different ways the NMDA receptor can be configured and analyzes the consequences during different stimulation patterns. 

Evans et al., (2012) Figure 3
The 4 NMDA configurations (based on the 4 different GluN2 subunits) vary in their sensitivity to a magnesium block, how fast they decay, and their maximal current. Above are their responses to the same stimulation patterns (an STDP protocol). Even though they were all receiving the same input pattern, they each show a very different response.
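To see how two of these properties interact, here is a toy Python sketch of an NMDA conductance with a voltage-dependent magnesium block, using the commonly cited Jahr and Stevens (1990) form. The subunit decay constants below are rough ballpark values for illustration, not the fitted parameters from Evans et al.:

```python
import math

def mg_block(v_m, mg_conc=1.0):
    """Fraction of NMDA conductance left unblocked by magnesium at
    membrane potential v_m (mV), per the Jahr & Stevens (1990) form."""
    return 1.0 / (1.0 + (mg_conc / 3.57) * math.exp(-0.062 * v_m))

def nmda_g(t, tau_decay, g_max=1.0):
    """NMDA conductance t ms after glutamate binding, assuming a
    simple single-exponential decay."""
    return g_max * math.exp(-t / tau_decay)

# Illustrative decay time constants: GluN2A-containing receptors decay
# much faster than GluN2B-containing ones (values are ballpark only)
taus = {"GluN2A-like": 50.0, "GluN2B-like": 300.0}
for name, tau in taus.items():
    g = nmda_g(100.0, tau) * mg_block(-65.0)
    print(name, round(g, 4))
```

Even in this stripped-down form you can see the point of the figure: with identical input timing, the slow-decaying configuration still passes substantial current 100 ms later while the fast one has mostly shut off, and depolarization relieves the magnesium block on top of that.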

So when considering adding synaptic channels to your model neuron, take the time to find out what the configuration of the receptors should actually be in the type of neuron you are building.


© TheCellularScale

If you are good at following clues, you will realize that I am very, very familiar with this paper.


ResearchBlogging.org
Evans RC, Morera-Herreras T, Cui Y, Du K, Sheehan T, Kotaleski JH, Venance L, & Blackwell KT (2012). The effects of NMDA subunit composition on calcium influx and spike timing-dependent plasticity in striatal medium spiny neurons. PLoS Computational Biology, 8 (4) PMID: 22536151