Sunday, September 30, 2012

Almost Creating a Fake Memory Trace

Mouse Memories (source)
Last post, we talked about the fallibility of flashbulb memories. Today we're going to discuss a new paper in which scientists claim to have created a fake memory in a mouse. 

Garner et al. (2012) use the same kind of genetic trickery that Han et al. (2009) used to erase memories. They genetically modified mice to express a foreign receptor that mice don't normally have. These kinds of receptors are called DREADDs, which stands for "Designer Receptor Exclusively Activated by a Designer Drug." A DREADD can activate a cell, inactivate it, or even kill it. (Han et al. added a receptor that killed the cells, but Garner et al. add a receptor that activates the cells.)

But here's the real genetic trickery: the DREADD is expressed only in the cells that are active at a certain time. When something happens, the cells that are active during the event will express the DREADD. So later, when the designer drug is applied, only the cells that were active during the event will respond.

Using this DREADD system, Garner et al. try to trick mice into thinking that they were shocked in one room when really they were shocked in another room. They call this 'generating a synthetic memory trace,' and this is how they do it:

Garner et al., 2012 Figure 2A
First of all, the kind of memory the authors are synthesizing is the association between a room (or context) and an electric shock. If you put a mouse in a room and then give it an electric shock, the next time it is in that same room it will 'remember' that the room is scary and will show freezing behavior. The memory is quantified simply as the percent of time the mouse spends freezing in the room.
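For concreteness, here is a minimal sketch (my own illustration in Python, not the authors' analysis code) of how a freezing score like this could be computed from a per-frame immobility classification:

# Minimal sketch (not the authors' code): freezing score as the percent of
# time the mouse is classified as immobile during a test session.

def percent_freezing(is_frozen):
    """is_frozen: one boolean per video frame, True if the mouse was immobile."""
    if not is_frozen:
        return 0.0
    return 100.0 * sum(is_frozen) / len(is_frozen)

# Example: a 10-second test scored at 30 frames/s, frozen during the last 4 s
trace = [False] * 180 + [True] * 120
print(percent_freezing(trace))  # -> 40.0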

They have two rooms, context A (Ctx A) and context B (Ctx B). First they take the mouse and put it in context A (but don't give it an electric shock). This activates a certain subset of neurons, and so the DREADD will get expressed in those neurons. Let's call them the "Context A neurons." Then they stop the creation of new DREADDs by adding doxycycline, which turns off DREADD gene expression. This makes it so that (in theory) the only cells that have DREADDs are the "Context A neurons."

Then they put the mouse in context B, but at the same time they apply the "designer drug" to activate the DREADDs.  Since the DREADDs are (supposedly) only in the "Context A neurons," the neurons that the drug activates should trick the mouse into thinking it is actually in context A, when really it is in context B. Then they apply the shock to the mouse.

To see if they have 'generated a synthetic memory trace' the authors test whether the mouse freezes in context A (where it thinks it was shocked) or context B (where it was actually shocked). 

Garner et al., 2012 Figure 2B&C

Unfortunately the authors don't find something simple.  First of all, they find that the mice with the DREADDs (the filled black circles above) almost always freeze less than the normal control mice (grey triangles), and they don't really explain why that might be. Second of all, they find that the application of the designer drug (+CNO) increases freezing for the DREADD mice in both context A and context B. 

The mouse didn't learn that context A is where it got shocked. Instead it learned that context B with the "Context A neurons" activated is where it got shocked. It's like the "Context A neurons" became part of context B.

The authors call this a 'hybrid memory trace' where the mouse learns to associate a combination of the "Context A neurons" and the actual context B environment with the shock.

So what if just adding this drug is enough to create a hybrid memory? The authors did a nice control experiment to test this. They did the exact same protocol, but put the mouse in context B every single time (never in context A). That way the neurons expressing the DREADD are the "Context B neurons" and should basically be the same set of neurons that are active anyway when the mouse is shocked in Context B. In this case, the mouse froze a lot to context B without the drug, and it froze the same amount to context B with the drug.  The drug caused no enhancement when it was activating the "Context B neurons." This is strong evidence that the hybrid memory trace has to involve the activation of a new set of neurons.

This is a really nice experimental design, but I think the authors oversold their result a little bit with the title "Generation of a Synthetic Memory Trace." They didn't create a totally fake memory; they created a hybrid memory by adding new neurons to the 'context' that the animal associated with the shock. There is no evidence that the mouse thought it was in context A, or even that context A specifically mattered at all. If they had stimulated a random (but new) set of neurons in context B and then stimulated that same random set when testing the mouse for freezing behavior, they might have seen the same results.

© TheCellularScale


ResearchBlogging.org
Garner AR, Rowland DC, Hwang SY, Baumgaertel K, Roth BL, Kentros C, & Mayford M (2012). Generation of a synthetic memory trace. Science (New York, N.Y.), 335 (6075), 1513-6 PMID: 22442487

Wednesday, September 26, 2012

You can't trust your brain: Memory

"Flashbulb" memories are those vivid memories of specific salient events.  The 'everyone remembers exactly where they were when...' sort of events.  In the USA, and depending on how old you are, you might remember the assassination of JFK, or Martin Luther King Jr. in this way. In this century, most Americans remember exactly where they were when they heard about the 9/11 attacks on the world trade center and pentagon.
"Never Forget" (source)
It's widely acknowledged these days that the brain is not really a safe place to store information. Memories of events change over time. But for a while the "flashbulb" memory was thought to be immune from the memory-altering properties of time. Think about your own memories of 9/11 or another highly meaningful event. I bet you are pretty certain about the details. I, for example, was in my second year of college and I know exactly who told me that the first tower was hit, exactly where I was standing on the quad, and exactly what class I was going to....

...or do I? 

A study in 2003 tested the consistency of flashbulb memories over time and compared the details to 'control memories' of everyday events. They specifically recorded memories from people during the day after the 9/11 attacks, and then recorded memories of the same events from subsets of those same people 1 week, 6 weeks, and 32 weeks later. They found that the flashbulb memories did have different properties when compared to control memories, but that consistency was not one of them. 
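As a rough illustration of what a consistency measure can look like (my own sketch, not Talarico and Rubin's actual scoring procedure), you could count how many canonical details of the memory (who told you, where you were, what you were doing) in a later recall still match the original day-after report:

# Rough sketch (not the study's scoring method): consistency as the number of
# canonical memory details in a later recall that match the original report.

ORIGINAL_REPORT = {"who": "roommate", "where": "quad", "activity": "walking to class"}

def consistent_details(original, later_recall):
    """Count details in later_recall that agree with the original report."""
    return sum(1 for key, value in original.items()
               if later_recall.get(key) == value)

recall_32_weeks = {"who": "roommate", "where": "dorm room", "activity": "walking to class"}
print(consistent_details(ORIGINAL_REPORT, recall_32_weeks))  # -> 2 (of 3 details)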

Talarico and Rubin 2003, Figure 1a
Talarico and Rubin show that the flashbulb memories and the everyday memories had the same time-dependent decay (that x axis is in days), demonstrating that the flashbulb memory did not have some special property that protected it from corruption. 

However, they did find that the level of confidence in the memory was higher for flashbulb memories than for everyday memories. People thought (incorrectly) that their memories of the 9/11 attacks were more accurate than their other memories. 

So again we learn the lesson that we cannot trust ourselves.

In the authors' words:
"The true 'mystery,' then, is not why flashbulb memories are so accurate for so long,... but why people are so confident in the accuracy of their flashbulb memories." Talarico and Rubin (2003)

But I think the most interesting finding in this paper was that the flashbulb memories of 9/11 were more likely to be recalled 'through one's own eyes' than the everyday memories. Everyday memories were seen 'through one's own eyes' at the beginning and at 1 week, but at 6 and 32 weeks the everyday memories were more likely to be seen 'from an outside observer perspective.' The flashbulb memories, on the other hand, were seen 'through one's own eyes' at all time points. Indeed, when I think of my own 9/11 memory, I still see it through my own eyes.

The authors don't go into why that might be or what it might mean, so we are left to wonder.

© TheCellularScale


ResearchBlogging.org
Talarico JM, & Rubin DC (2003). Confidence, not consistency, characterizes flashbulb memories. Psychological science, 14 (5), 455-61 PMID: 12930476



Saturday, September 22, 2012

LMAYQ: Eating

Eating Questions (source)
Let Me Answer Your Questions, where I answer your important questions about things tangentially related to this blog. Today they are about eating. As always, these are real true 'search terms' that The Internet directed to The Cellular Scale. 


1."What physiological mechanisms makes food smell better when you are hungry?"

I almost address this in You can't trust your receptors: Smell, where I explain how the brain can actually modulate the sensitivity of the smell receptors themselves.

The real answer is that it is not exactly known, but it might have to do with ghrelin. The hormone ghrelin is related to feeling hungry, and a receptor for ghrelin is found in the olfactory (smell) pathways. One study actually tested whether ghrelin would affect a person's sense of smell.

Tong et al. (2011) gave people an IV injection of ghrelin and then tested how 'strongly they sniffed' with a 'sniff magnitude test (SMT)'. Higher levels of ghrelin correlated with a higher 'sniff magnitude'. However, the sniffing magnitude was increased for both food and non-food smells. This means that people didn't necessarily inhale deeply because they liked the delicious smell of banana; they were just engaging in 'exploratory sniffing'. In addition, the authors had the smellers rate how pleasant each smell was, and ghrelin did not increase the pleasantness ratings.

So the actual physiological reason for food smelling better when you are hungry is still an open research question.


2. "best Madeleine recipe"

Well, this isn't exactly a question, but I am pretty sure this particular googler did not find what they wanted on my post on literature references in science. So here you go.  Though I have never made Madeleines, this one from Iamafoodblog.com looks delicious!


Earl Grey Madeleines Recipe adapted from 101 Cookbooks
yield: 7-8 large madeleines
  • 6 tablespoons butter
  • 1 egg
  • 3 tablespoons flour
  • 2.5 tablespoons sugar
  • 1/2 teaspoon loose leaf earl grey tea
  • 1/4 teaspoon vanilla
  • butter to grease madeleine pan
Preheat oven to 350 F.
Melt the butter in a small pot over medium heat. Add the tea and cool to room temperature. While the melted butter is cooling, grease the madeleine pan.
Put the egg in the bowl of an electric mixer with a whisk attachment. Whip on high speed until thick – approximately 3 minutes. The egg should double or triple in volume. Continuing to mix on high speed, slowly add the sugar in a steady stream. Whip for 2 minutes or until the mixture is thick. With a spatula, gently mix in the vanilla.
Sprinkle the flour on top of the egg batter, and gently fold in. Now fold in the butter mixture, stirring only enough to bring everything together. At this point, I like to refrigerate my batter for a bit. I find it helps with baking. Press saran wrap directly against the batter and refrigerate for at least 30 minutes.
Spoon the batter into the flutes, filling each 2/3 to 3/4 full. Bake the madeleines for 12 – 14 minutes, or until the edges of the madeleines are golden brown. Remove from oven and unmold immediately.


3. "What does a mouse eat?"

Peanut head (source)
Mice eat lots of things. If you have a pet mouse, you should feed it normal pet-store mouse food because it is a complete mouse diet.  But mice love new things, so you should give them oatmeal or peanuts or other seeds and grains as treats.

In some labs, mice and rats get to eat froot loops when they find the reward cup at the end of a maze. 


© TheCellularScale


ResearchBlogging.org
Tong J, Mannea E, Aimé P, Pfluger PT, Yi CX, Castaneda TR, Davis HW, Ren X, Pixley S, Benoit S, Julliard K, Woods SC, Horvath TL, Sleeman MM, D'Alessio D, Obici S, Frank R, & Tschöp MH (2011). Ghrelin enhances olfactory sensitivity and exploratory sniffing in rodents and humans. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31 (15), 5841-6 PMID: 21490225

Monday, September 17, 2012

How to Build a Neuron: Shortcuts

So you want to build a neuron, but don't have the time to fill and stain it, digitally reconstruct it, or even to knit one.

Knitting Neuroscience from Knit a Neuron
Well you are in luck because a lot of scientists have collected a lot of data already and some of them are even willing to openly share their work. 

While it is great that people are willing to share their data, that willingness alone is not enough to actually make the data widely accessible (or searchable for that matter). To bridge the chasm, other scientists have developed databases and repositories.  These databases and repositories store large datasets and organize them in a searchable way. 

The first shortcut to building a neuron I will discuss is the Cell Centered Database (CCDB).

Sounds a little like "self-centered," but it represents just the opposite: scientists willing to share their data with everyone.

In 2003, Martone and colleagues created the CCDB as a repository for 2D, 3D, and 4D images of cells that could be downloaded and used by researchers around the globe. There is a ton of data here: protein stains, electron microscopy, and fluorescent confocal images, just to name a few. While you could do a lot with this kind of information, I am just going to give you one example of how it can be used as a major shortcut in the process of building a neuron.

So say you want to make a model of a cerebellar Purkinje cell, but you don't have the time or lab facilities to fill and stain your own neuron. You could go to CCDB, type 'Purkinje neuron' into the search box, and download whichever 3D image stack suits your fancy.

example Purkinje neuron that I just got from CCDB

With this data you could go straight to step 2: reconstructing the neuron
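For example, a downloaded stack (often a multi-page TIFF) can be pulled into Python for a quick look before you start tracing. This is just a hedged sketch: the filename is hypothetical, and it assumes the tifffile library is installed.

# Sketch only: inspecting a downloaded 3D image stack (e.g., a multi-page TIFF).
# 'purkinje_stack.tif' is a hypothetical filename for a stack downloaded from CCDB.
import tifffile  # common library for reading TIFF stacks

stack = tifffile.imread("purkinje_stack.tif")   # array with shape (z, y, x)
print("stack dimensions (z, y, x):", stack.shape)

# A maximum-intensity projection collapses the z-axis into a single 2D image,
# a quick sanity check that the neuron is actually in the downloaded volume.
mip = stack.max(axis=0)
print("projection shape (y, x):", mip.shape, "peak intensity:", int(mip.max()))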

But what if you don't have the time to digitally reconstruct the neuron?  We have already discussed how much time reconstructing a neuron can take, so it's pretty easy to see why you would want to bypass that step too. And in fact, there is a database for that!

Halavi et al. (2008) developed NeuroMorpho.org as a repository for neural reconstructions. NeuroMorpho.org has almost 8,000 downloadable digital reconstructions of neurons, which, as they say on the website, represents over 200,000 hours of manual reconstruction time.

NeuroMorpho.org, for all your neural needs.

Similar to CCDB, Neuromorpho offers much more than just a shortcut for lazy computational modelers. It has such detailed information about each neuron that a whole project could be done simply by comparing neural characteristics of different cell classes or different species. 

But my job here is to tell you how you can use it as a shortcut to building a neuron.

Say you want to build a computational model of a CA1 hippocampal pyramidal cell, but you don't want to stain it and you don't want to reconstruct it. Well, just go to NeuroMorpho.org, click 'browse by brain region', and then click 'hippocampus'. Then look through the 1,000 hippocampal cells (organized by class) that have already been reconstructed for you...

Pyramidal cell in the Hippocampus from Neuromorpho.org
 
...and pick your favorite. 
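The reconstructions come as plain-text SWC files, where each line describes one sample point: an ID, a type code, x/y/z coordinates, a radius, and the ID of the parent point. Here is a minimal sketch of reading one into Python; the filename is hypothetical, and this is my own illustration rather than any official NeuroMorpho tool.

# Minimal sketch: parsing an SWC morphology file (the plain-text format used by
# NeuroMorpho.org). Each non-comment line is:
#   sample_id  type  x  y  z  radius  parent_id
# 'ca1_pyramidal.swc' is a hypothetical filename for a downloaded reconstruction.

def read_swc(path):
    samples = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            sid, stype, x, y, z, radius, parent = line.split()[:7]
            samples[int(sid)] = {
                "type": int(stype),                      # 1 = soma, 3 = dendrite, ...
                "xyz": (float(x), float(y), float(z)),   # position coordinates
                "radius": float(radius),
                "parent": int(parent),                   # -1 marks the root point
            }
    return samples

morphology = read_swc("ca1_pyramidal.swc")
print(len(morphology), "reconstructed points")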

Then you can jump right on through to step 3. (coming soon)


ResearchBlogging.org
Halavi M, Polavaram S, Donohue DE, Hamilton G, Hoyt J, Smith KP, & Ascoli GA (2008). NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF. Neuroinformatics, 6 (3), 241-52 PMID: 18949582

Martone ME, Tran J, Wong WW, Sargis J, Fong L, Larson S, Lamont SP, Gupta A, & Ellisman MH (2008). The cell centered database project: an update on building community resources for managing and sharing 3D imaging data. Journal of structural biology, 161 (3), 220-31 PMID: 18054501

Thursday, September 13, 2012

How high are you exactly?

Your brain might not be sure.
Spiral stairs at the Vatican (I took this picture)
In a study out last year, Hayman et al., (2011) investigate whether the classic place cells and grid cells of the rat brain also encode vertical height. 
We've discussed place cells before, so read this if you want to get back to the basics. Grid cells are a sort of extension of place cells. They are cells that fire at a set of regularly spaced locations covering an area as the animal moves around in it.
 
like this (source)
The red dots represent where the neuron fired and the black line represents the path that the animal (probably a rat) was traversing. As you can see, the neuron fires whenever the rat reaches any of the points that make up a regular grid.
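To make that kind of plot concrete, here is a hedged sketch of the underlying bookkeeping: given the tracked path and the position samples at which the cell fired, you bin the time spent and the spikes into spatial bins and divide to get a firing-rate map. This is my own illustration of the standard analysis, not the authors' code, and all the numbers are made up.

# Sketch of a standard firing-rate map: spikes per spatial bin divided by time
# spent in that bin. Illustration only; toy data, not the authors' analysis.
import numpy as np

def rate_map(x, y, spike_idx, arena_size=100.0, n_bins=20, dt=0.02):
    """x, y: tracked positions (one sample every dt seconds).
    spike_idx: indices of the position samples at which the cell fired."""
    edges = np.linspace(0.0, arena_size, n_bins + 1)
    occupancy, _, _ = np.histogram2d(x, y, bins=[edges, edges])            # samples per bin
    spikes, _, _ = np.histogram2d(x[spike_idx], y[spike_idx], bins=[edges, edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        return spikes / (occupancy * dt)   # firing rate in Hz; NaN where never visited

# Toy example: random positions in a 100 x 100 arena, with 200 random 'spikes'
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
spike_idx = rng.choice(5000, size=200, replace=False)
print("peak rate (Hz):", np.nanmax(rate_map(x, y, spike_idx)))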
But this is just the rat crawling around on a flat surface. What happens if you have the rat move vertically?  Does a vertical grid show up? Hayman et al. tested exactly that by introducing the rats to the exciting world of rock climbing.
 
While the rats were climbing around on this rat-sized rock wall, the cells that had fired in a grid pattern on a flat surface actually fired in a striped pattern on the pegboard. 

Figure 2A Hayman et al., 2011

On the left is the cell firing like a normal grid cell on a flat surface, but on the right a grid cell (not the same one) is firing in a striped pattern on the vertical climbing wall.
The authors suggest that this might be just the normal grid showing up but extending along the vertical plane. In other words, each point of the grid includes the space directly above and directly below it and basically forms a grid of columns. 

This finding could mean a number of things:
1. The brain does not encode vertical space very specifically.
2. Vertical space is encoded, just not in the hippocampus and entorhinal cortex (where place cells and grid cells reside).
3. A rat's brain doesn't encode vertical space, but maybe brains in other animals (flying animals for example) do.
In a mini review of this paper, Savelli and Knierim (2011) suggest that future experiments on flying mammals known to have grid cells (such as bats) would shed light on the third point.
Vertical grid cells in 'the flying squirrel'? (source)
I agree and I also wonder if the entorhinal cortex of humans could develop three dimensional grid cells under certain conditions.  Could people who really need to know where they are in vertical space, such as trapeze artists or gymnasts, develop a more specific sense of height?
ResearchBlogging.org
Hayman R, Verriotis MA, Jovalekic A, Fenton AA, & Jeffery KJ (2011). Anisotropic encoding of three-dimensional space by place cells and grid cells. Nature neuroscience, 14 (9), 1182-8 PMID: 21822271

Savelli F, & Knierim JJ (2011). Coming up: in search of the vertical dimension in the brain. Nature neuroscience, 14 (9), 1102-3 PMID: 21878925

Sunday, September 9, 2012

Taste cells in weird parts of your body

Everyone knows that taste and smell are intimately related, but what you might not know is that you have actual 'taste' cells in your nose (the nasal epithelium to be exact). 

Don't drink this way (source).
But before you go try to drink through your nose, read on, the story gets weirder.  These 'taste' cells express the T2R receptor which senses 'bitterness'. However, if you sniff some 'bitter' molecules into your nose, you won't feel like you are tasting bitterness because these cells don't go to the official 'taste' part of the brain.  In fact, they do something even cooler.  I'll let a previously-blogged-about author, Dr. Finger, explain:
"Since the SCCs synapse onto polymodal pain fibers in the trigeminal nerve, activation of the SCCs by bitter ligands evokes trigeminally mediated reflex changes in respiration." (Finger and Kinnamon 2011)

The SCCs are the 'solitary chemosensory cells', which are the 'taste' cells in the nose that I was talking about. And basically what Dr. Finger is saying is that when stimulated, these cells cause you pain and change the rate at which you breathe. This is probably because it is not evolutionarily healthy to have something bitter up your nose, and you might not want to breathe it in deeply. Might be poison.

If taste cells in the nose isn't weird enough, here is a diagram of all the other strange places in your body where 'taste' cells have been found:

Taste cells in the body Figure 2 (Finger and Kinnamon 2011)
So why do you need taste cells in your stomach? Well, these cells don't send signals to the taste center of the brain either, but they do release ghrelin, which is an appetite-inducing peptide. Since the taste cells in the stomach have T1R receptors, which respond to sweetness and amino acids (glutamate), this could be a signal saying 'yum, this is good stuff, keep eating'.

But why would there be taste cells in the bile duct? 
The authors of this review paper don't have that answer either:
"The composition of fluid in the bile ducts is dictated by secretions of the liver, pancreas, and gall bladder, so why is it necessary to diligently monitor the composition of biliary fluids and they move from gall bladder to intestines?" (Finger and Kinnamon 2011)
The moral of the story: Even though cells in weird parts of the body are shaped like taste cells and have taste receptors on them, they don't necessarily make you feel the feeling of taste, but they might serve other important survival functions.

© TheCellularScale

ResearchBlogging.org
Finger TE, & Kinnamon SC (2011). Taste isn't just for taste buds anymore. F1000 biology reports, 3 PMID: 21941599

Thursday, September 6, 2012

LMAYQ: Relationship Advice

We have reached the part of our program where I answer your important Google questions. As always, these are real true questions that I found in my 'keyword search terms'. You can see all of these posts and the questions they answer here.

Today's theme: Relationships

1. "Is there a difference in dopamine level between men and women?"

Great question. In my post "Neurosexism and Delusions of Gender", I present a graph from Di Liberto et al., 2012 showing that female and male mouse brains show no difference in the amount of dopamine transporter (DAT) in the striatum.

No Difference in Male and Female Dopamine Transporter

This question is particularly interesting in light of the recent reactions to Naomi Wolf's new book, "Vagina: A New Biography" (see also Here and Here). In "Vagina", Naomi Wolf explains that "Dopamine is the ultimate feminist chemical in the female brain." ... Which is a pretty ridiculous simplification of what dopamine is... in that it is completely untrue. First of all, men have dopamine too. And while there may be some differences in dopamine-positive cells in the hypothalamus (Lansing and Lonstein 2006), the hypothalamus is not the brain's main source of dopamine. Basically, the death of the substantia nigra (which feeds tons of dopamine into the striatum) results in Parkinson's disease for men just like it does for women.

Second of all, dopamine doesn't have anything to do with your worldview (except maybe to help color it rosy). Misogynists have dopamine just like feminists do, and there is no reason to think that the amounts differ between them.

Dopamine: the Love molecule... a grossly simplified necklace (source)

However, having not read the book, I am guessing that Naomi Wolf claims dopamine is a feminist chemical because it is released during sex. But guess what: it's released during sex for both males and females, and it's not ONLY released during sex, it is also released when you eat food, do cocaine, or talk about yourself. Dopamine signaling is very complex, with many receptors (at least 5 different kinds), and dopamine can literally do OPPOSITE things depending on whether it binds to D1-type receptors or D2-type receptors.


2. "Why are all women bitches?"

Look, you sound like a "Nice Guy", so I'll give you a tip: Some women do bitchy things sometimes, just like some guys do bitchy things sometimes, but your attitude that all women are bitches is probably one of the main reasons that most women are bitches to you. For a couple of reasons:

1. Self-fulfilling prophecy: If you expect to be screwed over by every woman you meet, you will see 'bitchiness' in their actions no matter what, and will probably act preemptively defensive. This will result in women acting on average more bitchy to you.

2. You are an asshole.


 3. "Do guys like cerebral women?"

Yes, guys like cerebral women. Do all guys like cerebral women? Probably not; some might be threatened by a woman who is smart. But that's their loss. Those guys probably end up asking the Internet questions like "why are all women bitches?" But really, most guys I've met do not want vapid, self-absorbed, or shallow ladies in their lives.

One of my favorite fantasy authors, Robert Jordan has a moderately related quote:
"When I was a boy, just old enough to be starting to date in a fumbling way, I complained something about girls. And my father said to me, “Would you rather hunt leopards or would you rather hunt rabbits? Which is going to be more fun?” And I decided I’d rather hunt leopards." -Robert Jordan

Guys don't want wide-eyed, easy-to-get, simple 'rabbit' women; they want a challenge. If you are a cerebral woman, I am sure that you are smart enough to know that acting stupid to get a guy is basically the worst possible thing you could do to yourself. You don't want a guy who would fall for that crap.


But the real advice I want to give you is to stop caring what guys like. Do what you like, find yourself an awesome cause and fight for it, or an awesome project and do it. Check out cool things like Girls Who Code and play some Mass Effect. If you are really worried about 'having a partner' doing the things that you love to do will put you near other people who like doing those same things. In my opinion that is the best way to start a deep relationship anyway.

For more on gender et cetera, see the XX tab

© TheCellularScale


ResearchBlogging.orgLansing SW, & Lonstein JS (2006). Tyrosine hydroxylase-synthesizing cells in the hypothalamus of prairie voles (Microtus ochrogaster): sex differences in the anteroventral periventricular preoptic area and effects of adult gonadectomy or neonatal gonadal hormones. Journal of neurobiology, 66 (3), 197-204 PMID: 16329116

Di Liberto V, Mäkelä J, Korhonen L, Olivieri M, Tselykh T, Mälkiä A, Do Thi H, Belluardo N, Lindholm D, & Mudò G (2012). Involvement of estrogen receptors in the resveratrol-mediated increase in dopamine transporter in human dopaminergic neurons and in striatum of female mice. Neuropharmacology, 62 (2), 1011-8 PMID: 22041555


Monday, September 3, 2012

The Optimism Bias in Science

"I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers"     -Daniel Kahneman

The Brain: Irrational, Positive, Deceptive
I just finished reading The Optimism Bias by Tali Sharot.  The book explains that most people have an "Optimism Bias," a tendency to over-estimate how smart, good-looking, and capable they are as well as the likelihood that good things will happen to them. 

Sharot points out that in a 1981 study (Svenson O) 93% of participants rated themselves as above the 50th percentile (i.e. 'above average') for driving ability. Other studies have shown that this "Better than Average Effect" applies to many aspects of our self-image. Think about yourself right now... do you think you are smarter than average? Better looking than average? Nicer than average? You probably do. And even though it is logically impossible for 93% of people to be above the 50% mark, you probably still think that you are actually better/smarter/nicer.

So even though you think you are smarter than most people, the reality is that most people think they are smarter than most people.

Similarly, people underestimate the likelihood that bad things will happen in their life and overestimate the likelihood that good things will happen. Ask any newly engaged couple what they think their chances of divorce are, and, if not too offended by such a rude question, they will probably rate the chance of divorce as very low or even zero. However, reality says that they actually have a 41-50% chance of divorce.

divorce cake (source)

But as Sharot claims, this optimistic skew to reality is actually beneficial. Which newly engaged couple would actually get married if they fully realized and believed that their chances of staying married were no better than the chance of flipping heads or tails on a coin? The irrational belief that we are somehow exceptional is motivating. Sharot even suggests that the optimism bias is so prevalent in our species and culture that people who realistically evaluate their situation are not the norm, and may even be clinically depressed. 

While The Optimism Bias has a great premise and recounts some exciting research, I thought the book in general was way too long.  Some very simple concepts (like that people have an optimism bias) were repeated over and over and over, and some (interesting) concepts were introduced that had pretty much nothing to do with optimism (like that memories are unreliable). 

The book didn't really teach me much about how the brain works, but it did set me thinking about how a strong optimism bias is an essential trait in academia. As the Kahneman quote above states, most scientists face critique after critique and failure after failure. Successes are few and far between, and the same sense of realism that would prevent many a marriage would also prevent a potential scientist from entering a Ph.D. program. Who would even apply to graduate school if they fully understood and believed the dismal statistics about finishing Ph.D. programs and the subsequent tenure-track job search?

We have to believe that we are special, that our work is crucial, and that our contributions are significant. No scientist will succeed if they get their peer-reviewed paper back from a journal and immediately think: 'yep, the third reviewer is correct, this work is flawed and has little impact, I should quit and become a cab driver.' A near-delusional sense of significance and an "it's not me, it's them" attitude are required to stand by your ideas and abilities in the face of these kinds of criticisms.


© TheCellularScale


ResearchBlogging.org
Sharot T (2011). The optimism bias. Current biology : CB, 21 (23) PMID: 22153158