Tuesday, August 27, 2013

Philosophy of Computational Neuroscience

Just like experimental neuroscience, computational neuroscience can be done well or poorly.

[Image: computational models look beautiful]
This post was motivated by Janet Stemwedel's recent post at Adventures in Ethics and Science about the philosophy of computational neuroscience. There seem to be three common views of the use of computational models in biology and neuroscience:

1. All models are bullshit.
2. Models rely on MATH, so of course they are right.
3. Some models are good and some are bad.

Obviously the first two are extremes, usually posited by people who don't know much about computational neuroscience, and I am clearly advocating the third view. The only problem is that it is hard to tell whether a model is good or bad unless you know a lot about it.

So here are some general principles that can help you separate the good from the bad in computational neuroscience.

1. The authors use the correct level of detail.

[Image: devil's in the details]
If you are trying to test how brain regions interact with each other, you don't need to model every single cell in each region, but you do need enough detail to differentiate the brain regions from one another. Similarly, if you are trying to test how molecules diffuse within a dendrite, you don't need to model a whole cell, but you do need enough detail to differentiate one molecule type from another. And if you are trying to test how a cell processes information, you need a model of the whole cell, as you may have learned in how to build a neuron. Basically, a model can be bad simply because it is applied to the wrong question.
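
To make the single-cell case concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. This is only an illustration of choosing a level of detail, not anyone's published model; all parameter values are generic textbook-style numbers I picked for the example, not tuned to real data.

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: just enough detail to ask how
    input current becomes spikes, with no individual ion channels.

    i_input: input current (nA) at each time step; dt is in ms.
    Returns a list of spike times in ms.
    """
    v = v_rest
    spike_times = []
    for step, i_now in enumerate(i_input):
        # Integrate the membrane equation: leak toward rest plus input drive
        v += dt * (-(v - v_rest) + r_m * i_now) / tau
        if v >= v_thresh:              # threshold crossing counts as a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane after the spike
    return spike_times

spikes = simulate_lif(np.full(10000, 2.0))  # 1 s of constant 2 nA input
print(f"{len(spikes)} spikes in 1 s")
```

This is the right grain for "how does input current become a spike train?" and obviously the wrong grain for questions about dendritic diffusion or inter-regional interactions.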

2. The authors tune and validate their model using separate data.

When you are making a model, you tune it to fit data. For example, in a computational model of a neuron you want to make sure your particular composition of channels produces the right spiking pattern. However, you also want to validate it against data. So how is tuning different from validating? Tuning is when you change the parameters of the model to make it match data. Validating is when you check whether the tuned model matches data. Good practice in computational neuroscience is to tune your model to one set of data, but to validate it against a different set.

For example, if a cell does X and Y, you can tune your model to produce effect X, and then check that the parameters that make it do X also make it do Y. Sometimes this is not possible; maybe there is not enough experimental data out there. But if it is not possible, you should at least test the robustness of your model (see point 3).
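
Here is a toy sketch of the tune/validate split in Python. The f-I curve model is just a placeholder for whatever your model is, and every number below is invented for illustration: fit the parameters to one dataset, then measure error on a second dataset the fit never saw.

```python
import numpy as np
from scipy.optimize import curve_fit

def fi_curve(current, gain, threshold):
    """Toy f-I curve: firing rate rises linearly above a current threshold."""
    return gain * np.clip(current - threshold, 0.0, None)

# Tuning data: currents (nA) and firing rates (Hz) from "experiment A" (invented)
tune_i = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
tune_rate = np.array([0.0, 12.0, 27.0, 41.0, 55.0])

# Tuning: adjust the parameters so the model matches dataset A
params, _ = curve_fit(fi_curve, tune_i, tune_rate, p0=[20.0, 0.5])

# Validation data: a separate "experiment B" (also invented) the fit never saw
val_i = np.array([0.75, 1.75, 2.25])
val_rate = np.array([5.0, 34.0, 48.0])

# Validation: check, without refitting, how well the tuned model predicts B
rmse = np.sqrt(np.mean((fi_curve(val_i, *params) - val_rate) ** 2))
print(f"validation RMSE: {rmse:.1f} Hz")
```

The key point is in the last step: the validation error is computed without touching the parameters again. If you refit on dataset B, you are tuning, not validating.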

3. The authors test the robustness of their model.

[Image: A robust computational model can be delicious]
One problem with computational models is that the specific set of parameters you've found by tuning the model might not be the 'right ones.' In fact, they probably aren't. There are many different sets of parameters that can make a neuron spike slowly, for example, and the chance that you hit on exactly the correct combination is very low. But that doesn't mean the model is not useful. You can still use the model to test effects that are not strongly altered by small changes in these parameters. So you need to test whether the specific effect you are studying is robust to parameter variation. If you are testing effect Q, you can increase the sodium channel density by 10%, or the network size by 20%, and see if you still get effect Q. In other words, is effect Q robust to changes in sodium channels or network size? If it is, then great! Your effect is not some weird fluke due to the exact combination of parameters that you have used.
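
A bare-bones sketch of that kind of robustness check is below. Everything here is hypothetical: run_model is a stand-in you would replace with your actual simulation, the parameter names are made up, and the criterion for "effect Q" is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

baseline = {"g_na": 120.0, "g_k": 36.0, "n_cells": 100.0}  # tuned parameters

def run_model(params):
    # Stand-in for your actual simulation: returns the measured quantity
    # behind "effect Q" (here, a toy firing rate in Hz).
    return 15.0 * (params["g_na"] / 120.0) * np.sqrt(36.0 / params["g_k"])

def shows_effect_q(rate):
    return rate > 10.0  # invented criterion for "effect Q"

failures = 0
for trial in range(50):
    # Jitter every parameter independently by up to +/-20%
    perturbed = {k: v * rng.uniform(0.8, 1.2) for k, v in baseline.items()}
    if not shows_effect_q(run_model(perturbed)):
        failures += 1

print(f"effect Q lost in {failures}/50 perturbed runs")
```

If the effect survives most or all of the perturbed runs, you can be more confident it isn't an artifact of one lucky parameter combination.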

These are the main things I try to pay attention to, but I am sure there are other important things to keep in mind when making models and reading about them. What are your thoughts?

© TheCellularScale


2 comments:

  1. This is a GREAT post. And actually I think this is relevant not only to computational neuroscience but to any numerical modelling (which is kind of what I've been doing for the last half year). Surprising how difficult these simple rules are to explain to some people.
