Being able to appropriately address uncertainty and error is fundamental to the pursuit of science. Without it, theory and results would never match up: theory usually involves a level of abstraction that simplifies the problem, while observational results are never perfect and carry all sorts of uncertainty. Recently I’ve been trying to quantify and approximate the error terms for the data that powers my model.

The Philosophy

Most people imagine that uncertainty metrics such as standard deviations or confidence intervals are the final pieces of the puzzle, worked out after all the ‘real’ science is over, but the truth is a bit less orderly. Statistics like these must be considered before the experiment or model is even run. No matter what experiment or field you’re working in, the first step is always the same: figure out what outcomes are possible and predict what you expect to see. Now I’m not saying to prejudge the outcome, but rather to have an idea of what range of outcomes you are willing to accept.

You should expect very different results from mixing water and table salt than from mixing sodium metal and water, and that’s a good thing. Understanding the uncertainty, limitations, and goals of an experiment is a critical first step for successful science.

The Work

While most of the work and analysis is dreadfully boring, I want to include a sampling here anyway. In terms of tools, I use a combination of Excel, R, and scrap paper for all my error analysis. One of the best ways I’ve found to rigorously and properly run an analysis for a small section of work is to write an R script that walks you through the analysis. By doing so, I can not only easily cite where a particular number came from, but also remember exactly how I did it.
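To give a flavor of what such a walkthrough script can look like, here is a minimal sketch in R. The measurements are invented for illustration, and the notebook reference in the comment is hypothetical; the point is that each number is computed in the open, with a comment recording where it came from:

```r
# error_analysis.R -- a walkthrough-style script (illustrative values only).
# Each step notes where its inputs come from, so the analysis is reproducible.

# Hypothetical repeated measurements of a single quantity
# (e.g., transcribed from a lab notebook).
measurements <- c(9.8, 10.1, 9.9, 10.3, 10.0)

n   <- length(measurements)   # sample size
m   <- mean(measurements)     # best estimate of the true value
s   <- sd(measurements)       # sample standard deviation
sem <- s / sqrt(n)            # standard error of the mean

# 95% confidence interval using the t distribution (appropriate for small n).
t_crit <- qt(0.975, df = n - 1)
ci <- c(m - t_crit * sem, m + t_crit * sem)

cat(sprintf("mean = %.3f, SE = %.3f, 95%% CI = [%.3f, %.3f]\n",
            m, sem, ci[1], ci[2]))
```

Reading the script top to bottom reproduces the whole calculation, which is exactly what makes it easy to trace any quoted number back to its source.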

Not only does it provide me with excellent notes to refer to, but I can make nice images to go along with it:

[Screenshot of an R-generated plot, 2015-04-27]