Statistical Computing

The origins of probability and statistics are deeply rooted in the study of gambling during the sixteenth century and in the need of rulers to monitor their people's wealth for more effective taxation. For several centuries advances were made mainly analytically, through mathematics and theory. The reason is probably related to the paucity of data available at the time and, even more, to the very long time needed to carry out analytical and numerical operations by hand. The advent of computers and the omnipresent use of the internet are rapidly shifting the focus to data and to its fast processing and analysis.

For this reason, computer programs, packages and even large platforms that promise to carry out data analysis and processing in a systematic, powerful and robust way have recently come to prominence. In addition, the new statistical software is becoming very effective at communicating and translating the meaning of data through graphical displays. Among recently developed platforms, the R project is firmly establishing itself as one of the most widely used and quickly evolving pieces of statistical software. I will not spend time here explaining why people whose work depends on data should use R; instead, I will redirect you to comments by other analysts (see, for example, [1], [2], [3], [4], [5]). On this page I plan to include material related to the use of the R software and, more generally, to computational tools for developing statistical calculations.
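To give a flavour of what a short R session looks like, here is a minimal sketch: it summarises the built-in "cars" data set (speed versus stopping distance), fits a simple linear regression, and communicates the result with a plot. It uses only base R, so it should run in any standard R installation; it is an illustration of the kind of analysis described above, not part of the course material.

  # Numerical summary of a built-in data set
  data(cars)
  summary(cars)

  # Fit a simple linear regression of stopping distance on speed
  fit <- lm(dist ~ speed, data = cars)
  summary(fit)

  # Communicate the result graphically
  plot(cars, main = "Stopping distance versus speed",
       xlab = "Speed (mph)", ylab = "Stopping distance (ft)")
  abline(fit, col = "red")   # overlay the fitted line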

A relatively short introduction to R (with examples and exercises)
I prepared this material for a two-day "intensive" course for scientists at Diamond Light Source Ltd, Oxfordshire, in 2013. You are welcome to use the material for your own purposes (and please acknowledge my contribution if it proves useful to you).

REFERENCES

  1. R-bloggers
  2. Inside R
  3. The New York Times
  4. Why biologists should use R
  5. How Google uses R

(back to James Foadi's home page)