Data Intelligence, Business Analytics
Computer simulation is used for many analytic problems that are simply too hard to solve deterministically. I can’t directly calculate tomorrow’s closing value of a market index, for example (if I could, I wouldn’t be here), but with an elaborate enough pricing model I can pre-compute the values of the financial instruments that make up an index or portfolio over many market scenarios.
By aggregating and plotting calculated values for each market scenario, I can get a pretty good idea of future values and exposures. Monte Carlo simulation is used in financial risk for exactly this purpose. The fidelity of the result depends not only on the accuracy of the model, but on the number of self-consistent market scenarios that I have the computing capacity to generate and evaluate.
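The idea can be sketched in a few lines of R. This is a minimal toy, not a real pricing model: it assumes the index follows geometric Brownian motion, and the drift, volatility, and starting level are purely illustrative.

```r
# Toy Monte Carlo: simulate many scenarios for tomorrow's index level.
# All parameters below are hypothetical, chosen only for illustration.
set.seed(42)
n_scenarios <- 10000
S0    <- 100     # today's index level (hypothetical)
mu    <- 0.05    # assumed annual drift
sigma <- 0.20    # assumed annual volatility
t     <- 1/252   # one trading day, in years

# One simulated closing value per market scenario (geometric Brownian motion)
z  <- rnorm(n_scenarios)
S1 <- S0 * exp((mu - sigma^2 / 2) * t + sigma * sqrt(t) * z)

# Aggregate across scenarios: expected value and a 99% loss quantile,
# i.e. a one-day value-at-risk style figure
mean(S1)
quantile(S0 - S1, 0.99)
```

In a real risk system the single `rnorm()` draw would be replaced by a full repricing of every instrument in the portfolio under each scenario, which is exactly where the computational cost explodes.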
It turns out that this type of stochastic analysis is used across many fields – from engineering to computational biology to applied statistics. What these fields have in common is that the computational requirements can be simply enormous. Until Intel ships its one-terahertz CPU (not likely to happen soon), solving these problems with parallelism is the only game in town. Analysts and researchers routinely use distributed computing clusters or general-purpose GPUs to parallelize problems that would otherwise take days or weeks to run.
Single-threaded calculations in R
Most statisticians are familiar with R. R is a comprehensive statistical analysis package for managing, manipulating, and visualizing data. Because R is open-source and freely available, its usage has exploded. Thousands of packages have been developed for R in diverse fields including econometrics, medical imaging, survival analysis and more.
A dirty little secret about R is that the powerful functions that statisticians know and love are for the most part single-threaded. This means they execute on a single processor core and within the memory limits of a single computer. While commercial R offerings from Revolution Analytics are addressing some of these limitations with improved compilation techniques and libraries that take advantage of multi-core processors directly and transparently, in this age of big data many problems are simply too large to solve on a single computer without overflowing memory or tying up an analyst’s desktop or laptop for days.
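To make the bottleneck concrete, here is a minimal sketch of a typical single-threaded R workload: a bootstrap of a regression slope driven by plain `lapply()`. Each replicate runs serially on one core, so runtime grows linearly with the number of replicates (the data and replicate count here are made up for illustration).

```r
# A typical embarrassingly-parallel workload, run the serial way.
set.seed(1)
n      <- 1000
x      <- rnorm(n)
y      <- 2 * x + rnorm(n)          # true slope of 2, plus noise
dat    <- data.frame(x, y)
n_boot <- 500                        # number of bootstrap replicates

one_rep <- function(i) {
  idx <- sample(n, replace = TRUE)           # resample rows with replacement
  coef(lm(y ~ x, data = dat[idx, ]))["x"]    # refit the model, keep the slope
}

# lapply() runs every replicate one after another on a single core
slopes <- unlist(lapply(seq_len(n_boot), one_rep))
sd(slopes)   # bootstrap standard error of the slope
```

Each call to `one_rep()` is independent of the others, which is exactly the shape of problem that parallel approaches exploit.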
Solving bigger R-based problems faster with parallelism
Luckily, there are many approaches to exploiting parallelism with R. At the time of this writing there are no fewer than 80 packages in the CRAN High-Performance Computing task view specifically designed to boost performance and provide parallelism. While approaches clearly exist, practical challenges remain.
Fortunately, for those who need to run large simulations, these barriers are rapidly falling away.
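As a quick taste of what is already within reach, the `parallel` package that ships with base R can spread independent iterations across local cores. The sketch below assumes a trivial stand-in for a per-scenario pricing step; `mclapply()` uses process forking (Linux/macOS), while `makeCluster()`/`parLapply()` is the portable alternative that also works on Windows.

```r
# Spreading independent simulation chunks across local cores with the
# parallel package (part of base R). sim_chunk() is a hypothetical
# stand-in for real per-scenario pricing work.
library(parallel)

n_scenarios <- 10000
sim_chunk <- function(n) {
  mean(rnorm(n))   # placeholder for an expensive model evaluation
}

cores  <- max(1, detectCores() - 1)          # leave one core free
chunks <- rep(n_scenarios %/% cores, cores)  # split scenarios across cores

# Fork-based: simplest on Linux/macOS
results <- mclapply(chunks, sim_chunk, mc.cores = cores)

# Cluster-based: portable, works on Windows as well
cl <- makeCluster(cores)
results2 <- parLapply(cl, chunks, sim_chunk)
stopCluster(cl)
```

Because the replicates are independent, either call returns the same kind of result as a serial `lapply()`, just divided across workers.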
The combination of these factors is making large-scale parallel computing with R much more practical. In future articles I’ll examine the topics above in more depth and provide examples of how frequently used analytic workloads can be accelerated using these parallel techniques. You can learn more about the use of parallel techniques by visiting my R language cluster computing blog at http://teraproc.com/r-blog.