
Accelerating Analysis with Parallelism

For many analytic problems, computer simulation is used because the problem is simply too hard to solve deterministically. I can’t directly calculate tomorrow’s closing value of a market index, for example (if I could, I wouldn’t be here), but with an elaborate enough pricing model I can pre-compute the values of the financial instruments that comprise an index or portfolio over many market scenarios.

By aggregating and plotting the calculated values for each market scenario, I can get a pretty good idea of future values and exposures. Monte Carlo simulation is used in financial risk for exactly this purpose. The fidelity of the result depends not only on the accuracy of the model, but also on the number of self-consistent market scenarios that I have the computing capacity to generate and evaluate.
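
To make the idea concrete, here is a minimal sketch in R, assuming a hypothetical three-asset portfolio whose prices follow geometric Brownian motion; the drifts, volatilities and weights are made-up numbers, and a real risk engine would use far richer pricing models. It generates many market scenarios, computes the terminal portfolio value in each, and summarizes the resulting distribution.

  # Toy Monte Carlo: terminal value of a 3-asset portfolio under GBM.
  # Drifts, volatilities and weights are illustrative assumptions only.
  set.seed(42)
  n_scenarios <- 10000                  # market scenarios to generate
  n_days      <- 252                    # one trading year of daily steps
  mu    <- c(0.05, 0.07, 0.03)          # annual drift per asset
  sigma <- c(0.20, 0.30, 0.10)          # annual volatility per asset
  w     <- c(0.4, 0.4, 0.2)             # portfolio weights
  dt    <- 1 / n_days

  simulate_scenario <- function(i) {
    # daily log-returns per asset, compounded to a terminal price relative
    z       <- matrix(rnorm(n_days * 3), nrow = n_days)
    drift   <- matrix((mu - 0.5 * sigma^2) * dt, n_days, 3, byrow = TRUE)
    log_ret <- sweep(z, 2, sigma * sqrt(dt), "*") + drift
    sum(w * exp(colSums(log_ret)))      # terminal portfolio value (start = 1)
  }

  values <- sapply(seq_len(n_scenarios), simulate_scenario)
  summary(values)
  quantile(values, c(0.01, 0.05))       # rough tail exposure (1% / 5% levels)
  hist(values, breaks = 50, main = "Simulated terminal portfolio values")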

It turns out that this type of stochastic analysis is used across many fields – from engineering to computational biology to applied statistics. What these fields have in common is that the computational requirements can be simply enormous. Until Intel ships a one-terahertz CPU (not likely to happen soon), solving these problems with parallelism is the only game in town. Analysts and researchers routinely use distributed computing clusters or general-purpose GPUs to parallelize problems that would otherwise take days or weeks to run.

Single-threaded calculations in R

Most statisticians are familiar with R, a comprehensive statistical analysis environment for managing, manipulating and visualizing data. Because R is open source and freely available, its usage has exploded, and thousands of packages have been developed for it in fields as diverse as econometrics, medical imaging and survival analysis.

A dirty little secret about R is that the powerful functions statisticians know and love are for the most part single-threaded. This means they can execute only on a single processor core and within the memory limits of a single computer. While commercial R offerings from Revolution Analytics address some of these limitations with improved compilation techniques and libraries that take advantage of multi-core processors directly and transparently, in this age of big data many problems are simply too large to solve on a single computer without overflowing memory or tying up an analyst’s desktop or laptop for days.
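
A quick way to see the single-threaded behaviour on your own machine: the snippet below runs a CPU-bound bootstrap through base R’s lapply() (the specific task is just a stand-in for any expensive per-iteration computation). Watching a process monitor while it runs will show exactly one core busy, however many cores the machine has.

  # A CPU-bound loop written with base R: it occupies exactly one core.
  set.seed(1)
  x <- rnorm(1e5)

  boot_median <- function(i) {
    median(sample(x, length(x), replace = TRUE))   # one bootstrap replicate
  }

  system.time(
    res_serial <- lapply(1:500, boot_median)       # runs sequentially
  )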

Solving bigger R-based problems faster with parallelism

Luckily, there are many approaches to exploiting parallelism with R. At the time of this writing, there are no fewer than 80 packages in the CRAN High-Performance Computing task view specifically designed to boost performance and provide parallelism. While the approaches clearly exist, practical challenges remain:

  • Most algorithms need to be re-expressed so that they can readily execute across multiple computers in parallel. Doing so takes effort and testing, and not everyone has easy access to a high-performance computing cluster, so developing and validating parallel algorithms has historically been a barrier (a minimal re-expression is sketched after this list).
  • The same goes for general-purpose GPUs and specialty co-processors. While GPUs and software frameworks like NVIDIA’s CUDA can dramatically boost performance, these computing assets are expensive, and they are not hardware that most analysts have on their desktops.
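
To make the re-expression point concrete, here is a minimal sketch using the parallel package that ships with base R. It rewrites the serial bootstrap loop shown earlier as parLapply() over a small cluster of local worker processes; the same pattern extends to multiple machines by passing hostnames to makeCluster(), assuming you have access to those machines with R installed. The worker count and variable names are illustrative.

  library(parallel)

  set.seed(1)
  x <- rnorm(1e5)
  boot_median <- function(i) median(sample(x, length(x), replace = TRUE))

  # A local cluster of 4 worker processes; pass a vector of hostnames,
  # e.g. makeCluster(c("node1", "node2")), to span machines you can reach.
  cl <- makeCluster(4)
  clusterSetRNGStream(cl, 123)              # reproducible parallel RNG streams
  clusterExport(cl, c("x", "boot_median"))  # ship the data and function to workers

  system.time(
    res_parallel <- parLapply(cl, 1:500, boot_median)
  )

  stopCluster(cl)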

Fortunately, for those who need to run large simulations, these barriers are rapidly falling away.

  • Multi-core parallelism is starting to come for free from the programmer’s perspective, delivering dramatic performance gains with little or no work on the part of the developer (see the sketch after this list).
  • With the emergence of cloud-based R-as-a-service offerings from Teraproc and similar services from Cycle Computing and others, the cost and complexity of deploying multi-node clusters is essentially gone. Developers can deploy clusters of a few nodes in minutes (and essentially for free) to prove techniques that can then be applied to larger clusters in production.
  • Finally, spot pricing on Amazon EC2 gives access to otherwise expensive computing resources such as GPUs at a fraction of the usual price, so the cost of access is falling dramatically here as well.
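
As one low-effort illustration of the first point above: on Linux or macOS, swapping lapply() for parallel::mclapply() spreads the same bootstrap loop across all available cores with essentially a one-line change (the fork-based mclapply() cannot run in parallel on Windows, so the sketch falls back to one core there). Transparent multi-core math libraries and improved compilers are another route to the same gain.

  library(parallel)

  set.seed(1)
  x <- rnorm(1e5)
  boot_median <- function(i) median(sample(x, length(x), replace = TRUE))

  # Use every core the machine reports; fall back to 1 core on Windows,
  # where fork-based mclapply() cannot run in parallel.
  n_cores <- if (.Platform$OS.type == "windows") 1L else detectCores()

  system.time(
    res_mc <- mclapply(1:500, boot_median, mc.cores = n_cores)
  )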

The combination of these factors is making large-scale parallel computing with R much more practical. In future articles I’ll examine the topics above in more depth and provide examples of how frequently used analytic workloads can be accelerated using these parallel techniques. You can learn more by visiting my R language cluster computing blog at http://teraproc.com/r-blog.

 
