The Big Analytics Blog

Comments by Thomas W. Dinsmore

R Interface to Apache Spark

The team at AMPLab has announced a developer preview of SparkR, an R package that enables R users to run jobs on an Apache Spark cluster. Spark is an open source project that supports distributed in-memory computing for advanced analytics, including fast interactive queries, machine learning, streaming analytics, and graph processing. Spark works with every data format supported in Hadoop and supports YARN 2.2.

SparkR exposes the Spark API as distributed lists in R and automatically serializes the variables needed to execute a function on the cluster.
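For flavor, here is a minimal sketch of what working with a distributed list looks like, based on the examples in the developer preview; the function names (sparkR.init, parallelize, lapply on a distributed list, collect) are as shown in the preview's documentation and may change before a stable release.

    library(SparkR)

    # Connect to a local Spark instance (substitute a cluster URL for a real deployment)
    sc <- sparkR.init(master = "local")

    # Turn an ordinary R vector into a distributed list partitioned across workers
    nums <- parallelize(sc, 1:1000)

    # Apply an R function to each element; the closure and any variables it
    # references are serialized and shipped to the workers automatically
    doubled <- lapply(nums, function(x) { 2 * x })

    # Bring the result back to the driver as a local R list
    collect(doubled)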

SparkR is available now on GitHub. It requires Scala 2.10 and Spark version 0.9.0 or higher, and depends on the rJava and testthat R packages.
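Installation at this stage is from source on GitHub. The sketch below assumes the repository path used in the developer preview (amplab-extras/SparkR-pkg, which may move) and that the devtools package is installed:

    # Install the R package straight from GitHub; the repository path is
    # assumed from the developer preview and may have changed
    library(devtools)
    install_github("amplab-extras/SparkR-pkg", subdir = "pkg")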

Information

This entry was posted on January 17, 2014 in News Analysis.