Logistic Regression
Logistic regression is an iterative machine learning algorithm that seeks to find the best hyperplane separating two sets of points in a multi-dimensional feature space. It can be used, for example, to classify messages as spam or non-spam. Because the algorithm applies the same MapReduce operation repeatedly to the same dataset, it benefits greatly from caching the input data in RAM across iterations.
val points = spark.textFile(...).map(parsePoint).cache()  // parse once, keep the points in RAM
var w = Vector.random(D)  // current separating plane
for (i <- 1 to ITERATIONS) {
  // Gradient of the logistic loss over all cached points
  val gradient = points.map(p =>
    (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient  // gradient-descent step
}
println("Final separating plane: " + w)
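The snippet relies on several definitions that are not shown: the parsePoint helper, a dense Vector type with dot, +, -, and scalar multiplication (plus Vector.random), and the constants D (feature dimension) and ITERATIONS. Early Spark releases bundled a small vector utility for exactly this purpose; the sketch below is only a hypothetical, self-contained stand-in with illustrative names, assuming input lines of the form "y x1 x2 ... xD" with y in {-1, +1}.

object LRExampleHelpers {
  // Hypothetical stand-in for the dense vector type the example assumes.
  class Vector(val elements: Array[Double]) extends Serializable {
    def dot(other: Vector): Double =
      elements.zip(other.elements).map { case (a, b) => a * b }.sum
    def +(other: Vector): Vector =
      new Vector(elements.zip(other.elements).map { case (a, b) => a + b })
    def -(other: Vector): Vector =
      new Vector(elements.zip(other.elements).map { case (a, b) => a - b })
    def *(scale: Double): Vector =
      new Vector(elements.map(_ * scale))
    override def toString: String = elements.mkString("(", ", ", ")")
  }

  object Vector {
    // Random initial weights for the starting separating plane.
    def random(d: Int): Vector =
      new Vector(Array.fill(d)(scala.util.Random.nextDouble()))
  }

  // Allows the scalar * vector expression in the gradient computation.
  implicit class ScalarOps(val s: Double) extends AnyVal {
    def *(v: Vector): Vector = v * s
  }

  // A labeled example: y is the label, x the D-dimensional feature vector.
  case class DataPoint(x: Vector, y: Double)

  // Parses a space-separated line of the form "y x1 x2 ... xD".
  def parsePoint(line: String): DataPoint = {
    val tokens = line.trim.split("\\s+").map(_.toDouble)
    DataPoint(new Vector(tokens.tail), tokens.head)
  }
}

With import LRExampleHelpers._ and import scala.math.exp in scope, and D and ITERATIONS defined, the loop above compiles as written.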
Note that w gets shipped automatically to the cluster with every map call.
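If w were large, re-serializing it into every task closure could become a cost in itself. As a sketch only (not part of the original example), the current weights could instead be broadcast once per iteration and read through the broadcast handle inside the tasks, assuming spark is the same SparkContext used for textFile above:

for (i <- 1 to ITERATIONS) {
  val bw = spark.broadcast(w)  // ship the current weights once per iteration
  val gradient = points.map(p =>
    (1 / (1 + exp(-p.y * (bw.value dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient
}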
The graph below compares the performance of this Spark program against a Hadoop implementation on 30 GB of data on an 80-core cluster, showing the benefit of in-memory caching: