Unit testing Spark applications in Scala (Part 2): Intro to spark-testing-base

In the first part of this series we became familiar with ScalaTest. When it comes to unit testing Scala Spark applications, ScalaTest alone isn't enough: you also need to add spark-testing-base to the roster. It is an Open Source framework which provides base classes for the main Spark abstractions, such as SparkContext, RDD, DataFrame, Dataset and Streaming. Let's start exploring the facilities provided by this framework and how it works alongside ScalaTest, with some simple examples. Let's consider the following Scala word count example found on the web:

import org.apache.spark.{SparkConf, SparkContext}

object SparkWordCount { 
  def main(args: Array[String]) { 
   val inputFile = args(0) 
   val outputFile = args(1) 
   val conf = new SparkConf().setAppName("SparkWordCount") 
   // Create a Scala Spark Context. 
   val sc = new SparkContext(conf) 
   // Load our input data. 
   val input = sc.textFile(inputFile) 
   // Split up into words. 
   val words = input.flatMap(line => line.split(" ")) 
   // Transform into word and count. 
   val counts = words.map(word => (word, 1)).reduceByKey{case (x, y) => x + y} 
   // Save the word count back out to a text file, causing evaluation.
   counts.saveAsTextFile(outputFile)
  } 
}

This class expects a text file as input: it first splits all of the words the file contains and then counts the occurrences of each one.
Add the ScalaTest dependency to sbt or Maven as described in the first part of this series. Then add the spark-testing-base dependency to sbt:
libraryDependencies += "com.holdenkarau" % "spark-testing-base_2.11" % "2.1.0_0.6.0"
or Maven:

<dependency>
    <groupId>com.holdenkarau</groupId>
    <artifactId>spark-testing-base_2.11</artifactId>
    <version>2.1.0_0.6.0</version>
</dependency>
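
A note on the build setup: the first part of the spark-testing-base version (2.1.0 in this case) has to match the Spark version your project is built against. A minimal sbt sketch could look like the following (the ScalaTest and Spark versions are only illustrative, pick the ones matching your project):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "com.holdenkarau" %% "spark-testing-base" % "2.1.0_0.6.0" % "test"
)

// spark-testing-base starts local Spark contexts inside the tests, so disabling
// parallel test execution is usually recommended:
parallelExecution in Test := false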

The code above is definitely not the best way to implement this class. A better way (in terms of readability and maintainability) is probably the following, which implements two separate methods (one for the mapping and one for the reduction by key):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object SparkWordCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkWordCount")
    // Create a Scala Spark Context.
    val sc = new SparkContext(conf)

    // Load the input file, count the words and save the result, causing evaluation.
    val file = sc.textFile(args(0))
    countWordsInFile(file).saveAsTextFile(args(1))
  }

  // Composition of the two transformations below.
  def countWordsInFile = splitFile _ andThen countWords _

  // Split each line into words.
  def splitFile(wordsByLine: RDD[String]): RDD[String] = {
    wordsByLine.flatMap(line => line.split(" "))
  }

  // Pair each word with 1 and sum the counts by key.
  def countWords(words: RDD[String]): RDD[(String, Int)] = {
    words.map(word => (word, 1)).reduceByKey(_ + _)
  }
}
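
As a quick sanity check outside of any test framework, the composed function can also be exercised from a spark-shell. The snippet below is only an illustrative sketch, assuming the SparkWordCount object above has been made available to the shell (sc is the shell's own SparkContext):

// Hypothetical spark-shell session
val lines = sc.parallelize(Seq("Line One", "Line Two"))
SparkWordCount.countWordsInFile(lines).collect()
// Expected content (order may vary): Array((Line,2), (One,1), (Two,1))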


Let's now implement a test class for it, combining ScalaTest and spark-testing-base. Let's choose a ScalaTest style (for example FunSuite) and mix in the SharedSparkContext trait:

class SparkWordCountScalaTestingBaseSuite extends FunSuite with SharedSparkContext
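
For completeness, these are the imports the declaration above assumes (SharedSparkContext comes from the com.holdenkarau.spark.testing package):

import com.holdenkarau.spark.testing.SharedSparkContext
import org.apache.spark.rdd.RDD
import org.scalatest.FunSuite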

SharedSparkContext initializes (before the tests start) a SparkContext instance that is shared across all of the tests in the class and stops it once they have all run, so there is no need to add extra code to initialize and stop it. This way the development of a test class focuses only on the body of the test methods. For the class under test above, the tests are:

val fileLines = Array("Line One", "Line Two", "Line Three", "Line Four")

test("splitFile should split the file into words"){
    val inputRDD: RDD[String] = sc.parallelize[String](fileLines)
    val wordsRDD = SparkWordCount.splitFile(inputRDD)
    assert(wordsRDD.count() == 8)
}


test("countWordsInFile should count words") {
    val inputRDD: RDD[String] = sc.parallelize[String](fileLines)
    val results = SparkWordCount.countWordsInFile(inputRDD).collect
    assert(results.contains(("Line", 4)))
}


sc is the variable name of the built-in SparkContext provided by SharedSparkContext.
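With the dependencies in place, the suite runs like any other ScalaTest suite. Assuming a standard project layout, it can be executed with:

sbt test

or, for a Maven build:

mvn test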
In the next post(s) we will walk through all of the other base classes provided by the spark-testing-base framework, starting from those related to RDDs.
