
Starting a new Scala project in the Scala Eclipse IDE

A few weeks ago we needed to move to Scala in order to do some things in Spark using APIs available only for that language. To speed things up while starting to learn the language, and to minimize the impact on the existing components and the ongoing CI process, we came up with the quick-and-dirty approach described in this post. We started using the Scala IDE for Eclipse and performed the following actions to create new Scala projects:
 - Create a new Scala project (File -> New -> Scala Project).
 - Add the Maven nature to it (Configure -> Convert to Maven Project). All of our existing projects in the same area are built with Maven, and our Jenkins CI servers use Maven after any code change to build and to perform a number of other actions (unit test execution, static code analysis, code coverage and many others). That's the main reason we are not using sbt. The m2eclipse-scala and m2e plugins are bundled with the Scala IDE, so there is no need to install them.
 - Remove the reference to the maven-compiler-plugin in the project POM file and replace it with the scala-maven-plugin:
    <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.1.3</version>
        <executions>
            <execution>
                <goals>
                    <goal>compile</goal>
                    <goal>testCompile</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

 - Update the project to apply the new changes to the POM file: right-click on the project name, then click Maven -> Update Project. There is a bug in the Maven plugin that resets the project's Java library to 1.5, so after this step you need to restore the Java version you actually target; the sketch below shows one way to make that setting stick.
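   Since m2e derives the project's JRE from the Maven compiler settings, declaring the compiler level explicitly in the POM should prevent the fallback to 1.5. A minimal sketch, assuming Java 8 is the target (adjust the values to your JDK; these properties can live in the same properties block as the scala.version shown later):

       <!-- Assumption: Java 8 is the target JDK; change the values to match yours. -->
       <!-- m2e reads the Maven compiler level when updating the project, so
            Maven -> Update Project should no longer fall back to 1.5. -->
       <properties>
           <maven.compiler.source>1.8</maven.compiler.source>
           <maven.compiler.target>1.8</maven.compiler.target>
       </properties>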
 - Manually create the src/main/scala directory and replace the default source directory value in the POM file:
     <build>
         <sourceDirectory>src/main/scala</sourceDirectory>
         <plugins>
             <plugin>
                 <groupId>net.alchim31.maven</groupId>
                 ...

    Do the same for the src/test/scala directory, as in the sketch below.
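    For completeness, the two settings together could look like this (the plugins section stays as shown above):

        <build>
            <sourceDirectory>src/main/scala</sourceDirectory>
            <testSourceDirectory>src/test/scala</testSourceDirectory>
            <plugins>
                ...
            </plugins>
        </build>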
 - Remove the src source folder from the project build path (Properties -> Java Build Path -> Source tab) and add the new ones (src/main/scala and src/test/scala).
 - Add the Scala dependency to the project POM file:
      <properties>
          <scala.version>2.10.6</scala.version>
      </properties>

      <dependencies>
          <dependency>
              <groupId>org.scala-lang</groupId>
              <artifactId>scala-library</artifactId>
              <version>${scala.version}</version>
          </dependency>
      </dependencies>

- Add Spark and all of the other dependencies you need to the POM file in the usual way.
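For example, a Spark Core dependency matching the Scala version above could look like the following (the Spark version here is illustrative; pick the release you actually use, and note that the artifactId suffix must match the Scala binary version, 2.10 in our case):

      <dependency>
          <groupId>org.apache.spark</groupId>
          <!-- The _2.10 suffix must match the Scala binary version in use. -->
          <artifactId>spark-core_2.10</artifactId>
          <!-- Illustrative version only. -->
          <version>1.6.3</version>
      </dependency>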
