giovedì 12 aprile 2018

DataWorks Summit 2018, Berlin Edition: come and attend my talk.

AI, Machine Learning and Deep Learning are getting a lot of hype nowadays, even though most of the algorithms and models at their core have been around for a long time:

1805 Least Squares
1812 Bayes' Theorem
1913 Markov Chains
1950 Turing's Learning Machine
1957 Perceptron
1967 Nearest Neighbor
1970 Automatic Differentiation
1972 TF-IDF
1980 Neocognitron
1981 Explanation Based Learning
1982 Recurrent Neural Network
1986 Back Propagation
1989 Reinforcement Learning
1995 Random Forest Algorithm
1995 Support Vector Machines
1997 LSTM

So what has accelerated implementation and made it possible today for the theory to become reality?
There are several factors:
 - Cheaper computation: in the past, hardware was a constraining factor for AI/ML/DL. Recent advances in hardware (coupled with improved tools and software frameworks) and new computational models (in particular around GPUs) have accelerated AI/ML/DL adoption.
 - Cheaper storage: the increased amount of available data means more space is needed to store it. Advances in hardware, cost reductions and improved performance have made it possible to implement new storage systems without the typical limitations of relational databases.
 - More advanced algorithms: less expensive compute and storage enable the development and training of more advanced algorithms. As a result, DL is now solving specific problems such as image classification or fraud detection with astonishing accuracy (and more sophisticated algorithms will continue to improve the state of the art).
 - More and bigger investments: investment in AI is no longer confined to universities or research institutes, but also comes from many other entities such as tech giants, governments, startups and large enterprises across almost every industry sector.
 - Bigger data availability: AI/ML/DL need huge amounts of data to learn from. The digital transformation of society is providing tons of raw material to fuel their advances. Big data coming from diverse sources such as IoT sensors, social and mobile computing, healthcare and many other new applications can be used to train models.
But often just getting data from any possible source, in particular from the edge, requires moving mountains. Please attend my talk at the DataWorks Summit in Berlin on April 18th if you want to learn how to make edge data ingestion and analytics easier using a single tool, StreamSets Data Collector Edge: an ultralight, platform-independent, small-footprint Open Source solution written in Go for streaming data from resource-constrained sensors and personal devices (like medical equipment or smartphones) to Apache Kafka, HDFS, Elasticsearch and many other destinations.

martedì 6 marzo 2018

Java with The Best Conference

Do you believe that Java is dead? Please join the Java with the Best online conference on April 17th-18th and you will probably change your mind. Of course, if you are passionate about Java technology, you should attend as well ;) 2 days, 50 talks, 3 parallel sessions (Core Java; Big Data, Machine Learning and Cloud Development with Java; Java Frameworks, Libraries and Languages). I hope you will get a chance to attend my talk:


giovedì 15 febbraio 2018

ScalaUA Conference 2018

An interesting conference on Scala will take place in Kiev (Ukraine) on April 20th-21st: https://www.scalaua.com/
Some early bird tickets should still be available.
Here's a list of the speakers already confirmed for this event.
Have a look at this short video to get some insight from the 2017 edition.

sabato 25 novembre 2017

Quick start with Apache Livy (part 2): the REST APIs

The second post of this series focuses on how to run a Livy server instance and start playing with its REST APIs. The steps below are meant for a Linux environment (any distribution).

Prerequisites

The prerequisites to start a Livy server are the following:
  • The JAVA_HOME env variable set to a JDK/JRE 8 installation.
  • A running Spark cluster.

Starting the Livy server

Download the latest version (0.4.0-incubating at the time of writing) from the official website and extract the content of the archive (it is a ZIP file). Then set the SPARK_HOME environment variable to the Spark location on the server (for simplicity, in this post I am assuming that the cluster runs on the same machine as the Livy server; in the next post I will go through the customization of the configuration files, including the connection to a remote Spark cluster, wherever it is). By default Livy writes its logs into the $LIVY_HOME/logs location, and you need to create this directory manually. Finally you can start the server:

$LIVY_HOME/bin/livy-server

Verify that the server is running by connecting to its web UI, which uses port 8998 by default:

http://<livy_host>:8998/ui

Using the REST APIs with Python

Livy offers REST APIs to start interactive sessions and submit Spark code the same way you would in a Spark or PySpark shell. The examples in this post are in Python. Let's create an interactive session through a POST request first:

curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" localhost:8998/sessions

The kind attribute specifies which kind of language we want to use (pyspark is for Python). Other possible values for it are spark (for Scala) or sparkr (for R). If the request has been successful, the JSON response content contains the id of the open session:

{"id":0,"appId":null,"owner":null,"proxyUser":null,"state":"starting","kind":"pyspark","appInfo":{"driverLogUrl":null,"sparkUiUrl":null},"log":["stdout: ","\nstderr: "]}

You can double check through the web UI:

You can check the status of a given session any time through the REST API:

curl localhost:8998/sessions/<session_id> | python -m json.tool
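
The same check can be done from Python. The sketch below (same assumptions as before: requests installed, Livy on localhost:8998, session id 0) polls the session until it reaches the idle state, which means it is ready to accept statements:

import time
import requests

LIVY_URL = "http://localhost:8998"   # assumed Livy host and port
session_id = 0                       # the id returned when the session was created

# Poll the session until it becomes "idle" and can accept statements
while True:
    state = requests.get(LIVY_URL + "/sessions/%d" % session_id).json()["state"]
    print("Session state:", state)
    if state == "idle":
        break
    time.sleep(2)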

Let's execute a code statement:

curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"2 + 2"}'

The code attribute contains the Python code you want to execute. The response of this POST request contains the id of the statement and its execution status:

{"id":0,"code":"2 + 2","state":"waiting","output":null,"progress":0.0}

To check if a statement has been completed and get the result:

curl localhost:8998/sessions/0/statements/0

If a statement has been completed, the result of the execution is returned as part of the response:

{"id":0,"code":"2 + 2","state":"available","output":{"status":"ok","execution_count":0,"data":{"text/plain":"4"}},"progress":1.0}

This information is available through the web UI as well:



In the same way you can submit any PySpark code:

curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"sc.parallelize([1, 2, 3, 4, 5]).count()"}'


When you're done, you can close the session:

curl localhost:8998/sessions/0 -X DELETE
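
Or, from Python, under the same assumptions as in the previous sketches:

import requests

LIVY_URL = "http://localhost:8998"   # assumed Livy host and port
session_id = 0                       # id of the session to close

# Delete the interactive session and release its resources on the cluster
response = requests.delete(LIVY_URL + "/sessions/%d" % session_id)
print(response.json())   # e.g. {"msg": "deleted"}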

martedì 21 novembre 2017

Quick start with Apache Livy (part 1)

I have started evaluating Livy for potential scenarios where this technology could help, and I'd like to share some findings with others who would like to approach this interesting Open Source project. It was started by Cloudera and Microsoft and is currently being incubated by the Apache Software Foundation. The official documentation isn't comprehensive at the moment, so I hope my posts on this topic can help someone else.

Apache Livy is a service to interact with Apache Spark through a REST interface. It enables the submission of both Spark jobs and snippets of Spark code. The following features are supported:
  • Jobs can be submitted as pre-compiled JARs or snippets of code, or via the Java/Scala client API.
  • Interactive Scala, Python, and R shells.
  • Support for Spark 2.x and Spark 1.x, and for Scala 2.10 and 2.11.
  • It doesn't require any change to Spark code.
  • It allows long-running Spark contexts that can be reused across multiple Spark jobs, by multiple clients.
  • Multiple Spark contexts can be managed simultaneously: they run on the cluster instead of on the Livy server, for better fault tolerance and concurrency.
  • Possibility to share cached RDDs or DataFrames across multiple jobs and clients.
  • Secure authenticated communication.
The following image, taken from the official website, shows what happens when submitting Spark jobs/code through the Livy REST APIs:


In the second part of this series I am going to cover the details on starting a Livy server and submitting PySpark code.