Thursday, February 15, 2018

ScalaUA Conference 2018

An interesting conference on Scala will take place in Kiev (Ukraine) on April 20th-21st: https://www.scalaua.com/
Some early bird tickets should still be available.
Here's a list of the speakers already confirmed for this event.
Have a look at this short video to get some insight from the 2017 edition.

Saturday, November 25, 2017

Quick start with Apache Livy (part 2): the REST APIs

The second post of this series focuses on how to run a Livy server instance and start playing with its REST APIs. The steps below are meant for a Linux environment (any distribution).

Prerequisites

The prerequisites to start a Livy server are the following:
  • The JAVA_HOME env variable set to a JDK/JRE 8 installation.
  • A running Spark cluster.

Starting the Livy server

Download the latest version (0.4.0-incubating at the time of writing) from the official website and extract the archive content (it is a ZIP file). Then set the SPARK_HOME env variable to the Spark installation location on the server (for simplicity, in this post I assume that the cluster is on the same machine as the Livy server, but in the next post I will go through the customization of the configuration files, including the connection to a remote Spark cluster, wherever it is). By default Livy writes its logs into the $LIVY_HOME/logs location, which you need to create manually. Finally, you can start the server:

$LIVY_HOME/bin/livy-server

Verify that the server is running by connecting to its web UI, which uses port 8998 by default:

http://<livy_host>:8998/ui

Using the REST APIs with Python

Livy offers REST APIs to start interactive sessions and submit Spark code the same way you can with a Spark shell or a PySpark shell. The examples in this post are in Python. Let's first create an interactive session through a POST request:

curl -X POST --data '{"kind": "pyspark"}' -H "Content-Type: application/json" localhost:8998/sessions

The kind attribute specifies which language we want to use (pyspark is for Python). Other possible values are spark (for Scala) and sparkr (for R). If the request is successful, the JSON response contains the id of the newly opened session:

{"id":0,"appId":null,"owner":null,"proxyUser":null,"state":"starting","kind":"pyspark","appInfo":{"driverLogUrl":null,"sparkUiUrl":null},"log":["stdout: ","\nstderr: "]}

You can double-check through the web UI:

You can check the status of a given session any time through the REST API:

curl localhost:8998/sessions/<session_id> | python -m json.tool
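
A session starts in the starting state and becomes idle when it is ready to accept statements. Here is a minimal sketch of how you could wait for that from Python (again assuming localhost, the default port and the requests library):

import time
import requests

livy_url = "http://localhost:8998"
session_id = 0  # the id returned when the session was created

# Poll GET /sessions/<session_id> until the session is ready to accept statements
while requests.get("{}/sessions/{}".format(livy_url, session_id)).json()["state"] != "idle":
    time.sleep(1)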

Let's execute a code statement:

curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"2 + 2"}'

The code attribute contains the Python code you want to execute. The response of this POST request contains the id of the statement and its execution status:

{"id":0,"code":"2 + 2","state":"waiting","output":null,"progress":0.0}

To check if a statement has been completed and get the result:

curl localhost:8998/sessions/0/statements/0

If a statement has been completed, the result of the execution is returned as part of the response:

{"id":0,"code":"2 + 2","state":"available","output":{"status":"ok","execution_count":0,"data":{"text/plain":"4"}},"progress":1.0}

This information is available through the web UI as well:

In the same way you can submit any PySpark code:

curl localhost:8998/sessions/0/statements -X POST -H 'Content-Type: application/json' -d '{"code":"sc.parallelize([1, 2, 3, 4, 5]).count()"}'


When you're done, you can close the session:

curl localhost:8998/sessions/0 -X DELETE

Tuesday, November 21, 2017

Quick start with Apache Livy (part 1)

I have started evaluating Livy for potential scenarios where this technology could help, and I'd like to share some findings with others who would like to approach this interesting open source project. It was started by Cloudera and Microsoft and is currently being incubated by the Apache Software Foundation. The official documentation isn't comprehensive at the moment, so I hope my posts on this topic can help someone else.

Apache Livy is a service to interact with Apache Spark through a REST interface. It enables the submission of both Spark jobs and snippets of Spark code. The following features are supported:
  • Jobs can be submitted as pre-compiled JARs or snippets of code, or via the Java/Scala client API (see the short sketch after this list).
  • Interactive Scala, Python, and R shells.
  • Support for Spark 2.x and Spark 1.x, Scala 2.10 and 2.11.
  • It doesn't require any change to Spark code.
  • It allows long-running Spark contexts that can be used for multiple Spark jobs, by multiple clients.
  • Multiple Spark contexts can be managed simultaneously: they run on the cluster instead of on the Livy server, in order to have good fault tolerance and concurrency.
  • Cached RDDs or DataFrames can be shared across multiple jobs and clients.
  • Secure authenticated communication.
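
As a quick taste of the first point in the list, here is a minimal sketch of how a pre-compiled JAR could be submitted as a batch job through the REST API using Python and the requests library (the host, the JAR location and the class name below are just placeholders; the REST APIs are covered in more detail in the second part of this series):

import requests

# POST /batches submits a pre-compiled Spark application as a batch job
payload = {
    "file": "hdfs:///path/to/my-spark-app.jar",  # placeholder JAR location
    "className": "com.example.MySparkApp"        # placeholder main class
}
print(requests.post("http://<livy_host>:8998/batches", json=payload).json())
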
The following image, taken from the official website, shows what happens when submitting Spark jobs/code through the Livy REST APIs:


In the second part of this series I am going to cover the details of starting a Livy server and submitting PySpark code.

Wednesday, November 8, 2017

How to Get Metrics From a Java Application Inside a Docker Container Using Telegraf

My latest article on DZone is online. There you can learn how to configure Telegraf to pull metrics from a Java application running inside a Docker container.
The Telegraf Jolokia plugin configuration presented as an example in the article is set up to collect metrics about heap memory usage, thread count and class count, but these aren't the only metrics you can collect this way. When running a container hosting the Java app with the Jolokia agent, you can get the full list of available metrics through the following GET request:
curl -X GET http://<jolokia_host>:<jolokia_port>/jolokia/list
and pick the names and attributes to add to the plugin configuration.
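
If you prefer to explore that list programmatically, here is a minimal Python sketch using the requests library (the host and port are the same placeholders as above); it assumes the Jolokia list response keeps the available MBeans under its "value" key, grouped by domain:

import requests

# Each entry of "value" is an MBean domain mapping MBean names to their attributes and operations
listing = requests.get("http://<jolokia_host>:<jolokia_port>/jolokia/list").json()
for domain, mbeans in listing["value"].items():
    for mbean in mbeans:
        print("{}:{}".format(domain, mbean))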

Friday, November 3, 2017

Setting up a quick dev environment for Kafka, CSR and SDC

A few days ago Pat Patterson published an excellent article on DZone about Evolving Avro Schemas With Apache Kafka and StreamSets Data Collector, which I recommend reading. I followed this tutorial and today I want to share the details of how I quickly set up the environment for this purpose, in case you're interested in doing the same. I did it on a Red Hat Linux Server 7 machine (but the steps are the same for any other Linux distro), using only images available on Docker Hub.
First start a Zookeeper node (which is required by Kafka):

sudo docker run --name zookeeper --restart always -d zookeeper

and then a Kafka broker, linking its container to the Zookeeper one:

sudo docker run -d --name kafka --link zookeeper:zookeeper ches/kafka

Then start the Confluent Schema Registry (linking it to Zookeeper and Kafka):

sudo docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper --link kafka:kafka confluent/schema-registry

and the REST proxy for it:

sudo docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper --link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy

Start an instance of the Streamsets Data Collector:

sudo docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector

Finally, as an optional step to make the registration and update of Avro schemas in CSR more user friendly (compared to using the CSR APIs directly), you can start the open source CSR UI provided by Landoop:

sudo docker run -d --name schema-registry-ui -p 8000:8000 -e "SCHEMAREGISTRY_URL=http://<csr_host>:8081" -e "PROXY=true" landoop/schema-registry-ui

connecting it to your CSR instance.
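
A quick way to verify that the Schema Registry is reachable before pointing the UI at it is to call its /subjects endpoint, which lists the registered subjects (an empty list on a fresh instance). A minimal sketch, assuming Python with the requests library and the same <csr_host> placeholder as above:

import requests

# GET /subjects returns the subjects currently registered in the Schema Registry
print(requests.get("http://<csr_host>:8081/subjects").json())
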
You can create a topic in Kafka by executing the following commands:

export ZOOKEEPER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' zookeeper) 
sudo docker run --rm ches/kafka kafka-topics.sh --create --zookeeper $ZOOKEEPER_IP:2181 --replication-factor 1 --partitions 1 --topic csrtest
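
To double-check that the new topic is visible, you could also ask the REST proxy for the list of topics. A minimal sketch, assuming the proxy is reachable on localhost at the port mapped above and that Python with the requests library is available:

import requests

# GET /topics on the Confluent REST proxy returns the names of the available topics
topics = requests.get("http://localhost:8082/topics").json()
print("csrtest" in topics)  # expected to be True once the topic has been created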

The environment is now ready to play with and to follow Pat's tutorial.