Tuesday, November 21, 2017

Quick start with Apache Livy (part 1)

I have started evaluating Livy for potential use cases where this technology could help, and I'd like to share some findings with others who would like to approach this interesting Open Source project. It was started by Cloudera and Microsoft and is currently being incubated by the Apache Software Foundation. The official documentation isn't comprehensive at the moment, so I hope my posts on this topic can help someone else.

Apache Livy is a service to interact with Apache Spark through a REST interface. It enables the submission of both Spark jobs and snippets of Spark code. The following features are supported:
  • Jobs can be submitted as pre-compiled jars or snippets of code, or via the Java/Scala client API.
  • Interactive Scala, Python, and R shells.
  • Support for Spark 2.x and Spark 1.x, Scala 2.10 and 2.11.
  • It doesn't require any change to Spark code.
  • It allows long-running Spark Contexts that can be used for multiple Spark jobs, by multiple clients.
  • Multiple Spark Contexts can be managed simultaneously: they run on the cluster instead of the Livy Server, in order to have good fault tolerance and concurrency.
  • Possibility to share cached RDDs or DataFrames across multiple jobs and clients.
  • Secure authenticated communication.
The following image, taken from the official website, shows what happens when submitting Spark jobs/code through the Livy REST APIs:


In the second part of this series I am going to cover the details of starting a Livy server and submitting PySpark code.
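In the meantime, here is a minimal sketch of the REST flow, assuming a Livy server already running and listening on localhost:8998 (its default port); the session and statement ids below are illustrative. First create an interactive PySpark session:

curl -X POST -H 'Content-Type: application/json' -d '{"kind": "pyspark"}' http://localhost:8998/sessions

then, once the session (id 0 here) reaches the idle state, submit a snippet of code to it:

curl -X POST -H 'Content-Type: application/json' -d '{"code": "1 + 1"}' http://localhost:8998/sessions/0/statements

and finally poll the statement for its result:

curl http://localhost:8998/sessions/0/statements/0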

Wednesday, November 8, 2017

How to Get Metrics From a Java Application Inside a Docker Container Using Telegraf

My latest article on DZone is online. There you can learn how to configure Telegraf to pull metrics from a Java application running inside a Docker container.
The Telegraf Jolokia plugin configuration presented as an example in the article is set up to collect metrics about heap memory usage, thread count and class count, but these aren't the only metrics you can collect this way. When running a container hosting the Java app with the Jolokia agent, you can get the full list of available metrics through the following GET request:
curl -X GET http://<jolokia_host>:<jolokia_port>/jolokia/list
and pick up their names and attributes to be added to the plugin configuration.
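To inspect a single MBean and its current values before adding it to the plugin configuration, you can also use Jolokia's read operation. As a sketch (assuming the standard java.lang MBeans that any JVM exposes):

curl -X GET http://<jolokia_host>:<jolokia_port>/jolokia/read/java.lang:type=Memory/HeapMemoryUsage

returns the current heap memory usage as JSON.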

Friday, November 3, 2017

Setting up a quick dev environment for Kafka, CSR and SDC

A few days ago Pat Patterson published an excellent article on DZone about Evolving Avro Schemas With Apache Kafka and StreamSets Data Collector. I recommend reading this interesting article. I followed the tutorial and today I want to share the details of how I quickly set up the environment for this purpose, in case you're interested in doing the same. I did it on a Red Hat Linux Server 7 machine (but the steps are the same for any other Linux distro), using only images available on Docker Hub.
First start a Zookeeper node (which is required by Kafka):

sudo docker run --name zookeeper --restart always -d zookeeper

and then a Kafka broker, linking the container to the Zookeeper one:

sudo docker run -d --name kafka --link zookeeper:zookeeper ches/kafka

Then start the Confluent Schema Registry (linking it to Zookeeper and Kafka):

sudo docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper --link kafka:kafka confluent/schema-registry

and the REST proxy for it:

sudo docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper --link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy

Start an instance of the StreamSets Data Collector:

sudo docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
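The Data Collector web UI should now be reachable on port 18630 of the Docker host (unless the image defaults have been changed, the login credentials are admin/admin).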

Finally, there's an optional step to make the registration and update of Avro schemas in CSR more user friendly than going through the CSR APIs: start the Open Source CSR UI provided by Landoop:

sudo docker run -d --name schema-registry-ui -p 8000:8000 -e "SCHEMAREGISTRY_URL=http://<csr_host>:8081" -e "PROXY=true" landoop/schema-registry-ui

connecting it to your CSR instance.
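As a quick sanity check that the Schema Registry is up and reachable, you can list the registered subjects (the list is expected to be empty on a fresh instance):

curl http://<csr_host>:8081/subjects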
You can create a topic in Kafka by executing the following commands:

export ZOOKEEPER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' zookeeper) 
sudo docker run --rm ches/kafka kafka-topics.sh --create --zookeeper $ZOOKEEPER_IP:2181 --replication-factor 1 --partitions 1 --topic csrtest
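You can then verify that the topic has been created by listing the topics registered in Zookeeper:

sudo docker run --rm ches/kafka kafka-topics.sh --list --zookeeper $ZOOKEEPER_IP:2181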

The environment is now ready to play with and can be used to follow Pat's tutorial.

Saturday, October 28, 2017

Exporting InfluxDB data to a CSV file

Sometimes you may need to export a sample of the data in an InfluxDB measurement to a CSV file (for example, to allow a data scientist to do some offline analysis using a tool like Jupyter, Zeppelin or Spark Notebook). You can perform this operation through the influx command line client. This is the general syntax:

sudo /usr/bin/influx -database '<database_name>' -host '<hostname>' -username '<username>'  -password '<password>' -execute 'select_statement' -format '<format>' > <file_path>/<file_name>.csv

where the format can be csv, json or column.
Example:

sudo /usr/bin/influx -database 'telegraf' -host 'localhost' -username 'admin'  -password '123456789' -execute 'select * from mem' -format 'csv' > /home/googlielmo/influxdb-export/mem-export.csv
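By default the timestamps are exported as epoch values in nanoseconds. If you prefer human-readable timestamps, a variation on the command above is to add the -precision option (here with a LIMIT clause to keep the exported sample small):

sudo /usr/bin/influx -database 'telegraf' -host 'localhost' -username 'admin' -password '123456789' -precision 'rfc3339' -execute 'select * from mem limit 100' -format 'csv' > /home/googlielmo/influxdb-export/mem-export.csv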

Sunday, October 22, 2017

All Day DevOps 2017 is coming!

The All Day DevOps 2017 online conference is coming on October 24th, 2017. Last year I had the chance to attend, and I can confirm that the overall quality of the talks was excellent. Looking at this year's agenda, it seems even more promising than 2016's. I heartily recommend it to anyone interested in DevOps.