Posts

Showing posts from June, 2016

Shipping and analysing MongoDB logs using the Streamsets Data Collector, ElasticSearch and Kibana

To show that the considerations in my last post apply to any log shipping purpose, let's now see how the same process works in a more realistic use case: shipping and analysing the logs of a MongoDB database. MongoDB log pattern. Starting from release 3.0 (I am using release 3.2 for this post), MongoDB logs follow this pattern: <timestamp> <severity> <component> [<context>] <message> where:     timestamp is in iso8601-local format.     severity is the level of each log message. It is a single-character field; possible values are F (Fatal), E (Error), W (Warning), I (Informational) and D (Debug).     component gives a functional categorization of the log message. Please refer to the specific MongoDB release you're using for the full list of possible values.     context is the specific context for a message.     message : don't think you need some
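
As a minimal illustration of that pattern, here is a sketch in Python of how a MongoDB 3.x log line could be split into its fields with a regular expression; the sample line and field handling are hypothetical and only meant to mirror the format described above:

```python
import re

# Regex matching the MongoDB 3.x log pattern:
# <timestamp> <severity> <component> [<context>] <message>
MONGO_LOG_RE = re.compile(
    r'^(?P<timestamp>\S+)\s+'      # iso8601-local timestamp
    r'(?P<severity>[FEWID])\s+'    # F, E, W, I or D
    r'(?P<component>\S+)\s+'       # functional component (e.g. NETWORK)
    r'\[(?P<context>[^\]]+)\]\s+'  # context between square brackets
    r'(?P<message>.*)$'            # free-form message
)

# Hypothetical sample line, for illustration only
sample = ('2016-06-10T10:15:32.123+0100 I NETWORK  [initandlisten] '
          'waiting for connections on port 27017')

match = MONGO_LOG_RE.match(sample)
if match:
    print(match.groupdict())
```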

Streamsets Data Collector log shipping and analysis using ElasticSearch, Kibana and... the Streamsets Data Collector

One common use case for the Streamsets Data Collector (SDC) is shipping logs to a system such as Elasticsearch for real-time analysis. Building a pipeline for this purpose in SDC is really quick and simple and doesn't require any coding. For this quick tutorial I will use the SDC logs themselves as an example: the log data will be shipped to Elasticsearch and then visualized through a Kibana dashboard. Basic knowledge of SDC, Elasticsearch and Kibana is required for a better understanding of this post. These are the releases I am referring to for each system involved in this tutorial: JDK 8, Streamsets Data Collector 1.4.0, Elasticsearch 2.3.3, Kibana 4.5.1. Elasticsearch and Kibana installation: you should have your Elasticsearch cluster installed and configured, and a Kibana instance pointing to that cluster, before going on with this tutorial. Please refer to the official documentation of these two products to complete their installation (if you do
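
The pipeline itself is built in the SDC web UI, so no code is involved there. As a quick sanity check once the pipeline is running, a sketch like the following could confirm that log records reached Elasticsearch; the index name sdc_logs and the local single-node host are assumptions to adapt to your own setup:

```python
from elasticsearch import Elasticsearch  # official Python client

# Hypothetical settings: adjust host and index name to your own setup
es = Elasticsearch(['http://localhost:9200'])
index_name = 'sdc_logs'

# Count the documents indexed so far and fetch a few ERROR-level entries
total = es.count(index=index_name)['count']
errors = es.search(
    index=index_name,
    body={'query': {'match': {'severity': 'ERROR'}}},
    size=5,
)

print('Documents indexed: %d' % total)
for hit in errors['hits']['hits']:
    print(hit['_source'])
```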

Building Rapid Ingestion Data Pipelines talk @ Hadoop User Group Ireland Meetup on June 13th

If you're involved in Big Data and are in the Dublin area on Monday, June 13th 2016, I hope you have a chance to attend the monthly meetup of the Hadoop User Group Ireland: http://www.meetup.com/hadoop-user-group-ireland/events/231165491/ The event will be hosted by Bank of Ireland at their premises in Grand Canal Square (https://goo.gl/maps/bbh1XghukfJ2). I will give the second talk, "Building a data pipeline to ingest data into Hadoop in minutes using Streamsets Data Collector". The event starts at 6 PM Irish time. I hope to meet you there.