Setting up the JDBC Producer destination in the Streamsets Data Collector.

One of the most frequently used destinations in SDC is the JDBC Producer. It allows writing data to a relational database table through a JDBC connection. The SDC release I am referring to in this post is 1.5.1.2, running on JVM 8.

Installing a specific JDBC driver.

In order to insert data into a database table, SDC requires the specific JDBC driver for the database you are going to use. This applies to the JDBC Consumer origin as well.
The first time you add a JDBC Producer destination to a pipeline you need to create a local directory on the SDC host machine, external to the SDC installation directory. Example: /home/sdc-user/sdc-extras
Then create the following sub-directory structure for all of the JDBC drivers:
/home/sdc-user/sdc-extras/streamsets-datacollector-jdbc-lib/lib/
Finally, copy the JDBC driver into that folder.
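For example, assuming a PostgreSQL database and a driver jar named postgresql-9.4.1212.jar (a hypothetical file name; use the one matching the driver version you actually downloaded), the commands would look like this:

mkdir -p /home/sdc-user/sdc-extras/streamsets-datacollector-jdbc-lib/lib/
cp postgresql-9.4.1212.jar /home/sdc-user/sdc-extras/streamsets-datacollector-jdbc-lib/lib/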
Now it is time to make SDC aware of this directory. First, you have to add the STREAMSETS_LIBRARIES_EXTRA_DIR environment variable and point it to the JDBC drivers directory. You need to add the following line to the $SDC_DIST/libexec/sdc-env.sh or $SDC_DIST/libexec/sdcd-env.sh file (the former is used when starting SDC manually, the latter when running it as a service):

export STREAMSETS_LIBRARIES_EXTRA_DIR="/home/sdc-user/sdc-extras/"
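If you prefer to do it from a shell, a quick way to append the line and double-check it (assuming a manual start, so sdc-env.sh; use sdcd-env.sh for a service install) is:

echo 'export STREAMSETS_LIBRARIES_EXTRA_DIR="/home/sdc-user/sdc-extras/"' >> $SDC_DIST/libexec/sdc-env.sh
grep STREAMSETS_LIBRARIES_EXTRA_DIR $SDC_DIST/libexec/sdc-env.sh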

By default, SDC comes with the Java Security Manager enabled. When it is enabled, you also have to update the Data Collector security policy to include the driver directory: edit the $SDC_CONF/sdc-security.policy file and add the following lines:

// user-defined additional driver directory
grant codebase "file:///home/sdc-user/sdc-extras/-" {
  permission java.security.AllPermission;
};


Restart SDC in order to apply the configuration changes.
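The restart command depends on how SDC was installed. Two common cases (a tarball install started manually, or a package install running as a service) look roughly like this:

# Tarball install: stop the running Data Collector process, then start it again
$SDC_DIST/bin/streamsets dc

# Package install running as a service
sudo service sdc restart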

JDBC Producer configuration.

Here is the standard configuration for a JDBC Producer destination:
Tab General:
  • Name: the name of the stage.
  • On Record Error: the action to take for records that cause errors (discard the record, send it to the pipeline's error handling, or stop the pipeline).
Tab JDBC:
  • JDBC Connection String: the connection string to the database. It depends on the database you are using. Example: for PostgreSQL it has the following syntax:
              jdbc:postgresql://<db_host>:<port>/<db_name>
       
          Please check the official documentation for the specific JDBC driver you are going to use for the correct connection string syntax (a quick way to sanity-check the connection parameters is sketched after this list).
  • Table Name: the name of the table into which the records will be inserted.
  • Use Multi-Row Insert: check this box to write multiple records with a single INSERT statement instead of one statement per record.
  • Use Credentials: check this box if authentication is enabled for the database. 
Tab Credentials:
  • Username: the username to connect to the database.
  • Password: the password to connect to the database.
Tab Advanced:
      Set here all of the properties for the connection pool.
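Before wiring the stage into a pipeline, it can be useful to sanity-check the connection parameters and credentials outside SDC. Here is a minimal sketch using the psql command line client, assuming the PostgreSQL example above and purely hypothetical placeholder values:

# Check that host, port, database, user, password and table are all correct
psql -h db_host -p 5432 -U sdc_user -d my_db -c '\d my_table'

If this command succeeds, the same host, port and database name go into the JDBC Connection String, and the same user and password go into the Credentials tab.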
