
Implementing a Jenkins plugin from scratch in 5 steps and less than 5 minutes (no kidding)

I have written several posts in this blog about useful Jenkins plugins available in the Jenkins Update Center. But what if the plugin you need for a specific purpose hasn't been implemented yet? Do it yourself! If you have Java development skills and know a little bit of Maven, this task isn't impossible.
You need to use Maven 3.1.1 or later in order to successfully complete the process detailed below.
Before you start, you need to update the settings.xml file of your local Maven installation as follows:

1) Add a mirror to the mirrors list for the Jenkins Update Center:
<mirrors>
...
    <mirror>
      <id>repo.jenkins-ci.org</id>
      <url>http://repo.jenkins-ci.org/public/</url>
      <mirrorOf>m.g.o-public</mirrorOf>
    </mirror>

...
</mirrors>

2) Add a profile for Jenkins to the profiles list:
<profiles>
...
    <profile>
      <id>jenkins</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>

...
</profiles>
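Putting the two fragments together, a minimal settings.xml could look like the sketch below (merge it with any mirrors, profiles or server credentials you already have in your own file):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>repo.jenkins-ci.org</id>
      <url>http://repo.jenkins-ci.org/public/</url>
      <mirrorOf>m.g.o-public</mirrorOf>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>jenkins</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>repo.jenkins-ci.org</id>
          <url>http://repo.jenkins-ci.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>
```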

Step 1: Create the plugin structure
From a shell, run:

mvn -U org.jenkins-ci.tools:maven-hpi-plugin:create

This will create the structure of the plugin project in your file system:
  • project root: Contains the pom.xml and all the project subdirectories.
  • src/main/java: Contains the deliverable Java source code for the project.
  • src/main/resources: Contains the deliverable resources for the project, such as property files. For Jenkins plugins it contains the Jelly files for the plugin UI.
  • src/test/java: Contains the testing classes (JUnit or TestNG test cases, for example) for the project.
  • src/test/resources: Contains resources necessary for testing.
Step 2: Create the project for your favourite IDE
To transform the project into an Eclipse project, run from a shell command:

mvn -DdownloadSources=true -DdownloadJavadocs=true -DoutputDirectory=target/eclipse-classes -Declipse.workspace=/path/to/workspace eclipse:eclipse eclipse:add-maven-repo

Similar commands exist to transform the project into a NetBeans or IntelliJ project. Now you can import the project into an Eclipse workspace the usual way.
Step 3: Add your code
Start coding. You can use any Java library you need for the business logic. The plugin UIs are implemented in Jelly (http://commons.apache.org/proper/commons-jelly/).
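As a minimal sketch of what such code can look like, here is a build step that just prints a message to the build log. The package, class and field names are hypothetical, and the exact extension API available to you depends on the Jenkins core version your pom.xml targets; this follows the classic Builder extension point:

```java
package org.example.jenkins; // hypothetical package name

import hudson.Extension;
import hudson.Launcher;
import hudson.model.AbstractBuild;
import hudson.model.AbstractProject;
import hudson.model.BuildListener;
import hudson.tasks.BuildStepDescriptor;
import hudson.tasks.Builder;
import org.kohsuke.stapler.DataBoundConstructor;

// A minimal build step: prints a configurable message to the build log.
public class HelloBuilder extends Builder {

    private final String message;

    // Bound to the "message" field configured in the job UI.
    @DataBoundConstructor
    public HelloBuilder(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }

    @Override
    public boolean perform(AbstractBuild<?, ?> build, Launcher launcher, BuildListener listener) {
        listener.getLogger().println("Hello from the plugin: " + message);
        return true; // true = the build step succeeded
    }

    // Registers the build step with Jenkins and gives it a display name.
    @Extension
    public static final class DescriptorImpl extends BuildStepDescriptor<Builder> {
        @Override
        public boolean isApplicable(Class<? extends AbstractProject> jobType) {
            return true; // available to every job type
        }

        @Override
        public String getDisplayName() {
            return "Hello builder";
        }
    }
}
```

The matching Jelly UI for this sketch would live under src/main/resources, in a directory mirroring the class name (for example org/example/jenkins/HelloBuilder/config.jelly), with a form control bound to the message field.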

Step 4: Execute it locally
You can run (and debug) a plugin locally at development time. From a shell, execute:

mvn hpi:run

The Maven HPI plugin will start a local Jenkins sandbox (by default at http://localhost:8080/jenkins) to emulate the CI server environment.
Step 5: Package it
When you are happy with your implementation and have a stable version, you can package the plugin by running the following command:

mvn package

It will create the .hpi installer in the target directory, ready to be deployed on any Jenkins server the usual way.
Whether you implement a simple or a complex Jenkins plugin, the steps to follow are always the same as described in this post. Happy coding :)
