
Adding a Maven behavior to non-Maven projects

Non-Maven projects built through Jenkins cannot be automatically uploaded to a build artifact repository (such as Apache Archiva or JFrog Artifactory) at the end of a successful build. To enable this level of automation when you can't (or don't want to) transform a standard Java project into a Maven project, but still need a uniform build management process, you simply add a pom.xml file to the root of your project and commit it to your source code repository along with all the other project files. You can create a basic template following the official documentation and then add/update the specific sections as described in this post. This way developers don't need to change anything else in the current projects' structure, and they don't need Maven on their local development machines or a Maven plugin in the IDE they use.
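For reference, the basic template described in the official documentation is just a few lines; the coordinates below are the sample values from the Maven docs and get replaced as shown in the next section:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mycompany.app</groupId>
    <artifactId>my-app</artifactId>
    <version>1.0</version>
</project>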
Here's a description of all the sections of the pom.xml file. The values you need to customize for your own project are shown as example or placeholder values in the snippets below.

Maven coordinates

Here you need to specify the groupId (the package path), the artifactId (the name of your final package), the version (this property is a string, so it could be something like 1.0, 2.0-alpha, 2.1-SNAPSHOT, etc.), the packaging type (jar, war, ear, hpi, etc.), the project name and a project URL (a Wiki page, for example):

<groupId>org.googlielmo</groupId>
<artifactId>mynomavenproject</artifactId>
<version>1.0</version>
<packaging>jar</packaging>

<name>mynomavenproject</name>
<url>http://maven.apache.org</url>


Repositories

This section contains the reference to your artifact repository:
<repositories>
    <repository>
      <id>myinternalmaven.mycompany.com</id>
      <url>http://<artifact_repo_host>:<port>/<context></url>
    </repository>
</repositories>
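A <repositories> entry tells Maven where to download artifacts from. If you also want the deploy phase (mvn deploy) to upload the final package to your repository at the end of a successful Jenkins build, you would add a <distributionManagement> section as well; a minimal sketch, using the same placeholder host as above:

<distributionManagement>
    <repository>
      <id>myinternalmaven.mycompany.com</id>
      <url>http://<artifact_repo_host>:<port>/<context></url>
    </repository>
</distributionManagement>

The credentials for the repository id are typically stored in the settings.xml file of the Maven installation used by Jenkins, not in the POM itself.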

Properties

Nothing to change here unless you use a different encoding.
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

Maven default structure override

This is the most important section. It lets you override some default Maven parameters in order to give a Maven behavior to the project; all of the elements below belong inside the <build> section of the POM. You only need to change the sourceDirectory (the root of the source code of your project), the testSourceDirectory (the root of the Unit Tests of your project), the resource directory (the root of your configuration files) and the test resource directory (the root of the configuration files used by the Unit Tests only).

<directory>target</directory>
<outputDirectory>target/classes</outputDirectory>
<finalName>${project.artifactId}-${project.version}</finalName>
<testOutputDirectory>target/test-classes</testOutputDirectory>
<sourceDirectory>src</sourceDirectory>
<testSourceDirectory>src/org/googlielmo/mynomavenproject/unitest</testSourceDirectory>
<resources>
    <resource>
        <directory>properties</directory>
    </resource>
</resources>
<testResources>
    <testResource>
        <directory>properties</directory>
    </testResource>
</testResources>

Shade plugin

The section below enables the Maven Shade plugin (declared in the <plugins> list inside <build>). It creates a second jar including all the dependencies of the project. You can customize the name of the generated file, but it's a best practice to keep a reference to the project name and release, as shown in the example below.

<plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
          <execution>
              <phase>package</phase>
              <goals>
                  <goal>shade</goal>
              </goals>
          </execution>
      </executions>
      <configuration>
          <finalName>runnable-${project.artifactId}-${project.version}</finalName>
      </configuration>
</plugin>
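With this configuration, after a successful mvn package the target directory should contain both the regular jar and the shaded runnable-<artifactId>-<version>.jar file, since the Shade plugin doesn't replace the main artifact when its finalName differs from the project one.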

Find Bugs plugin

The section below enables the FindBugs plugin through the reportPlugins configuration of the Maven Site plugin (declared in the same <plugins> list). This way Jenkins can perform static analysis of the bytecode of a new build just by finding this reference to the plugin in the project POM file. You can copy and paste the section below without changes.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-site-plugin</artifactId>
    <version>3.0-beta-2</version>
    <configuration>
        <reportPlugins>
            <plugin>
               <groupId>org.codehaus.mojo</groupId>
               <artifactId>findbugs-maven-plugin</artifactId>
               <version>2.3.1</version>
               <configuration>
                   <effort>Max</effort>
                   <xmlOutput>true</xmlOutput>
               </configuration>
            </plugin>
        </reportPlugins>
    </configuration>
</plugin>
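With this configuration the FindBugs analysis runs during the site phase, so a Jenkins job executing the site goal will produce the XML report enabled above, which the Jenkins FindBugs plugin can then use for its tables and trend graphs.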

Dependencies

You need to specify all the dependencies of your project as <dependency> child tags of the <dependencies> tag, following the usual Maven convention. You can look up their exact coordinates (groupId, artifactId, version) in the Maven Central repository (http://search.maven.org/) for third party libraries, or in your internal artifact repository for your custom libraries. Example:
<dependencies>
      <dependency>
            <groupId>javax.sip</groupId>
            <artifactId>jain-sip-api</artifactId>
            <version>1.2</version>
            <scope>compile</scope>
      </dependency>
      <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.15</version>
      </dependency>
</dependencies>
Only the dependencies listed in this section will be considered for the build; any other library in the project workspace will be ignored. The <scope> tag is optional and defaults to compile.

Unit Tests reports

The section below lets Maven generate reports for the Unit Test results through the Surefire Report plugin. The test execution results can then be shown in the Jenkins dashboard as tables and graphs, and in text mode in the console output as well. You can copy and paste the section below without changes.
<reporting>
     <plugins>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-surefire-report-plugin</artifactId>
           <version>2.12.4</version>
        </plugin>
     </plugins>
</reporting>
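Putting it all together: to make it clear where each fragment described in this post lives in the final file, here's a sketch of the overall pom.xml layout (each comment stands for the corresponding section shown above):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Maven coordinates -->
    <groupId>org.googlielmo</groupId>
    <artifactId>mynomavenproject</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>
    <name>mynomavenproject</name>
    <url>http://maven.apache.org</url>

    <!-- repositories section -->
    <!-- properties section -->
    <!-- dependencies section -->

    <build>
        <!-- default structure override (directory, sourceDirectory,
             resources, testResources, ...) -->
        <plugins>
            <!-- Shade plugin section -->
            <!-- FindBugs (Site plugin) section -->
        </plugins>
    </build>

    <!-- reporting section (Surefire report plugin) -->
</project>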
