
Automatically sending artifacts from Jenkins to a repository manager: the quickest (and not so dirty) way

Several plugins available in the Jenkins Update Center can store artifacts in a repository manager at the end of a successful build. Their configuration is sometimes tricky, the documentation is poor or out of date, and some of them are tested (and work) only against one of the available Open Source and commercial build artifact repository managers and fail against the others. This post describes a quick and not so dirty general way to accomplish this. The solution works fine against Apache Archiva, but it should work with Artifactory or Nexus as well.
You need to add an Invoke top-level Maven targets build step as the last build step of the job you're running to build your Java application/library and configure it the following way:

Maven version: any Maven version you need (the default for your Jenkins instance or one of the releases you may have set up for it).
Goals: deploy:deploy-file -Dfile=<application_root>\target\<artifact_name>.jar -DpomFile=<application_root>\pom.xml -DrepositoryId=<artifact_repository_id> -Durl=<artifact_repository_url>
where
  • application_root is the root of the Java application/library in the build job workspace
  • artifact_name is the chosen name for the artifact
  • artifact_repository_id is the ID of the Archiva repository to use, as declared in the Maven settings.xml file (more on this later in this post)
  • artifact_repository_url is the HTTP URL of the destination Archiva repository.
Use private Maven repository: checked
Settings file: select Settings file in filesystem and enter the path to the settings.xml file to use
Global Settings file: select Settings file in filesystem and enter the path to the global settings.xml file to use
A filled-in example of the Goals field is shown just after this list.
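As a minimal sketch, assuming a hypothetical project checked out in a myapp folder of the workspace, producing a myapp-1.0.0.jar artifact, and an Archiva repository declared with the id archiva.internal (these names and the URL are illustrative, not taken from a real setup), the Goals field would look like this:

deploy:deploy-file -Dfile=myapp\target\myapp-1.0.0.jar -DpomFile=myapp\pom.xml -DrepositoryId=archiva.internal -Durl=http://archiva.example.com:8080/repository/internal/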

The settings file can belong to a different Maven installation, as long as it is accessible by Jenkins. If authentication is required to access the Archiva repository, the settings file must contain the credentials: you need to add a new <server> tag to the list of servers (the <servers> tag):

<servers>
  ...
  <server>
    <id>repoId</id>
    <username>username</username>
    <password>password</password>
  </server>
  ...
</servers>

repoId is the id to be used as the value of the repositoryId argument of the deploy-file Maven goal.
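For example, sticking with the hypothetical archiva.internal id used above (the credentials are illustrative as well), the <server> entry matching the -DrepositoryId=archiva.internal argument of the goal would be:

<server>
  <id>archiva.internal</id>
  <username>jenkins</username>
  <password>secret</password>
</server>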
Finally, in order to skip the deployment of the artifact in case one of the previous build steps (build, unit tests, code coverage, static analysis, whatever) fails, you should also wrap the deploy step in a conditional step; this requires the Conditional BuildStep plugin (https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin). A possible configuration for the conditional step is sketched just below; the following snapshots show the final configuration:
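A minimal sketch of the conditional step settings, assuming the run conditions provided by the Conditional BuildStep and Run Condition plugins (the exact labels may differ slightly depending on the installed plugin versions):

Conditional step (single)
  Run?: Current build status
  Worst status: Success
  Best status: Success
  Builder: Invoke top-level Maven targets, configured with the deploy:deploy-file goal described above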





