
Inspect code with FindBugs

Today I'd like to introduce an Open Source tool we have been using for at least three years to perform code review and identify potential bugs in our Java code. Its name is FindBugs. It works by performing static analysis of the Java bytecode. It covers different categories of bugs, including those related to web application security (this was the main reason we started to use it, but later we began to use it for other categories of bugs as well). The software is distributed as a standalone GUI application or as a plug-in available for the most popular IDEs. In this post I will refer to the plug-in for Eclipse Indigo.
You can download FindBugs from the official website.
If you plan to use FindBugs as an Eclipse plug-in, you don’t need to download and install the standalone application too.
The plug-in requires Eclipse 3.3 or later and JDK 1.5 or later. The installation of the plug-in is very simple:
  • In Eclipse, click on Help -> Install New Software...
  • Click on the Add button.
  • Enter the following, then click OK:
      Name: FindBugs update site
      Location: one of the following (note: no trailing slash on the URL):
        http://findbugs.cs.umd.edu/eclipse for official releases
        http://findbugs.cs.umd.edu/eclipse-candidate for candidate releases and official releases
        http://findbugs.cs.umd.edu/eclipse-daily for all releases, including developmental ones
  • Click on the Select All button and then on the Next button.
  • In the Installation Details window click on the Next button.
  • Accept the terms of the license agreement and then click on the Finish button.
  • The plugin is not digitally signed. Go ahead and install it anyway.
  • Click Restart Now to make Eclipse restart itself.

The FindBugs plug-in provides a dedicated FindBugs perspective (see Figure 1). To run an analysis on a project, right-click on the project name and then select Find Bugs.



Figure 1 – The FindBugs perspective

At the end of the project code analysis, the Bug Explorer shows a list of the bugs found, grouped by category. Clicking on a bug shows the fragment of code it affects and a more detailed description of the problem, with suggestions on how to eliminate it. You can refer to the official documentation on the website for a full explanation of the bug categories.
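To give an idea of what ends up in the Bug Explorer, here is a small, hypothetical Java class (the class and method names are mine, not from the post) containing two classic issues that FindBugs detects out of the box:

```java
// Hypothetical example class with two bugs FindBugs flags by default.
public class Example {

    // Flagged as ES_COMPARING_STRINGS_WITH_EQ:
    // compares String references with ==, not their contents with equals()
    public boolean sameName(String a, String b) {
        return a == b; // should be a.equals(b)
    }

    // Flagged as NP_ALWAYS_NULL:
    // s is always null here, so s.length() always throws NullPointerException
    public int brokenLength() {
        String s = null;
        return s.length(); // guaranteed null dereference
    }
}
```

Both warnings come with an explanation and a pointer to the exact line, which makes fixes like replacing `==` with `equals()` quick to apply.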
You can customize the plug-in through the project properties menu (see Figure 2). It is possible to include/exclude bug categories, set the analysis effort, set the XML report parameters, and add filters.



Figure 2 – FindBugs preference settings
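As an example of the filters mentioned above, a minimal exclude filter file might look like the following sketch (the class name and the choice of bug pattern are hypothetical placeholders, not from the post):

```xml
<!-- Hypothetical exclude filter: suppress one bug pattern in one class -->
<FindBugsFilter>
  <Match>
    <Class name="com.example.LegacyParser" />
    <Bug pattern="ES_COMPARING_STRINGS_WITH_EQ" />
  </Match>
</FindBugsFilter>
```

Saving this as an XML file and registering it as an exclude filter in the plug-in's filter settings keeps the matched warnings out of the Bug Explorer, which is handy for legacy code you don't plan to touch.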

