
SoapUI - Part 2: Testing SOAP Web Services

In this second part you will discover how to prepare tests for SOAP Web Services using SoapUI. To better understand this post you should know the basics of SOAP WS. As stated in part 1 (http://googlielmo.blogspot.com/2013/12/soapui-part-1.html), I am always referring to the Open Source edition of SoapUI.
First of all let's have a quick look at the User Interface.



On the upper left part of the UI you can find the project navigator (1). It contains a tree structure of the projects in the workspace. When you select an item in the navigator, its general properties appear on the bottom left (2). The central area (3) is the main working area. At the bottom of it there's the logging area (4), containing one tab for each log scope (general SoapUI, Jetty, HTTP requests and responses, errors, memory consumption, etc.). SoapUI uses log4j (http://logging.apache.org/log4j/1.2/) for logging, so if you need to customize the log settings you can edit the $SOAPUI_HOME/bin/soapui-log4j.xml file and restart the tool.
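As an illustration, raising the verbosity of SoapUI's own logger could look like the fragment below inside soapui-log4j.xml. This is a sketch following the log4j 1.2 XML configuration format; the actual logger names and appenders in your SoapUI release may differ, so check the shipped file before editing:

```xml
<!-- Illustrative fragment of $SOAPUI_HOME/bin/soapui-log4j.xml:
     switch the main SoapUI logger from INFO to DEBUG.
     Logger names vary across SoapUI releases - verify against your copy. -->
<logger name="com.eviware.soapui">
    <level value="DEBUG"/>
</logger>
```

Remember that SoapUI reads this file at startup, so the tool has to be restarted for the change to take effect.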
Let's start by creating our first project. In this post we are going to create a project and some tests for this (http://www.webservicex.net/globalweather.asmx) web service. Go to the project navigator and right click on the tree root (Projects). Select New SOAP Project. Choose a name for it (WeatherForecastTest, for example) and then set the Web Service WSDL (http://www.webservicex.net/globalweather.asmx?WSDL). Leave the Create Requests checkbox at its default setting and click on the OK button. Starting from the given WSDL, SoapUI creates the project with two interfaces for the WS (see the figure below), one for the SOAP 1.1 protocol and another for the SOAP 1.2 protocol:



You can delete the interface for SOAP 1.1: we are going to use the SOAP 1.2 one only. The GlobalWeather WS provides two methods:
  • GetCitiesByCountry: returns all the major cities covered by the WS for a given country name;
  • GetWeather: returns the weather report for a given city and country.
SoapUI generates a template request for each method provided by the WS under test. In the figure below you can see the one for the GetCitiesByCountry method:
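For reference, the generated SOAP 1.2 template request should look roughly like the envelope below. This is a sketch: the namespace prefixes are chosen by SoapUI and may differ in your project, while the target namespace comes from the GlobalWeather WSDL:

```xml
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope"
                 xmlns:web="http://www.webserviceX.NET">
   <soap12:Header/>
   <soap12:Body>
      <web:GetCitiesByCountry>
         <!-- SoapUI inserts a ? placeholder for each input parameter -->
         <web:CountryName>?</web:CountryName>
      </web:GetCitiesByCountry>
   </soap12:Body>
</soap12:Envelope>
```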



Replace the ? placeholder in the template with a value, for example Ireland, and execute the request by clicking on the green arrow button on the upper left of the request window. SoapUI will execute the request and you should receive the following response:
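The response should look roughly like the abbreviated envelope below. Note that the GlobalWeather service returns its result as a single string containing escaped XML inside the result element; the element names here are sketched from the service's WSDL and the exact payload may differ:

```xml
<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
   <soap12:Body>
      <GetCitiesByCountryResponse xmlns="http://www.webserviceX.NET">
         <!-- the result is one string of escaped XML, abbreviated here -->
         <GetCitiesByCountryResult>&lt;NewDataSet&gt;
  &lt;Table&gt;
    &lt;Country&gt;Ireland&lt;/Country&gt;
    &lt;City&gt;Dublin&lt;/City&gt;
  &lt;/Table&gt;
  ...
&lt;/NewDataSet&gt;</GetCitiesByCountryResult>
      </GetCitiesByCountryResponse>
   </soap12:Body>
</soap12:Envelope>
```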



You can see both request and response in raw format too:




Do the same for the other method of the WS.
Now that we have checked the WS methods, let's add a test suite. Right click on the WS interface name, choose Generate TestSuite and create it leaving the default values proposed by SoapUI:



A new TestSuite with two TestCases (one for each method of the WS under test) should be created by the tool:



Expand the GetCitiesByCountry TestCase and its Test Step, then double click on the step. The sample request created for it will be displayed in XML format. Repeat the same steps as for the template requests for the WS (replace the ? placeholder with the Ireland value and run the request). Now you need to add some assertions to your test case to make it useful. Click on the Assertions label at the bottom of the request editor. It will expand the assertions editor (it should be empty). Click on the + icon. You should see the Add Assertion dialog box:



Select Contains, then click on the Add button and enter Dublin as the content to check for:


Click on the OK button and the TestCase will execute. If the response contains the Dublin value, the assertion is verified and both the Assertion and the TestCase will be marked in green; if not, they will both be marked in red. In the same way you can add more assertions to a single TestCase. You can decide to run a single TestCase only or the whole TestSuite.
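Besides the simple Contains check, the Add Assertion dialog also offers an XPath Match assertion, useful when you want to target a specific element instead of the whole response body. A sketch of what such an expression could look like for our case (the namespace and element name follow the GlobalWeather WSDL and may need adjusting, and the expected result would be set to true):

```
declare namespace web='http://www.webserviceX.NET';
contains(//web:GetCitiesByCountryResult, 'Dublin')
```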
These are the general steps for any SOAP Web Service you want to put under test through SoapUI.
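Once the TestSuite is in place, it can also be run headless (for example from a scheduled or CI job) using the testrunner script shipped in the SoapUI bin directory. A sketch, assuming the project was saved as WeatherForecastTest-soapui-project.xml and the TestSuite kept the name generated by SoapUI (check the documentation of your SoapUI release for the exact options):

```
$SOAPUI_HOME/bin/testrunner.sh -s "GlobalWeatherSoap12 TestSuite" \
    WeatherForecastTest-soapui-project.xml
```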
