
TagUI: an Excellent Open Source Option for RPA - Introduction


 Photo by Dinu J Nair on Unsplash

Today I want to introduce TagUI, an Open Source RPA (Robotic Process Automation) tool I am using to automate test scenarios for web applications. It is developed and maintained by the AI Singapore national programme and allows writing flows to automate repetitive tasks, such as regression testing of web applications. Flows are written in natural language: English and 20 other languages are currently supported. The tool works on Windows, Linux, and macOS.
The official TagUI documentation can be found here.
The tool doesn't require installation: just go to the official GitHub repository and download the archive for your specific OS (ZIP for Windows, tar.gz for Linux or macOS). After the download completes, unpack its content to your local hard drive. The executable to use is named tagui (.cmd on Windows, .sh on other operating systems) and it is located in the <destination_folder>/tagui/src directory.
In order to use it, the Google Chrome web browser needs to be installed.

TagUI scripts are plain text files whose names just require the .tag extension. A TagUI script is referred to as a flow, because it contains the actions to automate and their execution order.
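For example, the following minimal flow (the file name first_flow.tag here is just a hypothetical choice) visits a web page and prints a message to the command line:

// first_flow.tag - visit a web page and print a message
https://www.google.com
echo "Google home page visited"

To execute it, pass the file name to the executable from the src directory: tagui first_flow.tag.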
Here's a quick reference to the principal TagUI steps available for web application RPA (a short snippet combining some of them follows the list):

  • click: Click on any element, region, or image of a web page.
  • visit: Navigate to a web page. The step name can be omitted: simply specifying the URL of the destination page is enough.
  • type: Input text into web input fields.
  • assign: Assign a value to a variable.
  • read: Save text from a web element or the screen into a variable.
  • if...else: The usual if...else statement common to any programming language.
  • for: The usual for loop common to any programming language.
  • select: Select an option from a dropdown list.
  • table: Save HTML content to a CSV file.
  • popup: Execute steps in a new tab.
  • frame: Execute steps in a frame.
  • download to: Specify the location where downloaded files are stored.
  • upload: Upload a file to a website.
  • snap: Take a screenshot of a web page or element.
  • echo: Print a message to the command line.
  • show: Print the text of an element to the command line.
  • check: Validate a condition, printing one message if it is true and another one if it is false.
  • wait: Pause execution for a given number of seconds.
  • //: Add a comment.
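As an illustration, here is a hypothetical snippet (not taken from the official docs; the variable names search_term and page_text are arbitrary) combining some of the steps above: an assignment, variable interpolation through backticks, a read of the page content, and an if...else check:

// search for a term and check the results page
search_term = 'TagUI RPA'
https://www.google.com
type //*[@name="q"] as `search_term`[enter]
read page to page_text
if page_text contains 'TagUI'
    echo "the results page mentions TagUI"
else
    echo "no mention of TagUI found"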

Steps interact with web elements through identifiers, which can be web identifiers (such as id, name, class or XPath), image snapshots, or screen coordinates. The following is an example of a full TagUI flow that performs a search on google.com:

// Visit google.com
https://www.google.com

// Look on the web page for an element with 'q' in its text, id or name
// and type the search text TagUI RPA, then Enter
type //*[@name="q"] as TagUI RPA[enter]

// Click the first result using XPath
click (//*[@class="g"])[1]//a

// Wait 3 seconds so the page can load
wait 3

// Save a screenshot of the web page to top_result.png
snap page to top_result.png
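Assuming the flow above is saved as google_search.tag (a hypothetical name), it can be executed from the src directory with:

tagui google_search.tag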

TagUI flows can be executed in the following modes:
    • Browser based: an instance of Chrome is started, so it is possible to visually watch everything that happens during flow execution.
    • Headless: the flow executes in the background, with no visible browser.
    • No browser: the flow executes without any browser (suitable for automation on machines with no browser or graphical interface).

Execution logs can be generated in text or HTML format.
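For example, sticking to the hypothetical google_search.tag flow, the execution mode and the HTML log generation can be selected through command line options (these are the option names documented for the release I used; check the usage message printed by tagui for yours):

tagui google_search.tag (default browser based execution)
tagui google_search.tag headless (invisible Chrome)
tagui google_search.tag nobrowser (no browser at all)
tagui google_search.tag report (save the execution log in HTML format)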

A few words of personal feedback on this tool: it is definitely easy to use and it also gives people with no technical background the possibility to automate tasks. I have found no major drawbacks so far. The documentation is comprehensive in terms of step syntax and general practices, but it lacks complex examples, so when applying flows to web applications implemented with modern web frameworks some practices can only be learned by trial and error. The live mode execution helps a lot in those situations.

In a future post I will share some tips about a few particular situations I have encountered when using it.

