Turning Python Scripts into Working Web Apps Quickly with Streamlit

I just realized that I have been using Streamlit for almost a year now and have posted about it on Twitter and LinkedIn several times, but I never wrote a blog post about it before.

Communication in Data Science and Machine Learning is key. Being able to showcase work in progress and share results with the business makes the difference. Verbal and non-verbal communication skills are important, but so is having a tool that supports conversations with a mixed audience that may not have a technical background, or that prefers to hear about results and business value. I found that Streamlit fits this scenario well.

Streamlit is an Open Source (Apache License 2.0) Python framework that turns data or ML scripts into shareable web apps in minutes (no kidding). Python only: no front‑end experience required.

To get started with Streamlit, just install it through pip (it is available through Anaconda too):

pip install streamlit

and you are ready to execute the working demo application:

streamlit hello

which is accessible in a web browser at http://localhost:8501.

Streamlit is based on these three principles:

  • Scripting: building an app happens in a few lines of code. Developers see it update automatically as they iteratively save the source file. Typically you can implement a full working app by writing around 50 lines of Python code.
  • Weave in interaction: adding a widget is the same as declaring a variable in Python (see the sketch after this list). There is no need to write a backend, define routes, handle HTTP requests, connect a frontend, or write HTML, CSS and JavaScript.
  • Instant deployment: the Streamlit company offers a platform for rapid app deployment, sharing and collaboration. It is anyway possible to deploy Streamlit web apps to other platforms, such as Heroku or AWS, but there you have to manage SSO, security, scalability, etc. as required by the specific platform.
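To give an idea of how little code is involved, here is a minimal sketch of a complete app (a hypothetical app.py of mine, not taken from the official docs), showing that a slider widget is declared just like a variable:

import streamlit as st
import pandas as pd
import numpy as np

st.title("Random walk demo")

# Declaring a widget is just an assignment: the variable holds its current value.
steps = st.slider("Number of steps", min_value=100, max_value=1000, value=500)

# Streamlit reruns the script on each interaction, so the chart stays in sync.
data = pd.DataFrame(np.random.randn(steps, 2).cumsum(axis=0), columns=["x", "y"])
st.line_chart(data)

Save it as app.py, run streamlit run app.py, and the app updates in the browser as you save changes to the file.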
All the major Python visualization libraries are compatible with Streamlit (I have worked mostly with Plotly and Matplotlib, but you can find dozens of examples with others). The same goes for ML/DL frameworks (my experience so far is with Keras and TensorFlow, but these aren't the only ones supported). And of course the libraries common to any DS or ML project, such as NumPy, Pandas or OpenCV, can be used with Streamlit.
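As a quick illustration of that compatibility, here is a hedged sketch (my own example, not from the repositories below) that renders a Matplotlib figure inside a Streamlit app:

import matplotlib.pyplot as plt
import numpy as np
import streamlit as st

# Build an ordinary Matplotlib figure.
fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
ax.plot(x, np.sin(x))
ax.set_title("A Matplotlib figure inside Streamlit")

# st.pyplot embeds the figure in the app page.
st.pyplot(fig)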
With Streamlit you can build really powerful apps: it keeps the promises made on the official website. Rather than walking through the API, I prefer to share here some Streamlit apps available in my GitHub space, so that you can check for yourself what is possible with this framework:

Detection, segmentation and classification of materials inside mostly transparent vessels in images using a CNN: https://github.com/virtualramblas/streamlit-materials-segmentation-in-vessels



A simple frontend for a pre-trained MobileNet CNN model + OpenCV for face mask detection in images: https://github.com/virtualramblas/streamlit-face-mask-detector

What are you waiting for? Start evaluating it and join the community!
I will write blog posts on specific Streamlit topics in the future. Stay tuned!

