
The Moka pattern

If, like me, you are an espresso coffee lover (the real espresso, http://en.wikipedia.org/wiki/Espresso , not that sort of dark filthy water imitation you can find in many American coffee bars), you should recognize the object in figure 1:


Fig. 1 - The traditional Moka


It is a Moka machine from a famous Italian brand. It allows you to prepare only espresso coffee, but it does its job very well. Other coffee machine producers had ideas to add features to this basic but brilliant concept. Here is an example:


Fig. 2 - Advanced Moka

If you are an espresso lover and a lazy guy too, you should appreciate it. The producer of the advanced Moka shown in figure 2 implemented the following:
  • Scheduling: you can program the exact time at which the coffee is ready to be consumed.
  • Validation: before starting, the Moka checks whether coffee and water have been provided. If not, it warns the user with human-readable messages.
  • Automation: the Moka itself manages the preparation. No human intervention is needed to check if the coffee is ready or to remove the Moka from the fire (and no fire at all: you just need to plug the Moka into a wall socket).
  • Alert: the Moka can be used as an alarm in the morning, to wake up and start a brand new day with a good coffee.
  • Stress reduction: it keeps the coffee warm for up to 30 minutes, waiting for you to overcome the morning awakening setback.
So we have two different models (the basic one and its specialization) that ultimately have the exact same goal. Some improvements were made to the original concept (and others could be made), but the main functionality stays the same. It is up to you to choose the basic or the advanced model according to your needs or tastes, as sketched in the code below.
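To translate the metaphor into code, here is a minimal sketch in Python (all class and method names are hypothetical, chosen only for illustration): a basic model and a specialization that adds features around the same main functionality.

class Moka:
    """Basic model: it makes just espresso coffee, but it does its job very well."""
    def brew(self):
        print("Brewing espresso")

class AdvancedMoka(Moka):
    """Specialization: adds validation and conveniences around the same core goal."""
    def __init__(self, water=False, coffee=False):
        self.water = water
        self.coffee = coffee

    def brew(self):
        # Validation: warn the user if water or coffee is missing.
        if not (self.water and self.coffee):
            raise ValueError("Please add both water and coffee before starting")
        super().brew()  # same main functionality as the basic model
        print("Keeping the coffee warm for up to 30 minutes")

# Usage: AdvancedMoka(water=True, coffee=True).brew()

Both models expose the same brew() operation; the advanced one only wraps it with extra conveniences. What neither of them should ever grow is a cook_pasta() method.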
Now suppose you are a pasta lover. In order to prepare pasta, you need the following distributed system:


Fig. 3 - Basic system to cook pasta

The one shown in figure 3 is the basic configuration. Depending on the kind of pasta you want to cook, it could be specialized or extended with some plugins (figure 4 shows a plugin for spaghetti).


Fig. 4 - The spaghetti server plugin

But in the end its main goal is to help you cook pasta. If some coffee and pasta lover asked you to add a "make espresso coffee" feature to the system in figure 3, what would you reply? I am sure 99% of people would suggest using both the systems of figures 1 and 3, and maybe finding some sort of integration between them, rather than wasting time and money trying to build a useless and absurd feature. Always use a pot and a colander to cook pasta, and a Moka to make espresso coffee.
What's the meaning of this example? If you work in IT, you have probably faced this kind of situation at least once in your professional life: someone asking you to use a system for a purpose different from the one it was designed for, while another system exists that totally fits the need and does the job better. It sometimes seems so hard for some people to understand this basic concept. The next time a customer, a sales representative or a manager with no technical skills comes to you with this kind of request, it could be helpful to reply by recycling the example proposed in this post: the Moka is perfect for coffee ;) That's it. And ask him/her to think in terms of integration rather than complication.
