
Deploying and scaling an Oracle database on a multi-node Kubernetes cluster

In this post I am going to explain how to deploy and scale an Oracle Express database on a multi-node Kubernetes cluster. I am going to use this Docker container by Maxym Bylenko. I am referring to the Oracle XE 11g container because of an open issue with the Oracle XE 12c one at the time I went through the process described below. I am assuming readers have at least a basic knowledge of Kubernetes concepts.
The first thing to do is create a Pod. We can do this (and the other operations described in this post) declaratively, through a YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: "oradb"
  labels:
    name: "oradb"
spec:
  containers:
    - image: "sath89/oracle-xe-11g:latest"
      name: "oradb"
      ports:
        - containerPort: 1521
  restartPolicy: Always
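
Assuming the manifest above has been saved to a file (the name oradb-pod.yaml below is arbitrary), the Pod can be created with:

# the file name is just an example
kubectl create -f oradb-pod.yaml

or, in OpenShift Origin:

oc create -f oradb-pod.yaml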


Once the Pod has been successfully created, we need to create a Service for it:

apiVersion: v1
kind: Service
metadata:
  name: "oradb"
  labels:
    app: "oradb"
spec:
  ports:
    - port: 1521
  selector:
    app: "oradb"


Now we need to create a ReplicationController. It makes it easy to create multiple pods and ensures that the desired number of pods always exists: if a pod crashes, the ReplicationController replaces it. Note that the pod template below labels the replicas with app: "oradb", which matches the Service selector above, so the replicas are reachable through the Service. Here's how we can declaratively create a ReplicationController, specifying that we want 2 replicas:

apiVersion: v1
kind: ReplicationController
metadata:
  name: "oradb"
  labels:
    app: "oradb"
spec:
  replicas: 2
  selector:
    app: "oradb"
  template:
    metadata:
      labels:
        app: "oradb"
    spec:
      containers:
        - image: "sath89/oracle-xe-11g:latest"
          name: "oradb"


We can check if the ReplicationController has been created successfully from a shell through kubectl:

kubectl get rc

or, if in OpenShift Origin:

oc get rc

NAME      DESIRED   CURRENT   AGE
oradb     2         2         1d


Let's check for the pods:

kubectl get pods

or, in OpenShift Origin:

oc get pods

NAME          READY     STATUS    RESTARTS   AGE
oradb         1/1       Running   0          1d
oradb-6rs8h   1/1       Running   0          1d
oradb-cq2x9   1/1       Running   0          1d
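
To see the self-healing behaviour in action, we can delete one of the replicas (the pod name below is just the one from the example listing above) and then list the pods again: the ReplicationController immediately starts a replacement pod with a newly generated name:

# the pod name is taken from the listing above
kubectl delete pod oradb-6rs8h
kubectl get pods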


Imagine now we need to scale the cluster from 2 to 3 pods. This can be done simply with the kubectl scale command:

kubectl scale rc oradb --replicas=3

or the oc scale command:
 
oc scale rc oradb --replicas=3

As soon as the command above completes, we find a new pod in the list:

NAME          READY     STATUS    RESTARTS   AGE
oradb         1/1       Running   0          1d
oradb-6rs8h   1/1       Running   0          1d
oradb-cq2x9   1/1       Running   0          1d
oradb-rplzj   1/1       Running   0          1d
 
And that's the new situation for the ReplicationController:

NAME      DESIRED   CURRENT   AGE
oradb     3         3         1d


The database SID is xe and the credentials to connect are:
username: system
password: oracle
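
For a quick connection test from a local machine (assuming an Oracle client such as sqlplus is installed there), one option is to forward the database port of one of the pods and then connect through localhost:

kubectl port-forward oradb 1521:1521

# in another shell; xe is the SID mentioned above
sqlplus system/oracle@//localhost:1521/xe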



