To show that the considerations from my last post hold for any log shipping scenario, let's now see how the same process applies to a more realistic use case: shipping and analyzing the logs of a MongoDB database.
MongoDB logs pattern
Starting from release 3.0 (I am using release 3.2 for this post), MongoDB logs come with the following pattern:
<timestamp> <severity> <component> [<context>] <message>
where:
- timestamp is in iso8601-local format.
- severity is the level associated with each log message. It is a single-character field; possible values are F (Fatal), E (Error), W (Warning), I (Informational) and D (Debug).
- component indicates the functional category of the log message. Please refer to the documentation of the specific MongoDB release you're using for the full list of possible values.
- context is the specific context for a message.
- message: I don't think you need any explanation here ;)
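For example (the following line is purely illustrative, not taken from a real deployment), a 3.2 node emits lines such as:

2016-01-15T10:12:14.123+0100 I NETWORK  [initandlisten] waiting for connections on port 27017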
This translates into the following grok pattern:
%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{WORD:component}  %{DATA:context} %{GREEDYDATA:message}
Please notice that there are 2 spaces between the component and the context.
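If, as in my last post, Logstash is the tool shipping these logs (the grok syntax above assumes it), a minimal filter section built around this pattern could look like the sketch below. The date filter mapping the parsed timestamp onto @timestamp is my own addition for illustration, not something the pattern requires:

filter {
  grok {
    # parse the MongoDB 3.x log line into its five fields
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{WORD:component}  %{DATA:context} %{GREEDYDATA:message}" }
    # keep only the trailing text in the message field
    overwrite => [ "message" ]
  }
  date {
    # use the log's own iso8601-local timestamp as the event time
    match => [ "timestamp", "ISO8601" ]
  }
}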
Create an index on Elasticsearch
Now that we know the pattern of the MongoDB logs, we can create an index for them in Elasticsearch:

curl -XPUT 'http://<es_host>:<es_port>/mdblogs' -d '{
  "mappings": {
    "nodelogs": {
      "properties": {
        "timestamp": {"type": "date"},
        "severity": {"type": "string"},
        "component": {"type": "string"},
        "context": {"type": "string"},
        "message": {"type": "string"}
      }
    }
  }
}'
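To wire everything together, the Logstash output section would then point at this index. This is only a sketch: the hosts value is a placeholder to replace with your own Elasticsearch address, and document_type matches the nodelogs mapping defined above:

output {
  elasticsearch {
    # send the parsed events to the index created above
    hosts => ["<es_host>:<es_port>"]
    index => "mdblogs"
    document_type => "nodelogs"
  }
}

Once a few lines have been shipped, a quick curl -XGET 'http://<es_host>:<es_port>/mdblogs/_search?pretty' lets you check that the documents are being indexed as expected.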