Pipelines and processors parse incoming logs into structured attributes, enabling easier querying and analysis.

[Image: Atatus Log Pipeline]

Atatus log pipelines convert logs from diverse formats into a unified structure. This log processing strategy lets you establish a standardized attribute naming convention for your organization, ensuring better insights and analysis.

A pipeline is a sequential chain of processors that parse and enrich logs, extracting valuable attributes from semi-structured text for reuse as facets. Each incoming log is tested against every pipeline's filter; when it matches, that pipeline's processors are applied in order.

Pipelines and processors can be applied to logs of any type. No logging configuration changes or server-side rule deployments are required; everything is configured from the pipeline configuration page.
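The following is a minimal conceptual sketch in Python (illustrative only, not the Atatus API) of how a pipeline's filter and ordered processors apply to each incoming log:

# Illustrative sketch only -- not the Atatus API. Each pipeline has a filter;
# logs that match are run through that pipeline's processors in order.
def apply_pipelines(log, pipelines):
    for pipeline in pipelines:
        if pipeline["filter"](log):              # does this pipeline apply to the log?
            for processor in pipeline["processors"]:
                log = processor(log)             # processors run sequentially, enriching the log
    return log

# Hypothetical pipeline that only handles nginx logs.
nginx_pipeline = {
    "filter": lambda log: log.get("source") == "nginx",
    "processors": [
        lambda log: {**log, "parsed": True},     # stand-in for a Grok parser, remapper, etc.
    ],
}

print(apply_pipelines({"source": "nginx", "message": "GET / 200"}, [nginx_pipeline]))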

To create a pipeline:

  1. Navigate to the Logs tab in the Atatus dashboard and click on Pipelines in the side panel.

  2. Click on the New Pipeline button to create a new pipeline.

  3. By default, only the Date and Message fields are shown. If you need more, click Options and select the desired fields; there are three options: Source, Host, and Service.

  4. From the preview, select a log and apply the filter. Use the Filter drop-down menu to limit which logs the pipeline applies to.

  5. Give your pipeline a name. You can also add a description, which can be used to explain the purpose of the pipeline and the team responsible for it.

  6. Click the Create Pipeline button to create the pipeline.

[Image: Atatus Log Pipeline]

Add a Processor to the pipeline

A pipeline has one or more processors, which are applied sequentially. You can add a processor as follows.

  1. To add a processor, click on the pipeline to which it should be added.

  2. Click the Add Processor button. A popup window appears.

  3. Select the processor type from the drop-down list.

  4. Give a name to the processor.

  5. You can add log samples by clicking the Add Sample button.

  6. Define the parsing rules based on the log samples.

  7. To add a processor to your pipeline, click the Create Processor button.

[Image: Atatus Log Pipeline]

Processors and their types

Processors within a pipeline execute precise data-structuring actions on logs, generating additional attributes that enrich the logs with pertinent information.

Processors perform actions such as parsing log lines with the Grok parser or remapping severity levels with the severity remapper. These actions extract valuable data from log entries, create new attributes from the extracted information, and remap existing attributes to enhance the log data.

Grok Parser

The Grok parser uses predefined or custom patterns defined using regular expressions to match log lines and break them down into meaningful fields or attributes. It allows log data to be transformed from a raw text format into structured data, making it easier to analyze and search.

With a Grok parser, you can define patterns that match specific log formats or known patterns within log messages. For example, you can define patterns to extract timestamps, log levels, error codes, IP addresses, or any other relevant information from your logs.

Example:

# Sample Log Event:

[Sat Aug 12 04:05:51 2006] [notice] Apache/1.3.11 (Unix) mod_perl/1.21 -- configured resuming normal operations

# Grok Rule:

  \[%{DATA:logdate}\] \[%{DATA:severity}\] %{DATA:source} -- %{DATA:Message}

Let's break down the provided Grok parser rule and its sample log event:

%{DATA:logdate}: This part of the rule matches and captures the timestamp "Sat Aug 12 04:05:51 2006" between the first pair of brackets. The captured value is assigned to the field "logdate".

%{DATA:severity}: Matches and captures the log level ("notice") between the second pair of brackets. The captured value is assigned to the field "severity".

%{DATA:source}: Matches and captures the string identifying the source of the log ("Apache/1.3.11 (Unix) mod_perl/1.21"), up to the " -- " separator. The captured value is assigned to the field "source".

%{DATA:Message}: Matches and captures the remainder of the line after the " -- " separator as the log message. The captured value is assigned to the field "Message".

Application to the Sample Log Event:

Applying the provided Grok rule to the sample log event [Sat Aug 12 04:05:51 2006] [notice] Apache/1.3.11 (Unix) mod_perl/1.21 -- configured resuming normal operations extracts the following structured data:

Output Attributes:

  logdate: Sat Aug 12 04:05:51 2006
  severity: notice
  source: Apache/1.3.11 (Unix) mod_perl/1.21
  Message: configured resuming normal operations
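
For reference, the same extraction can be reproduced with an ordinary regular expression. The short Python sketch below is illustrative only: it uses the re module rather than a real Grok engine, with non-greedy named groups standing in for %{DATA}:

import re

# Illustrative only: plain re, with non-greedy named groups standing in for %{DATA}.
log = ("[Sat Aug 12 04:05:51 2006] [notice] "
       "Apache/1.3.11 (Unix) mod_perl/1.21 -- configured resuming normal operations")

rule = (r"\[(?P<logdate>.*?)\] "
        r"\[(?P<severity>.*?)\] "
        r"(?P<source>.*?) -- "
        r"(?P<Message>.*)")

match = re.match(rule, log)
if match:
    for field, value in match.groupdict().items():
        print(f"{field}: {value}")
# logdate: Sat Aug 12 04:05:51 2006
# severity: notice
# source: Apache/1.3.11 (Unix) mod_perl/1.21
# Message: configured resuming normal operations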

Severity Remapper

The Severity Remapper is used to modify or map the severity levels of log events. It allows you to redefine the severity or importance of log messages according to your specific needs or standards.

The primary purpose of a Severity Remapper is to adjust the severity level assigned to log events, providing more meaningful and actionable insights during log analysis and troubleshooting processes. It helps categorize and prioritize log messages based on their impact or criticality.

With a Severity Remapper, you can map severity levels based on other attributes.
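
Conceptually, the remapping is a lookup from an existing attribute's value to a normalized severity. The Python sketch below is illustrative only (not the Atatus implementation) and assumes a hypothetical levelname attribute:

# Illustrative sketch of the idea behind a severity remapper -- not Atatus code.
# It reads a chosen attribute (here a hypothetical "levelname") and maps its
# value onto a normalized severity for the log event.
SEVERITY_MAP = {
    "emerg": "CRITICAL", "fatal": "CRITICAL",
    "err": "ERROR", "error": "ERROR",
    "warn": "WARNING", "warning": "WARNING",
    "notice": "INFO", "info": "INFO",
    "debug": "DEBUG",
}

def remap_severity(log, attribute="levelname"):
    value = str(log.get(attribute, "")).lower()
    if value in SEVERITY_MAP:
        log["severity"] = SEVERITY_MAP[value]  # overwrite the event's severity
    return log

print(remap_severity({"levelname": "err", "message": "disk failure"}))
# {'levelname': 'err', 'message': 'disk failure', 'severity': 'ERROR'}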

To create a Severity Remapper processor:

  1. Select the processor type as Severity Remapper.

  2. Enter the name of the processor.

  3. Set the attribute to remap from. You can set more than one attribute by separating them with commas.

  4. Click the Create Processor button to add the processor to the pipeline.

Category Processor

The Category processor allows you to create a new attribute with a specific value based on filter conditions.

For example, consider creating the attribute status_label based on the following status code ranges:

  • OK for status codes between 200 and 299.
  • REDIRECT for status codes between 300 and 399.
  • ERROR for status codes between 400 and 499.
  • CRITICAL for status codes of 500 and above.

Categories help to logically group and organize data based on one or more properties.
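
The Python sketch below is illustrative only (not the Atatus implementation); it mirrors the status_label example above, where each (filter, value) pair corresponds to a category entry and the first matching filter decides the value written to the target attribute:

# Illustrative sketch of the status_label example -- not Atatus code.
CATEGORIES = [
    (lambda log: 200 <= log["status"] <= 299, "OK"),
    (lambda log: 300 <= log["status"] <= 399, "REDIRECT"),
    (lambda log: 400 <= log["status"] <= 499, "ERROR"),
    (lambda log: log["status"] >= 500, "CRITICAL"),
]

def categorize(log, target="status_label"):
    for matches, value in CATEGORIES:
        if matches(log):
            log[target] = value   # first matching category wins
            break
    return log

print(categorize({"status": 404, "path": "/missing"}))
# {'status': 404, 'path': '/missing', 'status_label': 'ERROR'}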

To create a category processor:

  1. Select the processor type as Category processor.

  2. Enter the name of the category processor.

  3. Set the target category attribute (for example: my.attribute.path).

  4. Populate Category:

    a) Choose one or more filters from the provided drop down list to select all events that match the criteria.

    b) Specify the value to be assigned to the target category.

  5. Review the added entries, remove any unwanted entries from the list if necessary, and click the Create Processor button to add the processor.