Logstash Configuration for TD
Logstash is a highly customizable part of the Elastic Stack that retrieves data. It can be configured to collect data from many different sources, such as log files and REST API requests, and send it to Elasticsearch for later visualization in Kibana. Hundreds of plugins are available to expand its functionality, and many are included with the software at installation. We will be using the file input plugin to read the JSON logs generated by the Python script. A complete list of available plugins and links to their documentation can be found at Support Matrix | Elastic.
Logstash configuration is governed by special configuration files, which define where to retrieve data, how to filter it, and where to output it. For this demo, Elastic Stack was installed on Ubuntu 18.04 via apt-get, so these files live by default in the /etc/logstash/conf.d directory. Your directories may differ depending on how Elastic Stack was installed. More information about the Logstash directory layout can be found at Logstash Directory Layout.
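As a quick sanity check, you can list the configuration directory. On an apt-get installation, the listing typically looks something like the following, though the exact files vary by version:

ls /etc/logstash

conf.d  jvm.options  log4j2.properties  logstash.yml  pipelines.yml  startup.options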
Let’s configure Logstash to grab the DNS security data found in the JSON files generated by the Python script.
Access the machine where your Logstash instance is installed. Note: For this demo, Elastic Stack was installed on Ubuntu 18.04.
Open a terminal.
Navigate to where your Logstash configuration (.conf) files are located. In this demonstration environment, these files are located in /etc/logstash/conf.d. Input the following command to navigate to the correct directory:
cd /etc/logstash/conf.d
Create a new file called csp-dns-events.conf:
sudo touch csp-dns-events.conf
Open the file with gedit for editing:
sudo gedit csp-dns-events.conf
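If you are working in a terminal-only session (for example, over SSH) where gedit is unavailable, any terminal editor works just as well; nano is one common alternative:

sudo nano csp-dns-events.conf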
Copy and paste the following into the file. Save and close the file when finished.
input {
  file {
    # Read every JSON file the Python script writes to /tmp/rpz
    path => "/tmp/rpz/*"
    codec => "json"
    # "read" mode treats each file as complete rather than tailing it
    mode => "read"
    # Point sincedb at /dev/null so the files are re-read on every run
    sincedb_path => "/dev/null"
  }
}
filter {
  split {
    # Break the "result" field into one event per record
    field => "result"
    terminator => ","
  }
  mutate {
    # Drop the HTTP status code; it is not useful in the index
    remove_field => ["status_code"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "csp-dns-events"
  }
}
Let’s break down what is happening in this code.
Input: Here is where we read the JSON files created by the Python script, using the file input plugin.
Filter: This is where we split every record found in the JSON files into individual hits in Kibana. By doing so, we can directly search and organize by any field returned by the GET request in the Python script. The returned body of the request is placed in a single field called result, with each record terminated by a comma, so we tell Logstash that. Using the mutate plugin, we remove the extra status_code field, since it only clutters the data.
Output: Send the data to Elasticsearch and give the index a name. This is the name that will appear in Kibana when creating a new index pattern.
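To make the filter step concrete, here is a sketch of the transformation. The field names inside result are hypothetical; the real ones depend on what the GET request in the Python script returns. Suppose one of the files in /tmp/rpz contains:

{"status_code": 200, "result": [{"domain": "malicious.example", "action": "block"}, {"domain": "suspicious.example", "action": "block"}]}

The json codec parses the file into a single event, split then produces two events, one per element of result, and mutate removes status_code from each, so Elasticsearch receives two separate, individually searchable documents. (When result arrives as an array, split works element by element; the terminator setting only comes into play if result is a single comma-separated string.)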
Logstash config files follow a specific schema. More information on the structure of config files can be found at Structure of a pipeline | Logstash Reference [8.8] | Elastic.
Navigate to your home directory for Logstash. For this demo, this is /usr/share/logstash. Input the following command to navigate to the correct directory:
cd /usr/share/logstash
Run Logstash with your new configuration:
sudo bin/logstash -f /etc/logstash/conf.d/csp-dns-events.conf
Allow several minutes for processing. The console will report any syntax errors in your config file.
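If you want to validate the file without starting a pipeline, Logstash can check the syntax and exit immediately:

sudo bin/logstash -f /etc/logstash/conf.d/csp-dns-events.conf --config.test_and_exit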
Alternatively, you can simply restart the Logstash service, but the console will not warn you of any errors with your config file:
sudo systemctl restart logstash
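Once Logstash has finished processing, you can confirm that documents reached Elasticsearch with a quick query. This assumes Elasticsearch is listening on localhost:9200, as in the output block above:

curl 'localhost:9200/csp-dns-events/_search?pretty&size=1'

A non-zero hits.total value means the index is populated and ready to be added in Kibana.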