Set Up ELK on Linux

Reza Mohammadi
3 min read · Nov 4, 2023


Prerequisites for ELK:

  1. At least 2 GB of RAM
  2. At least 20 GB of storage
  3. Java (OpenJDK 8)
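
You can check the first two quickly on the instance (commands assume a Debian/Ubuntu host, as used throughout this guide):

$ free -h    # total memory should be at least 2 GB
$ df -h /    # the root filesystem should have roughly 20 GB free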

ELK Installation

ELK Components

Installing Java on the instance:

$ sudo apt update
$ sudo apt install -y openjdk-8-jdk
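
You can confirm the JDK is available before moving on:

$ java -version    # should report an OpenJDK 1.8.x build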

Installing Nginx on the instance:

$ sudo apt update
$ sudo apt -y install nginx
$ sudo systemctl enable nginx

Installing Elasticsearch, Kibana, Logstash, and Filebeat:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-amd64.deb

$ sudo dpkg -i elasticsearch-7.2.0-amd64.deb
----------------------------------------------------------
$ wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-amd64.deb

$ sudo apt install -y apt-transport-https

$ sudo dpkg -i kibana-7.2.0-amd64.deb
----------------------------------------------------------
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.deb

$ sudo dpkg -i logstash-7.2.0.deb
----------------------------------------------------------
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-amd64.deb

$ sudo dpkg -i filebeat-7.2.0-amd64.deb
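
Optionally, if you want the services to survive a reboot, you can enable them now; the unit names come from the .deb packages installed above:

$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch kibana logstash filebeat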

ELK Hands-on

  1. Collect static Apache logs using Logstash and analyze them in Kibana.
  2. Collect real-time web logs, configure Beats to ship them into Elasticsearch, and analyze them in Kibana.

Configuration

Uncomment (and adjust) the following lines in /etc/elasticsearch/elasticsearch.yml:

cluster.name: my-application
node.name: node-1
network.host: localhost
http.port: 9200

Then start Elasticsearch:

$ sudo systemctl start elasticsearch
$ sudo systemctl status elasticsearch
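
A quick way to verify Elasticsearch is running is to query its root endpoint, which returns a small JSON body with the cluster name and version:

$ curl http://localhost:9200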

Uncomment the following lines in /etc/kibana/kibana.yml:

server.port: 5601
server.host: "localhost"

Then start Kibana:

$ sudo systemctl start kibana
$ sudo systemctl status kibana
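
Kibana can take a little while to come up; you can check that something is listening on port 5601:

$ sudo ss -ltnp | grep 5601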

Install htpasswd to protect Kibana with a username and password:

$ sudo apt install -y apache2-utils                    # provides the htpasswd tool

Generate a hashed password for a user:

$ sudo htpasswd -c /etc/nginx/htpasswd.users <user>   # you will be prompted to enter and confirm the password

Check that the entry was written: cat /etc/nginx/htpasswd.users

Connect Nginx to Kibana:

$ sudo vi /etc/nginx/sites-available/default
> server {
    listen 80;

    server_name <Instance Private IP>;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
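
Before applying the change, it is worth validating the edited configuration:

$ sudo nginx -t    # tests the syntax of all Nginx config files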

Then restart Nginx: sudo systemctl restart nginx

For more practice

Download the sample log file below:

$ sudo wget https://logz.io/sample-data

# Name it clearly:
$ mv sample-data apache.log
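
You can peek at the first few lines to confirm the file looks like Apache combined-format access logs, which is what the Logstash pipeline below expects:

$ head -n 3 apache.log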

Alternatively, you can add sample data from the Kibana home page.

Ingest the data for use in Kibana

$ cd /etc/logstash/conf.d/
$ vi apachelog.conf

In this new file, define a pipeline that reads the log, parses it, and sends it to Elasticsearch so you can visualize it in Kibana.

Writing a Pipeline

A pipeline mainly consists of three things:

  • input: the source of the data.
  • filter: how the data is parsed, transformed, or dropped before it is sent.
  • output: where the data is sent.
input {
  file {
    path => "/home/ubuntu/apache.log"   # where the log file is located
    start_position => "beginning"
    sincedb_path => "/dev/null"         # don't persist the read position, so the file is re-read from the start every run
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "petclinic-prd-1"   # in Kibana, the index pattern petclinic-prd* matches every index with this prefix
  }
}
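
Before starting the service, you can ask Logstash to validate the pipeline; the binary path below assumes the default layout of the .deb package:

$ sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apachelog.conf --config.test_and_exit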

Then start Logstash: sudo systemctl start logstash
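
Once Logstash has processed the file, the new index should appear in Elasticsearch:

$ curl 'http://localhost:9200/_cat/indices?v'    # look for petclinic-prd-1 in the list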

Filebeat

Use Filebeat for real-time data. With Filebeat you don't need to write a pipeline; its modules already know how to collect and parse common log formats.

List the Filebeat modules and see which are enabled or disabled:

$ sudo filebeat modules list

Enabling a module:

$ sudo filebeat modules enable <one-of-the-modules-in-the-list>
$ sudo filebeat modules enable nginx
$ sudo filebeat modules enable system

To configure which data each module collects:

# change the directory
$ cd /etc/filebeat/modules.d/

# open the one you want to config
$ vi nginx.yml

# determine where the logs are
> - module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # the line we add

  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]    # the line we add


# another config
$ vi system.yml
> - module: system
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*"]

  auth:
    enabled: true
    var.paths: ["/var/log/auth.log*"]

# Then start the filebeat
$ sudo systemctl start filebeat
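
If no data shows up in Kibana, Filebeat's built-in checks are a good first stop:

$ sudo filebeat test config    # validates the Filebeat configuration
$ sudo filebeat test output    # checks the connection to Elasticsearch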

Then create an index pattern using Kibana.

Note

To make Kibana and Elasticsearch accessible from outside the instance, add the configuration below:

# vi /etc/kibana/kibana.yml
> server.host: 0.0.0.0
elasticsearch.hosts: ["http://<server-ip>:9200"]   # uncomment and edit this line



# vi /etc/elasticsearch/elasticsearch.yml
> network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1"]
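
After editing both files, restart the services so they bind to the new addresses:

$ sudo systemctl restart elasticsearch
$ sudo systemctl restart kibana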

Create dashboards

On the ELK instance, run the command below to import and set the default dashboards for your real-time data:

$ sudo filebeat setup -e
