ELK Stack: Logstash shows that it's receiving log entries from Filebeat, but Elasticsearch is not creating my index


I am new to the ELK stack and wanted to test it out to see whether I want to use it. I have Elasticsearch, Kibana, and Logstash installed on one virtual machine, and Filebeat and nginx installed on another virtual machine.

I have a custom log format for my nginx access.log that looks like this:

<IP> - - [21/Dec/2023:00:46:10 +0000] "GET /favicon.ico HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

# log format

log_format  main_ext  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '"$host" sn="$server_name" '
                      'rt=$request_time '
                      'ua="$upstream_addr" us="$upstream_status" '
                      'ut="$upstream_response_time" ul="$upstream_response_length" '
                      'cs=$upstream_cache_status';
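
For context, the format is hooked up to the access log with an access_log directive along these lines (paraphrased; main_ext is the format name defined above):

access_log  /var/log/nginx/access.log  main_ext;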

I have everything configured, and the Kibana dashboard is up and running with data flowing into it. The only problem is that the expected indices are not showing up in Elasticsearch or in Kibana. The only indices that appear are the default filebeat-* ones; I am not able to see my nginx-access-logs or nginx-error-logs indices.

My Logstash config file is at /etc/logstash/conf.d/beats.conf and looks like this:

input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  if [type] == "nginxaccess" {
    grok {
      match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{URI:referrer}" "%{DATA:agent}" "%{IPORHOST:x_forwarded_for}" sn="%{DATA:sn}" rt=%{NUMBER:request_time} ua="%{DATA:upstream_addr}" us="%{DATA:upstream_status}" ut="%{DATA:upstream_response_time}" ul="%{DATA:upstream_response_length}" cs=%{DATA:upstream_cache_status}' }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
    }
  }

  if [type] == "nginxerror" {
    grok {
      match => { "message" => '%{TIMESTAMP_ISO8601:timestamp} \[%{WORD:log_level}\] %{NUMBER:pid}#%{NUMBER:tid}: %{GREEDYDATA:message}' }
    }
    date {
      match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
    }
  }
}



output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
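
In case it helps with debugging, my understanding is that the access-log grok pattern can be tried in isolation with a throwaway pipeline like the one below (stdin in, rubydebug out; test-grok.conf is just a placeholder name, and the pattern is copied verbatim from the config above). Pasting the sample log line from earlier into it should show whether and how it parses:

# test-grok.conf - throwaway pipeline for testing the access-log grok pattern by hand
input {
  stdin { }
}

filter {
  grok {
    match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{URI:referrer}" "%{DATA:agent}" "%{IPORHOST:x_forwarded_for}" sn="%{DATA:sn}" rt=%{NUMBER:request_time} ua="%{DATA:upstream_addr}" us="%{DATA:upstream_status}" ut="%{DATA:upstream_response_time}" ul="%{DATA:upstream_response_length}" cs=%{DATA:upstream_cache_status}' }
  }
}

output {
  stdout { codec => rubydebug }
}

I would run it with something like /usr/share/logstash/bin/logstash -f test-grok.conf (assuming the standard package install path); if the pattern does not match, the event should come back tagged with _grokparsefailure.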

And here is my /etc/filebeat/filebeat.yml:

filebeat.inputs:

#Each - is an input. Most options can be set at the input level, so
#you can use different inputs for various configurations.
#Below are the input-specific configurations.

#filestream is an input for collecting log messages from files.
- type: filestream
  #Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  #Change to true to enable this input configuration.
  enabled: true
  #Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
  fields:
    type: syslog

- type: filestream
  id: nginx-access-logs
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  fields:
    type: nginxaccess  # Set the log type to nginxaccess
    beat: nginxaccess

- type: filestream
  id: nginx-error-logs
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  fields:
    type: nginxerror  # Set the log type to nginxerror
    beat: nginxerror

filebeat.config.modules:
  #Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  #Set to true to enable config reloading
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  #The Logstash hosts
  hosts: ["<IP>:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

After configuring those files, I restarted both Logstash and Filebeat and ran these commands:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=[":9200"]'

sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=[':9200'] -E setup.kibana.host=:5601
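
If it's relevant, my understanding is that those setup commands mainly install the stock filebeat index template and the Kibana dashboards, and that what actually got loaded can be checked with something like:

# List the Filebeat index templates that `filebeat setup` installed
curl -X GET "http://localhost:9200/_index_template/filebeat*?pretty"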

Then I checked my indices again, and nginx-access-logs and nginx-error-logs were still not showing up. The only ones there are the default filebeat-* indices:

curl -X GET "http://localhost:9200/_cat/indices?v"

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open filebeat-2023.12.20 _nckH4WZSmuaI1umnjVR-w 1 1 91772 0 36.1mb 36.1mb 36.1mb
yellow open filebeat-2023.12.14 yA8Sl4lXSYG_vB8d67Cqfg 1 1 47876 0 19.6mb 19.6mb 19.6mb
yellow open filebeat-2023.12.15 OwUwZMdBR3myvZkuvUgF2A 1 1 75513 0 28.5mb 28.5mb 28.5mb
yellow open .ds-filebeat-8.11.2-2023.12.08-000001 PPRThZq3RIK490NVmW605A 1 1 0 0 249b 249b 249b
yellow open filebeat-2023.12.16 4EOUSNCKRlOzAy1zdih6hg 1 1 79795 0 29.4mb 29.4mb 29.4mb
yellow open filebeat-2023.12.17 JL7TRkUgTzeGbT-M0bBD5g 1 1 64067 0 24.4mb 24.4mb 24.4mb
yellow open filebeat-2023.12.10 m3aWcEayTnu3r_iTxd_5aA 1 1 77669 0 27.9mb 27.9mb 27.9mb
yellow open filebeat-2023.12.21 mvbk8wNiQ9-rT9Vs-W-Vqg 1 1 62321 0 27.9mb 27.9mb 27.9mb
yellow open filebeat-2023.12.11 bXq4al_xQ62eMjAnEKR5Xw 1 1 81750 0 29mb 29mb 29mb
yellow open filebeat-2023.12.12 V2ojtGRTR4ixSGT_tgkhHg 1 1 70454 0 27mb 27mb 27mb
yellow open filebeat-2023.12.13 eRuR2uf2QdqF00VagDnjpw 1 1 72317 0 27.4mb 27.4mb 27.4mb
yellow open filebeat-2023.12.18 Q_IEBhszSOSK9305LsvXOg 1 1 82494 0 30.5mb 30.5mb 30.5mb
yellow open filebeat-2023.12.19 KsJZ2um5Q8e7v2ckP9MlGA 1 1 77330 0 29.6mb 29.6mb 29.6mb
yellow open filebeat-2023.12.08 C8ih6TUMRdm2AsSO5idwkw 1 1 13953 0 5.8mb 5.8mb 5.8mb
yellow open filebeat-2023.12.09 2Taw_nROSCiBeYJFk-kyXA 1 1 58190 0 21.2mb 21.2mb 21.2mb
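
In case it helps narrow things down, these are the kinds of follow-up queries I can run on the Elasticsearch host and share the output of (the nginx* pattern is just my guess at what the index names should look like):

# Is there any nginx-* index at all?
curl -X GET "http://localhost:9200/_cat/indices/nginx*?v"

# Pull one document from an index that IS being created, to see which
# fields the events coming through Logstash actually carry
curl -X GET "http://localhost:9200/filebeat-2023.12.21/_search?size=1&pretty"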

Can someone please help me figure out what I'm doing wrong or what is going on? I am lost at this point!
