How to Create and Manage Indices (ELKB Architecture: Monitoring File Changes in Multiple Directories and Creating Multiple Indices)
1. Installing and Configuring Elasticsearch, Kibana, Logstash, and Filebeat
1.1 Download and extract the Kibana package:
[root@Aruen ~]# cd /usr/local/
[root@Aruen local]# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.0-linux-x86_64.tar.gz
[root@Aruen local]# tar -zxvf kibana-5.5.0-linux-x86_64.tar.gz
Edit the configuration file:
[root@Aruen config]# vi kibana.yml
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: "http://192.168.162.69:9200"
Run Kibana in the background:
[root@Aruen local]# cd kibana-5.5.0-linux-x86_64/bin
[root@Aruen bin]# nohup ./kibana &
Access it in a browser: http://192.168.162.69:5601/
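Kibana can only show data if it can reach the Elasticsearch instance set in elasticsearch.url, so it is worth confirming that the cluster is up first (a quick check, assuming Elasticsearch is already running on that host):
[root@Aruen local]# curl http://192.168.162.69:9200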
1.2 Download and extract the Logstash package:
[root@Aruen local]# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.0.tar.gz
[root@Aruen local]# tar -zvxf logstash-5.5.0.tar.gz
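The pipeline configuration below is kept in a conf.d directory under the Logstash install (it is referenced as ../conf.d/logstash.conf when Logstash is started later); assuming that directory does not exist yet, create it and change into it first:
[root@Aruen local]# mkdir logstash-5.5.0/conf.d
[root@Aruen local]# cd logstash-5.5.0/conf.d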
Create the logstash.conf configuration file:
[root@Aruen conf.d]# vi logstash.conf
Logstash configuration tailored to our requirements:
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
  # Strip fields we do not want indexed. Run this only for events that parsed
  # successfully, so the _jsonparsefailure check in the output section below
  # can still see the "tags" field on failed events.
  if "_jsonparsefailure" not in [tags] {
    mutate {
      remove_field => ["message", "@version", "host", "path", "@timestamp", "offset", "input_type", "source", "tags", "beat"]
    }
  }
}
output {
  if "_jsonparsefailure" not in [tags] {
    if [type] == "test1" {
      elasticsearch {
        hosts => ["192.168.162.69:9200"]
        index => "human-%{date}"
        document_type => "human"
        codec => json {
          charset => ["UTF-8"]
        }
        template => "/usr/template/test1.json"
        template_name => "human"
        template_overwrite => true
        document_id => "%{id}"
      }
    }
    if [type] == "test2" {
      elasticsearch {
        hosts => ["192.168.162.69:9200"]
        index => "original-%{date}"
        document_type => "original"
        codec => json {
          charset => ["UTF-8"]
        }
        template => "/usr/template/test2.json"
        template_name => "original"
        template_overwrite => true
        document_id => "%{id}"
      }
    }
    if [type] == "test3" {
      elasticsearch {
        hosts => ["192.168.162.69:9200"]
        index => "vehicle-%{date}"
        document_type => "vehicle"
        codec => json {
          charset => ["UTF-8"]
        }
        template => "/usr/template/test3.json"
        template_name => "vehicle"
        template_overwrite => true
        document_id => "%{id}"
      }
    }
    if [type] == "test4" {
      elasticsearch {
        hosts => ["192.168.162.69:9200"]
        index => "face-%{date}"
        document_type => "face"
        codec => json {
          charset => ["UTF-8"]
        }
        template => "/usr/template/test4.json"
        template_name => "face"
        template_overwrite => true
        document_id => "%{id}"
      }
    }
  }
}
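Note that this pipeline assumes every log line shipped by Filebeat is a self-contained JSON object containing at least a date field (interpolated into the index name) and an id field (used as the document _id); all other fields are passed through to Elasticsearch unchanged. A purely hypothetical example record:
{"id": "1001", "date": "2017-07-20", "name": "example", "status": "ok"}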
Start Logstash in the background:
[root@Aruen local]# cd logstash-5.5.0/bin
[root@Aruen bin]# nohup ./logstash -f ../conf.d/logstash.conf &
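If Logstash does not come up cleanly, the pipeline file can be checked for syntax errors with the Logstash 5.x config-test flag, and a successful start can be confirmed by checking that the Beats port is listening:
[root@Aruen bin]# ./logstash -f ../conf.d/logstash.conf --config.test_and_exit
[root@Aruen bin]# netstat -tlnp | grep 5044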
1.3 Download and extract the Filebeat package:
[root@Aruen local]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.0-linux-x86_64.tar.gz
[root@Aruen local]# tar -zvxf filebeat-5.5.0-linux-x86_64.tar.gz
Edit the configuration file:
[root@Aruen local]# vi filebeat-5.5.0-linux-x86_64/filebeat.yml
Filebeat configuration tailored to our requirements (each prospector's document_type becomes the event's type field, which the Logstash output conditionals above branch on):
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/test1/*.dat
  document_type: test1
- input_type: log
  paths:
    - /var/log/test2/*.dat
  document_type: test2
- input_type: log
  paths:
    - /var/log/test3/*.dat
  document_type: test3
- input_type: log
  paths:
    - /var/log/test4/*.dat
  document_type: test4
output.logstash:
  hosts: ["192.168.162.69:5044"]
Start Filebeat in the background:
[root@Aruen local]# cd filebeat-5.5.0-linux-x86_64
[root@Aruen filebeat-5.5.0-linux-x86_64]# nohup ./filebeat -e -c filebeat.yml &
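The configuration can also be validated with Filebeat's -configtest flag (assuming the 5.x series spelling), and the whole pipeline can be exercised by appending a test record in the format described above to one of the monitored directories (file name and field values here are purely illustrative):
[root@Aruen filebeat-5.5.0-linux-x86_64]# ./filebeat -configtest -c filebeat.yml
[root@Aruen filebeat-5.5.0-linux-x86_64]# echo '{"id": "1001", "date": "2017-07-20", "name": "example", "status": "ok"}' >> /var/log/test1/sample.dat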
Stop the services (find the process and kill it; the same approach works for Kibana and Filebeat):
ps -ef | grep logstash
kill -9 <PID>
2. Monitored Data and Index Templates
Upload the index templates test1.json, test2.json, test3.json, and test4.json to the /usr/template/ directory.
/var/log/test1/*.dat, /var/log/test2/*.dat, /var/log/test3/*.dat, and /var/log/test4/*.dat are the data directories monitored by Filebeat.
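The template files themselves are not shown in this article; purely as an illustration, a minimal test1.json for the human-* indices might look like the following (field names are hypothetical and the real mappings depend on your data). The index pattern must match the index name built in the Logstash output ("human-%{date}"), and the mapping type should match its document_type ("human"):
{
  "template": "human-*",
  "settings": {
    "number_of_shards": 5
  },
  "mappings": {
    "human": {
      "properties": {
        "id":   { "type": "keyword" },
        "date": { "type": "keyword" },
        "name": { "type": "text" }
      }
    }
  }
}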
As data is added under the different directories, Elasticsearch creates the corresponding indices and documents.
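To confirm that the indices are actually being created as new .dat files arrive, they can be listed with Elasticsearch's cat indices API:
[root@Aruen local]# curl 'http://192.168.162.69:9200/_cat/indices?v'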
The Elasticsearch data can then be explored in Kibana by creating index patterns such as human-*, original-*, vehicle-*, and face-*.