
A full walkthrough of adding an ELK logging system to the EHR and OA systems: building it with elasticsearch + logstash + kibana + filebeat


1. Create a virtual machine

2. Install docker and docker-compose

# Install docker

add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce=5:19.03.11~3-0~ubuntu-xenial

Edit /etc/docker/daemon.json:

{"registry-mirrors": ["https://dockerhub.azk8s.cn"],"data-root": "/data/docker","metrics-addr" : "0.0.0.0:9323","experimental" : true ,"bip": "172.31.0.1/24","default-address-pools":[{"base":"172.31.0.0/16","size":24}]}systemctl enable dockersystemctl start docker

# Download docker-compose

curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
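Sanity check that the binary is on PATH and executable:

docker-compose --version   # should report 1.24.1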

3. Write the docker-compose file for logstash + elasticsearch + kibana

(Note: create the elasticsearch container first, then go into it to create the user roles and enable access authentication.

es config: xpack.security.enabled: true

discovery.type: single-node

Create and start the container, then exec into it and run:

./bin/elasticsearch-setup-passwords auto
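For example, from the host (a sketch; replace <es_container> with the actual container name from docker ps):

# generate passwords for the built-in users (elastic, kibana, logstash_system, ...)
docker exec -it <es_container> bin/elasticsearch-setup-passwords auto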

 

If the passwords do not take effect, don't put the security settings in the compose environment section; put them in the config file and mount it into the container.
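For example, a minimal ./elasticsearch/config/elasticsearch.yml written from the shell (a sketch; it just mirrors the security-related settings used in the compose file below):

mkdir -p elasticsearch/config
cat > elasticsearch/config/elasticsearch.yml <<'EOF'
cluster.name: elk-logs
network.host: 0.0.0.0
discovery.type: single-node
xpack.security.enabled: true
EOF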

The following is just an example:

version: '2.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.10
    volumes:
      - esdata:/usr/share/elasticsearch/data:rw
      - /etc/localtime:/etc/localtime
      - /var/log/elasticsearch:/usr/share/elasticsearch/logs
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - TZ="Asia/Shanghai"
      - _JAVA_OPTIONS=-Xmx1024m -Xms1024m
      - bootstrap.memory_lock=false
      - cluster.name=elk-logs
      - network.host=0.0.0.0
      - xpack.security.enabled=true
      - discovery.type=single-node
      - node.master=true
      - node.data=true
      - discovery.zen.minimum_master_nodes=1
      - discovery.zen.ping.unicast.hosts=10.0.0.11:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
    network_mode: host
    restart: unless-stopped
  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.10
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    network_mode: host
    environment:
      - TZ="Asia/Shanghai"
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
      - node.name="logstash"
      - http.host="0.0.0.0"
      - pipeline.id="pipeline"
      - xpack.monitoring.elasticsearch.url="http://10.0.0.11:9201"
      #- xpack.monitoring.elasticsearch.username="logstash_system"
      #- xpack.monitoring.elasticsearch.password=""
      - xpack.monitoring.enabled=false
    restart: unless-stopped
  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.10
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    network_mode: host
    environment:
      - LS_JAVA_OPTS=-Xmx512m -Xms512m
      - SERVER_NAME="kibana"
      - XPACK_SECURITY_ENABLED=true
      - XPACK_MONITORING_ENABLED=true
      - ELASTICSEARCH_HOSTS="http://10.0.0.11:9200"
      #- elasticsearch.username="elastic"
      #- elasticsearch.password=""
    restart: unless-stopped
  logstash-nginx:
    image: docker.elastic.co/logstash/logstash:6.8.10
    volumes:
      - ./logstash/pipeline_nginx:/usr/share/logstash/pipeline
    network_mode: host
    environment:
      - TZ="Asia/Shanghai"
      - LS_JAVA_OPTS=-Xmx256m -Xms256m
      - node.name="logstash"
      - http.host="0.0.0.0"
      - pipeline.id="pipeline"
      - xpack.monitoring.elasticsearch.url="http://10.0.0.11:9201"
      #- xpack.monitoring.elasticsearch.username="logstash_system"
      #- xpack.monitoring.elasticsearch.password=""
      - xpack.monitoring.enabled=false
    restart: unless-stopped
volumes:
  esdata:

This creates the data volume under docker's data-root directory.
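With security enabled and the passwords generated, verify elasticsearch is reachable (substitute the generated elastic password):

curl -u elastic:<password> 'http://10.0.0.11:9200/_cluster/health?pretty'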

Create the four containers, and give the EHR logstash and the OA logstash each a pipeline configured to match and collect their logs (the host directories for these mounts are sketched below).
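The host directories referenced by the compose volumes need to exist before the first docker-compose up; for example:

# mkdir -p is idempotent, so re-running is harmless
mkdir -p elasticsearch/config kibana/config logstash/pipeline logstash/pipeline_nginx
# the EHR pipeline below goes in logstash/pipeline/, the OA one in logstash/pipeline_nginx/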

(1) EHR

input {
  beats {
    port => 5044
  }
}
filter {
  # ignore log comments
  if [message] =~ "^#" {
    drop {}
  }
  # check that fields match your IIS log settings
  grok {
    #remove_field => ["message"]
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:s-ip} %{WORD:cs-method} %{NOTSPACE:cs-uri-stem} %{NOTSPACE:cs-uri-query} %{NUMBER:s-port} %{NOTSPACE:cs-username} %{IPORHOST:c-ip} %{NOTSPACE:cs-useragent} %{NOTSPACE:referer} %{NUMBER:sc-status} %{NUMBER:sc-substatus} %{NUMBER:sc-win32-status} %{NUMBER:time-taken:int}"]
  }
  # set the event timestamp from the log
  # https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
  date {
    match => ["log_timestamp", "YYYY-MM-dd HH:mm:ss"]
    target => "@timestamp"
    #remove_field => ["log_timestamp"]
  }
  # matches the big, long nasty useragent string to the actual browser name, version, etc
  # https://www.elastic.co/guide/en/logstash/current/plugins-filters-useragent.html
  useragent {
    source => "cs-useragent"
    prefix => "browser_"
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.11:9200"]
    user => "elastic"
    password => ""
    index => "ehr-access_log-%{+YYYYMMdd}"
  }
  # output to console
}
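You can syntax-check this pipeline with the same logstash image before wiring it in (a sketch; ehr.conf is a hypothetical filename for the config above):

docker run --rm -v "$PWD/logstash/pipeline:/usr/share/logstash/pipeline" \
  docker.elastic.co/logstash/logstash:6.8.10 \
  -f /usr/share/logstash/pipeline/ehr.conf --config.test_and_exit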

(The user and password used here should be created separately.)
 

Create a metricbeat_reader role that has read and view_index_metadata privileges on the metricbeat-* indices.
Create a metricbeat_writer role that has manage_index_templates and monitor cluster privileges, as well as write, delete, and create_index privileges on the metricbeat-* indices.
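The excerpt above is Elastic's metricbeat example; adapted to this setup, the writer role and user can be created through the 6.8 security API (role/user names and index patterns here are illustrative):

# role allowed to write the ehr-*/oa-* indices that the pipelines create
curl -u elastic:<password> -XPOST 'http://10.0.0.11:9200/_xpack/security/role/logs_writer' \
  -H 'Content-Type: application/json' -d '{
    "cluster": ["manage_index_templates", "monitor"],
    "indices": [{"names": ["ehr-*", "oa-*"], "privileges": ["write", "delete", "create_index"]}]
  }'

# user bound to that role, for the elasticsearch output in the pipelines
curl -u elastic:<password> -XPOST 'http://10.0.0.11:9200/_xpack/security/user/logs_user' \
  -H 'Content-Type: application/json' -d '{"password": "<logs_user_password>", "roles": ["logs_writer"]}'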

(2) OA

input {
  beats {
    port => 5045
  }
}
filter {
  if [message] =~ "^#" {
    drop {}
  }
  if [type] == "nginx_access" {
    grok {
      match => ["message", "%{IPORHOST:remote_addr} - %{HTTPDUSER:remote_user} \[%{HTTPDATE:time_local}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:body_bytes}|-) %{QS:referrer} %{QS:user_agent} %{QS:x_forward_for}"]
    }
  }
  if [type] == "nginx_error" {
    grok {
      match => [
        "message", "(?<time_local>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:log_level}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:error_message}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?",
        "message", "(?<time_local>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:log_level}\]\s{1,}%{GREEDYDATA:error_message}"
      ]
    }
  }
  # java application log matching (this conditional belongs inside the filter block)
  if [type] =~ "java" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp}\[%{WORD:level}\] %{DATA:class}\): %{GREEDYDATA:javamessage}"]
      #remove_field => ["message"]
    }
  }
  date {
    match => ["log_timestamp", "YYYY-MM-dd HH:mm:ss"]
    target => "@timestamp"
    remove_field => ["log_timestamp"]
  }
  useragent {
    source => "cs-useragent"
    prefix => "browser_"
  }
}
output {
  elasticsearch {
    hosts => ["10.0.0.11:9200"]
    user => "elastic"
    password => ""
    index => "oa-%{type}-log-%{+YYYYMMdd}"
  }
}

(3) Kibana config file mapping

server.port: 5601
server.host: "10.0.0.11"
elasticsearch.url: "http://10.0.0.11:9200"
elasticsearch.username: "kibana"
elasticsearch.password: ""

Run sudo docker-compose up -d to create and start the containers.
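Once the containers are up, Kibana's status endpoint confirms it can talk to elasticsearch (with security enabled, pass credentials):

curl -s -u elastic:<password> 'http://10.0.0.11:5601/api/status' | head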

4. Install filebeat on each source host (Windows IIS for EHR, CentOS for OA)

(1) EHR on Windows

https://www.elastic.co/cn/downloads/beats/filebeat

Note: keep the elasticsearch, logstash, kibana, and filebeat versions identical (6.8.10 here).

Download the Windows build to use together with IIS.

Configure filebeat with the locations of the log files to read.

 

Unzip to the default location and run the install with administrator privileges:

Unzip into C:\Program Files.

Rename the filebeat-6.8.10-windows-x86_64 directory to Filebeat.

Right-click the PowerShell icon and choose "Run as administrator".

Run the following commands to install Filebeat as a Windows service:

PS > cd 'C:\Program Files\Filebeat'

PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1

Note:

If script execution is disabled on the system, you may need to grant execution rights for this one script:

PowerShell.exe -ExecutionPolicy RemoteSigned -File .\install-service-filebeat.ps1

 

The installed service's command line (adjust the paths if you installed elsewhere):

"C:\Program Files\filebeat\filebeat.exe" -c "C:\Program Files\filebeat\filebeat.yml" -path.home "C:\Program Files\filebeat" -path.data "C:\Program Files\filebeat\data" -path.logs "C:\Program Files\filebeat\logs"

Configuration file:

 

filebeat.inputs:
- type: log
  enabled: true
  fields:
    type: pc
  fields_under_root: true
  paths:
    - C:\inetpub\logs\LogFiles\W3SVC1\*.log
- type: log
  enabled: true
  fields:
    type: mobile
  fields_under_root: true
  paths:
    - C:\inetpub\logs\LogFiles\W3SVC4\*.log
#- type: log
#  enabled: true
#  fields:
#    type: appLog
#  fields_under_root: true
#  paths:
#    - C:\HRLOG\*\*.txt
#  # join multi-line log entries into a single event
#  multiline.pattern: ^\[
#  multiline.negate: true
#  multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:
  host: "10.0.0.11:5601"
  username: "kibana"
  password: ""

output.logstash:
  hosts: ["10.0.0.11:5044"]
  # the logstash output has no username/password settings; authentication to
  # elasticsearch is handled by the logstash pipeline's output section
  #username: "logstash_system"
  #password: ""
  worker: 4
  compression_level: 3
  bulk_max_size: 20480

xpack.monitoring:
  enabled: false
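Before starting the service, filebeat's built-in test subcommands can validate the config and the connection to logstash:

PS C:\Program Files\Filebeat> .\filebeat.exe test config -c .\filebeat.yml
PS C:\Program Files\Filebeat> .\filebeat.exe test output -c .\filebeat.yml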

 

(2) OA on CentOS (you can also just download the rpm for a one-step install)

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.8.10-linux-x86_64.tar.gz

tar xzvf filebeat-6.8.10-linux-x86_64.tar.gz

Give filebeat.yml the required ownership: sudo chown root:root filebeat.yml

The configuration file is as follows:

 

filebeat.inputs:
- type: log
  enabled: true
  fields:
    type: nginx_access
  fields_under_root: true
  paths:
    - /usr/local/nginx/logs/access.log
- type: log
  enabled: true
  fields:
    type: nginx_error
  fields_under_root: true
  paths:
    - /usr/local/nginx/logs/error.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:
  host: "10.0.0.11:5601"
  username: "kibana"
  password: ""

output.logstash:
  hosts: ["10.0.0.11:5045"]
  # the logstash output has no username/password settings; authentication to
  # elasticsearch is handled by the logstash pipeline's output section
  #username: "logstash_system"
  #password: ""
  worker: 4
  compression_level: 3
  bulk_max_size: 20480

xpack.monitoring:
  enabled: false
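The same test subcommands work here to validate the config and the connection to the logstash endpoint:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml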

 

sudo ./filebeat -e -c filebeat.yml

Create a systemd service so filebeat starts on boot:

[Unit]
Description=filebeat
Wants=network-online.target
After=network-online.target

[Service]
User=root
ExecStart=/home/pupumall/filebeat/filebeat -e -c /home/pupumall/filebeat/filebeat.yml

[Install]
WantedBy=multi-user.target

Then reload systemd and enable the service:

sudo systemctl daemon-reload
sudo systemctl enable filebeat.service
sudo systemctl start filebeat
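Standard systemd checks confirm the shipper came up:

sudo systemctl status filebeat
sudo journalctl -u filebeat -f    # follow filebeat's own log output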

5. Log in to Kibana and configure roles and users

Configure the index patterns and the display.

Pay attention to how the logs are filtered and matched by the pipelines.
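Before creating the index patterns, it helps to confirm the expected indices are actually being written (substitute the elastic password):

curl -u elastic:<password> 'http://10.0.0.11:9200/_cat/indices/ehr-*,oa-*?v'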
