
Logstash on Windows reports "Successfully started Logstash API endpoint {:port=>9600}" but writes nothing to ES

Views: 26   Published: 2023-12-18 08:31:23.0

Scenario: after starting Logstash, writing to ES (creating the index and adding documents) sometimes succeeds and sometimes fails. On failure the output is:

C:\Users\admin\Desktop\work\logstash\logstash-5.1.1\bin>logstash -f logstash.conf
Could not find log4j2 configuration at path /Users/admin/Desktop/work/logstash/logstash-5.1.1/config/log4j2.properties. Using default config which logs to console
14:03:38.758 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.100.88:9200"]}}
14:03:38.760 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x47861fbb URL:http://192.168.100.88:9200>, :healthcheck_path=>"/"}
14:03:38.816 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x47861fbb URL:http://192.168.100.88:9200>}
14:03:38.816 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>"C:\\Users\\admin\\Desktop\\work\\logstash\\logstash-5.1.1\\template\\logstash-mapping.json"}
14:03:38.930 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"*", "version"=>50001, "order"=>1, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"keyword", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"keyword", "norms"=>false, "index"=>"not_analyzed", "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}}}}}}
14:03:38.932 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.100.88:9200"]}
14:03:38.949 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1500}
14:03:38.953 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
14:03:39.044 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

The logstash.conf file is configured as follows:

input {
  file {
    path => ["D:\tomcat\apache-tomcat-8.5.65\logs\GisqPlatformExplorer.log","D:\tomcat\apache-tomcat-8.5.65\logs\platform.log","D:\tomcat\apache-tomcat-8.5.65\logs\platform-rest.log"]
    codec => multiline {
      pattern => "^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2},\d{3}"
      negate => true
      what => "previous"
      charset => "UTF-8"
    }
    start_position => "end"
  }
}
filter {
  if "microName" in [message] or "SimpleCORSFilter" in [message] {
    drop {}
  }
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:datetime}(\s+%{DATA})*\s+-\s+%{GREEDYDATA:log_json}"]
    remove_field => ["message"]
  }
  json {
    source => "log_json"
    target => "json"
    remove_field => "log_json"
  }
  date {
    match => ["datetime", "yyyy-MM-dd HH:mm:ss"]
    target => "@timestamp"
    remove_field => ["path", "@version", "host"]
  }
}
output {
  if "_jsonparsefailure" not in [tags] and "_groktimeout" not in [tags] and "_grokparsefailure" not in [tags] and "multiline" not in [tags] {
    elasticsearch {
      hosts => ["192.168.100.88:9200"]
      index => "logstash_index"
      document_type => "logstash_type"
      manage_template => true
      #template_overwrite => true
      #template_name => "mapping-ik"
      template => "C:\Users\admin\Desktop\work\logstash\logstash-5.1.1\template\logstash-mapping.json"
    }
    stdout { codec => rubydebug }
  }
}
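The multiline codec's timestamp pattern can be sanity-checked outside Logstash. A minimal Python sketch of what pattern/negate/what do (the sample log lines below are made up for illustration):

```python
import re

# Same pattern as the multiline codec above: a line beginning with a
# "2021-05-06 14:03:38,758"-style timestamp starts a new event.
PATTERN = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2},\d{3}")

def group_events(lines):
    """negate => true, what => "previous": every line that does NOT
    match the pattern is folded into the previous event."""
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)          # timestamped line: new event
        else:
            events[-1] += "\n" + line    # continuation (e.g. stack trace)
    return events

sample = [
    "2021-05-06 14:03:38,758 ERROR - something failed",
    "java.lang.NullPointerException",            # continuation line
    "    at com.example.Foo.bar(Foo.java:42)",   # continuation line
    "2021-05-06 14:03:39,001 INFO - next event",
]
print(len(group_events(sample)))  # 2 — the stack trace folds into the first event
```

This is why a multi-line Java stack trace arrives in ES as one document instead of one document per line.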

In my case the cause was that the application was not running. With start_position => "end" configured, the log files never changed, so nothing was read and nothing was written to ES.

Solution (the idea came from a one-line tip found online: by default Logstash does not process files older than one day):

1. Change the setting to start_position => "beginning"

2. Delete the sincedb file under data\plugins\inputs\file

3. Start Logstash again; the index is created and data is written successfully.
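The "files older than one day" tip maps to the file input's ignore_older setting. A hedged sketch of that age check, assuming the commonly cited one-day (86400 s) cutoff (the real plugin does this internally; this helper is only for illustration):

```python
import os
import time

ONE_DAY = 86400  # seconds; the one-day cutoff the quoted tip refers to

def would_be_ignored(path, ignore_older=ONE_DAY, now=None):
    """Mimic the file input's ignore_older check: a file whose last
    modification is older than the cutoff is not picked up."""
    now = time.time() if now is None else now
    return (now - os.path.getmtime(path)) > ignore_older
```

So a stale Tomcat log from yesterday can be silently skipped even when start_position is set to "beginning".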

In theory, start_position => "end" should also work as long as the log files are changing. To verify, I deleted the index created above, changed the configuration back to start_position => "end", and restarted Logstash:

C:\Users\admin\Desktop\work\logstash\logstash-5.1.1\bin>logstash -f logstash.conf
Could not find log4j2 configuration at path /Users/admin/Desktop/work/logstash/logstash-5.1.1/config/log4j2.properties. Using default config which logs to console
14:25:58.567 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.100.88:9200"]}}
14:25:58.569 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0xfb49bc5 URL:http://192.168.100.88:9200>, :healthcheck_path=>"/"}
14:25:58.620 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0xfb49bc5 URL:http://192.168.100.88:9200>}
14:25:58.621 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>"C:\\Users\\admin\\Desktop\\work\\logstash\\logstash-5.1.1\\template\\logstash-mapping.json"}
14:25:58.736 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"*", "version"=>50001, "order"=>1, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"keyword", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"keyword", "norms"=>false, "index"=>"not_analyzed", "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}}}}}}
14:25:58.738 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.100.88:9200"]}
14:25:58.755 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1500}
14:25:58.758 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
14:25:58.848 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

After starting the application I tested again: the index was created in ES and the collected log entries were written.

Summary: the key point is that each log entry is read only once during collection.

With start_position => "end", if the log files do not change (because the application is not running), the problem above occurs.

With start_position => "beginning", the logs are read from the start, so the problem above does not occur on startup.
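This behaviour can be pictured as the choice of the initial read offset: a remembered sincedb offset wins if present, otherwise start_position decides. A simplified sketch (a hypothetical helper, not the plugin's actual code):

```python
import os

def initial_offset(path, start_position, sincedb_offset=None):
    """Where the file input starts reading a file.

    - A remembered sincedb offset always wins, which is why step 2 of
      the solution deletes the sincedb file.
    - Otherwise "beginning" reads the whole file from byte 0, while
      "end" only waits for NEW bytes -- so an unchanged log yields
      nothing at all.
    """
    if sincedb_offset is not None:
        return sincedb_offset
    return 0 if start_position == "beginning" else os.path.getsize(path)
```

Under this model, "end" plus an idle application leaves the offset parked at the file's current size forever, which matches the failure observed above.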

Reference: https://blog.csdn.net/LJFPHP/article/details/89340807
