Log Archiving and Data Mining
http://netkiller.github.io/journal/log.html
Copyright © 2013, 2014 Netkiller. All rights reserved.
Copyright notice
Please contact the author before reprinting, and always retain the original source, the author information, and this notice.
2014-12-16
2013-03-19 First edition
2014-12-16 Second edition
- 1. What is log archiving
- 2. Why archive logs
- 3. When to archive logs
- 4. Where to store archived logs
- 5. Who does the archiving
- 6. How to archive logs
- 6.1. Log format conversion
- 6.1.1. Loading logs into a database
- 6.1.2. Apache Pipe
- 6.1.3. Log format
- 6.1.4. Importing logs into MongoDB
- 6.2. A log center solution
- 6.2.1. Installing the software
- 6.2.2. The node push side
- 6.2.3. The log collection side
- 6.2.4. Log monitoring
1. What is log archiving
Archiving is the process of taking log files that have been fully processed and are worth preserving, organizing them systematically, and handing them over to a log server for storage.
2. Why archive logs
- To pull up historical logs for queries at any time.
- To mine the logs for valuable data.
- To inspect the working state of applications.
3. When to archive logs
Log archiving should be a formal company policy (an "archiving policy"), and it should be taken into account from the very start of system construction. If your organization has no such practice or policy, I suggest you put one in place right after reading this article.
4. Where to store archived logs
A simple approach is a single log server plus backups.
As the volume of logs grows, a distributed file system becomes necessary, possibly even remote off-site disaster recovery.
5. Who does the archiving
My answer: automate the log archiving, and have people verify it by inspection or sampling.
6. How to archive logs
There are several ways to gather the logs from every server in one place (a cron-driven rsync sketch follows the list):
- Scheduled FTP download: suitable for small files and low log volume, pulled down to a designated server on a schedule; the drawbacks are repeated transfers and poor timeliness.
- rsyslog and similar programs: fairly universal, but inconvenient to extend.
- Scheduled rsync: suited to synchronizing large files; better than FTP, but still not real-time.
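As a concrete illustration of the rsync approach, here is a minimal sketch; the host name, paths, and schedule are hypothetical:

$ cat /etc/cron.d/log-archive
# Pull the web server's nginx logs to the archive host every night at 01:00.
# rsync only transfers what changed, avoiding FTP's repeated full downloads.
0 1 * * * root rsync -az webserver1:/var/log/nginx/ /archive/webserver1/nginx/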
6.1. Log format conversion
First, let me introduce a simple solution.
I wrote a program in D that decomposes web log lines with a regular expression and passes the result through a pipe to a database handler.
6.1.1. Loading logs into a database
The web server log is processed through a pipe and then written into the database.
Source code of the processing program:
$ vim match.d

import std.regex;
import std.stdio;
import std.string;
import std.array;

void main()
{
    // nginx
    //auto r = regex(`^(\S+) (\S+) (\S+) \[(.+)\] "([^"]+)" ([0-9]{3}) ([0-9]+) "([^"]+)" "([^"]+)" "([^"]+)"`);
    // apache2
    auto r = regex(`^(\S+) (\S+) (\S+) \[(.+)\] "([^"]+)" ([0-9]{3}) ([0-9]+) "([^"]+)" "([^"]+)"`);

    foreach(line; stdin.byLine)
    {
        foreach(m; match(line, r)){
            //writeln(m.hit);
            auto c = m.captures;
            c.popFront();
            //writeln(c);
            auto value = join(c, "\",\"");
            auto sql = format("insert into log(remote_addr,unknow,remote_user,time_local,request,status,body_bytes_sent,http_referer,http_user_agent,http_x_forwarded_for) value(\"%s\");", value );
            writeln(sql);
        }
    }
}
Compile:
$ dmd match.d
$ strip match
$ ls
match  match.d  match.o
Basic usage:
$ cat access.log | ./match
Advanced usage:
$ cat access.log | ./match | mysql -hlocalhost -ulog -p123456 logging
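The generated INSERT statements assume a table named log already exists in the logging database. The article does not show its schema; the following is a minimal sketch with assumed column types:

$ mysql -hlocalhost -ulog -p123456 logging <<'EOF'
-- Types are assumptions; the program quotes every captured field as a string.
CREATE TABLE log (
    remote_addr          VARCHAR(45),   -- client IP
    unknow               VARCHAR(64),   -- the ident field, usually "-"
    remote_user          VARCHAR(64),
    time_local           VARCHAR(32),
    request              VARCHAR(2048),
    status               VARCHAR(3),
    body_bytes_sent      VARCHAR(16),
    http_referer         VARCHAR(2048),
    http_user_agent      VARCHAR(512),
    http_x_forwarded_for VARCHAR(256)
);
EOF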
To process logs in real time, first create a named pipe and have the log written into the pipe (a concrete sketch follows):
cat <pipe name> | ./match | mysql -hlocalhost -ulog -p123456 logging
This achieves real-time log insertion.
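A concrete sketch using a named pipe; the pipe path and log path are examples:

$ mkfifo /tmp/access.pipe
# Feed the pipe from the live log in the background...
$ tail -f /var/log/apache2/access.log > /tmp/access.pipe &
# ...and consume it: every new line becomes an INSERT executed immediately.
$ cat /tmp/access.pipe | ./match | mysql -hlocalhost -ulog -p123456 logging

Alternatively, the web server can write its access log to the pipe directly, so no intermediate file is needed.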
Hint
With minor changes, the program above can also produce an HBase or Hypertable version; a sketch of the idea follows.
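For HBase, for example, the format string could emit standard HBase shell put commands instead of SQL, to be piped into hbase shell rather than mysql. The table name log, the column family cf, and the row key scheme below are all hypothetical:

put 'log', '192.168.6.30-20130321161100', 'cf:request', 'GET / HTTP/1.1'
put 'log', '192.168.6.30-20130321161100', 'cf:status', '304'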
6.1.2. Apache Pipe
Apache can filter its log through a pipe: CustomLog "| /srv/match >> /tmp/access.log" combined
<VirtualHost *:80>
        ServerAdmin [email protected]

        #DocumentRoot /var/www
        DocumentRoot /www
        <Directory />
                Options FollowSymLinks
                AllowOverride None
        </Directory>
        #<Directory /var/www/>
        <Directory /www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        #CustomLog ${APACHE_LOG_DIR}/access.log combined
        CustomLog "| /srv/match >> /tmp/access.log" combined

        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
</VirtualHost>
The log after conversion through the pipe:
$ tail /tmp/access.log
insert into log(remote_addr,unknow,remote_user,time_local,request,status,body_bytes_sent,http_referer,http_user_agent,http_x_forwarded_for) value("192.168.6.30","-","-","21/Mar/2013:16:11:00 +0800","GET / HTTP/1.1","304","208","-","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22");
insert into log(remote_addr,unknow,remote_user,time_local,request,status,body_bytes_sent,http_referer,http_user_agent,http_x_forwarded_for) value("192.168.6.30","-","-","21/Mar/2013:16:11:00 +0800","GET /favicon.ico HTTP/1.1","404","501","-","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22");
insert into log(remote_addr,unknow,remote_user,time_local,request,status,body_bytes_sent,http_referer,http_user_agent,http_x_forwarded_for) value("192.168.6.30","-","-","21/Mar/2013:16:11:00 +0800","GET / HTTP/1.1","304","208","-","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.172 Safari/537.22");
6.1.3. Log format
By defining a LogFormat, the server can emit SQL-formatted log lines directly.
Apache
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combinedLogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combinedLogFormat "%h %l %u %t \"%r\" %>s %O" commonLogFormat "%{Referer}i -> %U" refererLogFormat "%{User-agent}i" agent
Nginx
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
However, SQL-style logs are awkward for system administrators who analyze them with grep, awk, sed, sort, and uniq, so I still recommend the regular-expression decomposition approach.
To produce a regular, structured log format in Apache:
LogFormat \
        "\"%h\",%{%Y%m%d%H%M%S}t,%>s,\"%b\",\"%{Content-Type}o\", \
        \"%U\",\"%{Referer}i\",\"%{User-Agent}i\""
Import the access.log file into MySQL:
LOAD DATA INFILE '/local/access_log' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
6.1.4. Importing logs into MongoDB
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum install mongodb
The log-processing program in D:
import std.regex;
//import std.range;
import std.stdio;
import std.string;
import std.array;

void main()
{
    // nginx
    auto r = regex(`^(\S+) (\S+) (\S+) \[(.+)\] "([^"]+)" ([0-9]{3}) ([0-9]+) "([^"]+)" "([^"]+)" "([^"]+)"`);
    // apache2
    //auto r = regex(`^(\S+) (\S+) (\S+) \[(.+)\] "([^"]+)" ([0-9]{3}) ([0-9]+) "([^"]+)" "([^"]+)"`);

    foreach(line; stdin.byLine)
    {
        //writeln(line);
        //auto m = match(line, r);
        foreach(m; match(line, r)){
            //writeln(m.hit);
            auto c = m.captures;
            c.popFront();
            //writeln(c);
            /* SQL
            auto value = join(c, "\",\"");
            auto sql = format("insert into log(remote_addr,unknow,remote_user,time_local,request,status,body_bytes_sent,http_referer,http_user_agent,http_x_forwarded_for) value(\"%s\");", value );
            writeln(sql);
            */
            // MongoDB
            string bson = format("db.logging.access.save({ 'remote_addr': '%s', 'remote_user': '%s', 'time_local': '%s', 'request': '%s', 'status': '%s', 'body_bytes_sent':'%s', 'http_referer': '%s', 'http_user_agent': '%s', 'http_x_forwarded_for': '%s' })",
                c[0],c[2],c[3],c[4],c[5],c[6],c[7],c[8],c[9]
            );
            writeln(bson);
        }
    }
}
Compile the log-processing program:
dmd mlog.d
Usage:
cat /var/log/nginx/access.log | mlog | mongo 192.169.0.5/logging -uxxx -pxxx
Process logs that have already been compressed (rotated):
# zcat /var/log/nginx/*.access.log-*.gz | /srv/mlog | mongo 192.168.6.1/logging -uneo -pchen
Collect logs in real time:
tail -f /var/log/nginx/access.log | mlog | mongo 192.169.0.5/logging -uxxx -pxxx
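Once the data is in MongoDB it can be queried from the mongo shell; a small sketch reusing the connection details above:

$ mongo 192.169.0.5/logging -uxxx -pxxx
> db.logging.access.count()
> db.logging.access.find({ 'status': '404' })

The first command counts all archived requests; the second lists those that returned 404. Note that the D program stores every field as a string.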
6.2. A log center solution
The solution above is simple, but it relies too heavily on the system administrator: many servers need to be configured, and every application produces logs in a different format, so it gets complicated. And if a failure occurs along the way, part of the logs is lost.
So I went back to square one: every server keeps its own logs and synchronizes them to the log server on a schedule, which takes care of archiving. For remote collection, logs are pushed over UDP and aggregated at the log center, which covers real-time monitoring, capture, and other needs with strict timeliness requirements.
To this end I spent two or three days writing a small piece of software; it can be downloaded from https://github.com/netkiller/logging
This is not the optimal solution, it just fits my scenario well, and the development took only two or three days. Later I plan to extend it further, adding log transport over a message queue.
6.2.1. Installing the software
$ git clone https://github.com/netkiller/logging.git
$ cd logging
$ python3 setup.py sdist
$ python3 setup.py install
6.2.2. The node push side
Install the startup script.
CentOS
# cp logging/init.d/ulog /etc/init.d
Ubuntu
$ sudo cp init.d/ulog /etc/init.d/
$ service ulog
Usage: /etc/init.d/ulog {start|stop|status|restart}
Configure the script by opening the /etc/init.d/ulog file.
Set the IP address of the log center:
HOST=xxx.xxx.xxx.xxx
Then configure the ports and which logs to collect:
done << EOF
1213 /var/log/nginx/access.log
1214 /tmp/test.log
1215 /tmp/$(date +"%Y-%m-%d.%H:%M:%S").log
EOF
The format is:
Port | Logfile
------------------------------
1213 /var/log/nginx/access.log
1214 /tmp/test.log
1215 /tmp/$(date +"%Y-%m-%d.%H:%M:%S").log
1213 is the destination port number (the log center's port), followed by the log file you want to monitor. If the log rolls into a new file each day, write it as /tmp/$(date +"%Y-%m-%d.%H:%M:%S").log
Hint
When a new log file is created every day, ulog has to be restarted on a schedule with /etc/init.d/ulog restart; a cron sketch follows.
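A crontab sketch (the schedule is an example, matching a log that rolls over at midnight):

0 0 * * * /etc/init.d/ulog restart

Once the configuration is complete, start the push program: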
# service ulog start
Check the status:
$ service ulog status
13865 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/rlog -d -H 127.0.0.1 -p 1213 /var/log/nginx/access.log
Stop pushing:
# service ulog stop
6.2.3. The log collection side
# cp logging/init.d/ucollection /etc/init.d
# /etc/init.d/ucollection
Usage: /etc/init.d/ucollection {start|stop|status|restart}
Configure the receiving ports and target files: open /etc/init.d/ucollection and find the following block.
done << EOF
1213 /tmp/nginx/access.log
1214 /tmp/test/test.log
1215 /tmp/app/$(date +"%Y-%m-%d.%H:%M:%S").log
1216 /tmp/db/$(date +"%Y-%m-%d")/mysql.log
1217 /tmp/cache/$(date +"%Y")/$(date +"%m")/$(date +"%d")/cache.log
EOF
The format is as follows; the entry below means that data received on port 1213 is saved to the file /tmp/nginx/access.log:
Port | Logfile
1213 /tmp/nginx/access.log
To split logs by date, use an entry such as:
1217 /tmp/cache/$(date +"%Y")/$(date +"%m")/$(date +"%d")/cache.log
The entry above produces log files in directories like these:
$ find /tmp/cache/
/tmp/cache/
/tmp/cache/2014
/tmp/cache/2014/12
/tmp/cache/2014/12/16
/tmp/cache/2014/12/16/cache.log
Hint
Likewise, when logs are split by date, the collection program must be restarted. Start the collection side:
# service ucollection start
Stop the program:
# service ucollection stop
Check the status:
$ init.d/ucollection status
12429 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/collection -d -p 1213 -l /tmp/nginx/access.log
12432 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/collection -d -p 1214 -l /tmp/test/test.log
12435 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/collection -d -p 1215 -l /tmp/app/2014-12-16.09:55:15.log
12438 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/collection -d -p 1216 -l /tmp/db/2014-12-16/mysql.log
12441 pts/16   S      0:00 /usr/bin/python3 /usr/local/bin/collection -d -p 1217 -l /tmp/cache/2014/12/16/cache.log
6.2.4. Log monitoring
Monitor the data arriving on port 1213:
$ collection -p 1213
192.168.6.20 - - [16/Dec/2014:15:06:23 +0800] "GET /journal/log.html HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"
192.168.6.20 - - [16/Dec/2014:15:06:23 +0800] "GET /journal/docbook.css HTTP/1.1" 304 0 "http://192.168.6.2/journal/log.html" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"
192.168.6.20 - - [16/Dec/2014:15:06:23 +0800] "GET /journal/journal.css HTTP/1.1" 304 0 "http://192.168.6.2/journal/log.html" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"
192.168.6.20 - - [16/Dec/2014:15:06:23 +0800] "GET /images/by-nc-sa.png HTTP/1.1" 304 0 "http://192.168.6.2/journal/log.html" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"
192.168.6.20 - - [16/Dec/2014:15:06:23 +0800] "GET /js/q.js HTTP/1.1" 304 0 "http://192.168.6.2/journal/log.html" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36"
Once started, the latest log entries are streamed over in real time.