
Chaining Flume agents to collect logs into HDFS


Typically we collect log data from a web server. The source is usually exec, i.e. the output of a shell command, most often tail -F on a log file.

Multiple agents are chained together: on the web server, the first agent uses an exec source running tail -F, passes events through a channel, and its sink hands them off to the source of another agent.

So the upstream agent's sink is an avro sink (the sending side, i.e. an avro client), configured with the hostname of the machine it should send to.

The downstream agent's source is then an avro source, and its final sink is an hdfs sink that writes to HDFS.

tail -> avro (tail-avro.conf):


# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/log/test.log
a1.sources.r1.channels = c1

# Describe the sink
#Bound to the service address of the downstream machine, not this host; the avro sink is the sending side (an avro client) that pushes events to that host
a1.sinks = k1
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 2



# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1


avro -> hdfs (avro-hdfs.conf): to keep things simple and easy to observe, the sink here prints to the logger instead of writing to HDFS.

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
#The avro source acts as the receiving server and binds to this host
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
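
To actually land the data in HDFS instead of printing it, only the sink section of avro-hdfs.conf would change; a minimal sketch, reusing the same hdfs sink properties shown in the tail-hdfs config further below:

# hypothetical hdfs sink replacing the logger sink in avro-hdfs.conf
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream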

Start-up order: agent #2 (avro-hdfs) has to open its port and start listening before it can receive anything, so start agent #2 first.

flume/bin/flume-ng agent --conf conf --conf-file conf/avro-hdfs.conf --name a1 -Dflume.root.logger=INFO,console

flume/bin/flume-ng agent --conf conf --conf-file conf/tail-avro.conf --name a1
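
Before starting the sender, it can help to confirm that the avro source really is listening on port 4141; a minimal check on the receiving host, assuming netstat is available:

# on the avro-hdfs host: verify the avro source is listening on 4141
netstat -nlpt | grep 4141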


The common scenario is collecting from many servers and funnelling everything to one aggregation server, which then forwards it to HDFS.

n x tail-avro -> one avro-hdfs
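
The receiving agent does not need to change for this fan-in; a minimal sketch of what stays identical on each of the n web servers (values taken from the tail-avro.conf above):

# the same sink target on every web server (tail-avro.conf)
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 4141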




tail -> hdfs (tail-hdfs.conf)


flume/bin/flume-ng agent --conf conf --conf-file conf/tail-hdfs.conf --name a1


Use the tail command to capture the data and sink it to HDFS. First generate some test log data with a simple loop:



while true
do
echo 111111 >> /home/hadoop/log/test.log
sleep 0.5
done
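
To keep the generator running during the test, the loop can be saved to a small script (makelog.sh is just a hypothetical name) and run in the background:

# makelog.sh holds the while-loop above (hypothetical file name)
sh makelog.sh &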

tail -F test.log

When collecting into HDFS, you do not need to create the target directories yourself; Flume creates them.

Check that HDFS is running normally and is not in safe mode:
    hdfs dfsadmin -report
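
If the report shows the cluster is still in safe mode, it can be queried and exited explicitly; a small sketch:

    hdfs dfsadmin -safemode get     # prints whether safe mode is ON or OFF
    hdfs dfsadmin -safemode leave   # force-exit safe mode if it is stuck ON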



You can also check the HDFS web UI at master:50070.
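
The collected output can also be listed straight from the command line, using the hdfs.path configured below:

    hdfs dfs -ls -R /flume/events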



# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

#exec means the source runs a shell command
# Describe/configure the source
a1.sources.r1.type = exec
#tail -F follows by file name (survives rotation); tail -f follows the file's inode
a1.sources.r1.command = tail -F /home/hadoop/log/test.log
a1.sources.r1.channels = c1

# Describe the sink
#sink destination
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
#target directory; Flume substitutes the time escape sequences in the path
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
#prefix for the generated file names
a1.sinks.k1.hdfs.filePrefix = events-

#roll to a new directory every 10 minutes
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute

#time to wait before rolling the file (seconds)
a1.sinks.k1.hdfs.rollInterval = 3

#size threshold for rolling the file (bytes)
a1.sinks.k1.hdfs.rollSize = 500

#number of events written before rolling the file
a1.sinks.k1.hdfs.rollCount = 20

#flush to HDFS every 5 events
a1.sinks.k1.hdfs.batchSize = 5

#use local time to resolve the escape sequences in the path
a1.sinks.k1.hdfs.useLocalTimeStamp = true

#file type of the generated files; the default is SequenceFile, DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
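
After the agent has run for a while and a few files have rolled, their content can be spot-checked from the shell; a sketch (the path comes from hdfs.path above, the glob over the rolled file names is an assumption based on filePrefix):

    hdfs dfs -cat "/flume/events/*/*/events-*" | head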

