/bigdata/flume-1.6.0/bin/flume-ng agent -n a1 -c /bigdata/flume-1.6.0/conf -f /bigdata/flume-1.6.0/conf/bdvisual.conf -Dflume.root.logger=INFO,console
The paths must be written as absolute paths, and you also need to append
-Dflume.root.logger=INFO,console
so that log output goes to the console.
If multiple Flume agents are cascaded, e.g. machines 1 and 2 collect logs and forward them to machine 3:
on machines 1 and 2, the source is usually exec with tail -F xxx
on machines 1 and 2, the sink is an avro sink
on machine 3, the source is an avro source
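A minimal sketch of that cascade, split into the two config files it would actually live in. The hostname node3, port 4545, and the tailed file path are placeholders, not values from this setup:

```properties
# --- agent config on machines 1 and 2 ---
# exec source tails a local file; avro sink forwards events to machine 3
a1.sources = r1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /path/to/local.log
a1.sources.r1.channels = c1
a1.channels = c1
a1.channels.c1.type = memory
a1.sinks = k1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = node3
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1

# --- agent config on machine 3 ---
# avro source listens for events sent by machines 1 and 2
a3.sources = r1
a3.sources.r1.type = avro
a3.sources.r1.bind = 0.0.0.0
a3.sources.r1.port = 4545
a3.sources.r1.channels = c1
a3.channels = c1
a3.channels.c1.type = memory
```

The avro sink port on machines 1 and 2 must match the avro source port on machine 3.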
Configuration file:
a1.sources = r1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/zl/bdproject/logs
a1.sources.r1.channels = c1

a1.channels = c1
a1.channels.c1.type = memory
a1.channels.c1.capacity = 100
a1.channels.c1.transactionCapacity = 100

#a1.sinks = k1
#a1.sinks.k1.type = logger
#a1.sinks.k1.channel = c1

# don't forget this line
a1.sinks = k1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = weblogs
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 10
a1.sinks.k1.channel = c1
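To check that events actually reach Kafka, you can attach a console consumer to the topic. A sketch assuming a local broker and ZooKeeper on the default ports (matching the kafka_2.11-1.1.0 distribution and the topic name weblogs from the config above):

```shell
# create the topic first (Kafka 1.1.0 creates topics via ZooKeeper)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic weblogs

# watch events arriving from Flume
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic weblogs --from-beginning
```

If lines you append to /home/zl/bdproject/logs show up in the consumer, the Flume-to-Kafka path works.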
Kafka has many versions and changes fast, which causes version-compatibility problems. If the Flume-to-Kafka integration breaks, read the logs carefully to troubleshoot — it is often Kafka itself causing the trouble!
This setup uses Flume 1.6.0 and kafka_2.11-1.1.0.
Also: the config below, taken from the Flume 1.9.0 official docs, is wrong here — it throws org.apache.flume.conf.ConfigurationException: brokerList must contain at least one Kafka broker. And it is straight from the official site?!
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = mytopic
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
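The likely cause: the KafkaSink property names were renamed in Flume 1.7, so the kafka.*-prefixed keys from the 1.9.0 docs are not recognized by Flume 1.6.0, which still looks for brokerList. A sketch of the mapping between the two generations of property names (to the best of my reading of the two user guides):

```properties
# Flume 1.6.0 (old style)        Flume 1.7+/1.9.0 (new style)
# a1.sinks.k1.brokerList     ->  a1.sinks.k1.kafka.bootstrap.servers
# a1.sinks.k1.topic          ->  a1.sinks.k1.kafka.topic
# a1.sinks.k1.batchSize      ->  a1.sinks.k1.kafka.flumeBatchSize
# a1.sinks.k1.requiredAcks   ->  a1.sinks.k1.kafka.producer.acks
```

In short: on Flume 1.6.0, use the old-style names; only move to the kafka.* names after upgrading Flume.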