Table of Contents

- 6. Installing Hadoop 3.1.2
  - (1) Edit the configuration file core-site.xml
  - (2) Edit the configuration file hdfs-site.xml
  - (3) Edit the configuration file mapred-site.xml
  - (4) Edit the configuration file capacity-scheduler.xml
  - (5) Edit the configuration file yarn-site.xml
  - (6) Edit the start-dfs.sh and stop-dfs.sh scripts
  - (7) Edit the start-yarn.sh and stop-yarn.sh scripts
  - (8) Edit the configuration file workers
Hadoop (1), Lab 1 - Configuring a Hadoop System on CentOS 7: Configuring CentOS and Downloading the Installation Packages
Hadoop (2), Lab 1 - Configuring a Hadoop System on CentOS 7: Installing Zookeeper 3.4.14
Hadoop (3), Lab 1 - Configuring a Hadoop System on CentOS 7: Installing Hadoop 3.1.2
Hadoop (4), Lab 1 - Configuring a Hadoop System on CentOS 7: Starting Hadoop
6. Installing Hadoop 3.1.2
[On c0 only]
(1) Edit the configuration file core-site.xml
Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/core-site.xml file so that its content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/core-site.xml
If gedit prints a warning, you can ignore it; that is normal gedit behavior, and the content is still modified correctly. Everything is OK.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mshkcluster</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation.</description>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>c0:2181,c1:2181,c2:2181,c3:2181</value>
    <description>A comma-separated list of ZooKeeper server addresses, used by the ZKFailoverController for automatic failover.</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/_data/hadoop-3.1.2</value>
    <description>Base directory for Hadoop data.</description>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
    <description>A list of fencing methods to use for service fencing. May contain built-in methods (e.g. shell and sshfence) or user-defined ones.</description>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
    <description>The SSH private key file used by the built-in sshfence fencer.</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>The size of the read/write buffer used in SequenceFiles.</description>
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>100</value>
    <description>The number of times a client will retry to establish a server connection.</description>
  </property>
  <property>
    <name>ipc.client.connect.retry.interval</name>
    <value>10000</value>
    <description>The number of milliseconds a client will wait before retrying to establish a server connection.</description>
  </property>
</configuration>
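Before moving on, it is worth a quick well-formedness check on the edited file. A minimal sketch, assuming xmllint is available (it ships with the libxml2 package on CentOS 7):

xmllint --noout /home/work/_app/hadoop-3.1.2/etc/hadoop/core-site.xml && echo "core-site.xml is well-formed"

The same one-liner works for each of the XML files edited below.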
(2) Edit the configuration file hdfs-site.xml
Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/hdfs-site.xml file and save it; the content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mshkcluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mshkcluster</name>
    <value>c0,c1</value>
    <description>A comma-separated list of NameNodes for the given nameservice.</description>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mshkcluster.c0</name>
    <value>c0:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mshkcluster.c1</name>
    <value>c1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mshkcluster.c0</name>
    <value>c0:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mshkcluster.c1</name>
    <value>c1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://c0:8485;c1:8485/mshkcluster</value>
    <description>The directory on shared storage between the NameNodes in an HA cluster. It is written by the active NameNode and read by the standby to keep the namespaces synchronized.</description>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mshkcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>The Java class that DFS clients use to determine which NameNode is currently Active and serving client requests.</description>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
    <description>Whether automatic failover is enabled.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged.</description>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>${hadoop.tmp.dir}/journalnode</value>
    <description>Where the JournalNode stores its data on local disk.</description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/namenode</value>
    <description>The NameNode storage path.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/datanode</value>
    <description>The DataNode storage path.</description>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
    <description>An HDFS block size of 256MB for large file systems.</description>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
    <description>The number of NameNode server threads.</description>
  </property>
</configuration>
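Once HDFS is up (covered in part four of this series), the hdfs getconf tool can confirm that these values were picked up. A small sketch, assuming the Hadoop bin directory is on PATH:

hdfs getconf -confKey dfs.blocksize    # should print 268435456, i.e. 256 * 1024 * 1024 bytes
hdfs getconf -namenodes                # should list c0 and c1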
(3) Edit the configuration file mapred-site.xml
Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/mapred-site.xml file and save it; the content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Use YARN as the MapReduce framework.</description>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
    <description>Physical memory limit for each Map task.</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
    <description>Physical memory limit for each Reduce task.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10020</value>
    <description>MapReduce JobHistory server IPC host:port.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>
    <description>MapReduce JobHistory server web UI host:port.</description>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/home/work/_app/hadoop-3.1.2/etc/hadoop,/home/work/_app/hadoop-3.1.2/share/hadoop/common/*,/home/work/_app/hadoop-3.1.2/share/hadoop/common/lib/*,/home/work/_app/hadoop-3.1.2/share/hadoop/hdfs/*,/home/work/_app/hadoop-3.1.2/share/hadoop/hdfs/lib/*,/home/work/_app/hadoop-3.1.2/share/hadoop/mapreduce/*,/home/work/_app/hadoop-3.1.2/share/hadoop/mapreduce/lib/*,/home/work/_app/hadoop-3.1.2/share/hadoop/yarn/*,/home/work/_app/hadoop-3.1.2/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
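Note that the mapreduce.application.classpath value above hard-codes the install prefix. If the installation ever moves, the hadoop classpath command prints the list this value needs to cover. A sketch, assuming HADOOP_HOME and PATH were set as in part one of this series:

hadoop classpath    # prints the runtime classpath; compare it with the value configured above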
(4) Edit the configuration file capacity-scheduler.xml
capacity-scheduler.xml manages Hadoop's scheduler queues. Here, in addition to the default queue, we define three queues: test, dev, and prod.
Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/capacity-scheduler.xml file and save it; the content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/capacity-scheduler.xml
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
    <description>Maximum number of applications that can be pending and running in the system at the same time.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.5</value>
    <description>Maximum percentage of resources in the cluster that can be used to run application masters; controls the number of concurrent active applications. The limit for each queue is directly proportional to its queue capacity and user limits.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    <description>The ResourceCalculator implementation to be used to compare Resources in the scheduler. The default, i.e. DefaultResourceCalculator, only uses Memory, while DominantResourceCalculator uses dominant-resource to compare multi-dimensional resources such as Memory, CPU etc.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,dev,test,prod</value>
    <description>The CapacityScheduler has a predefined queue called root. All queues in the system are children of the root queue. Further queues can be set up by configuring yarn.scheduler.capacity.root.queues with a comma-separated list of child queues.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>20</value>
    <description>The sum of capacities for all queues at each level must equal 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
    <description>A multiple of the queue capacity that can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity, regardless of how idle the cluster is.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>70</value>
    <description>Maximum queue capacity, expressed as a float percentage (%). This limits the elasticity for applications in the queue. The default is -1, which disables it.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
    <description>The state of the queue. Can be one of RUNNING or STOPPED. If a queue is in the STOPPED state, new applications cannot be submitted to it or to any of its child queues. Thus, if the root queue is STOPPED, no applications can be submitted to the entire cluster.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.test.capacity</name>
    <value>10</value>
    <description>The sum of capacities for all queues at each level must equal 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.test.user-limit-factor</name>
    <value>1</value>
    <description>A multiple of the queue capacity that can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity, regardless of how idle the cluster is.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.test.maximum-capacity</name>
    <value>20</value>
    <description>Maximum queue capacity, expressed as a float percentage (%). This limits the elasticity for applications in the queue. The default is -1, which disables it.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>40</value>
    <description>The sum of capacities for all queues at each level must equal 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.user-limit-factor</name>
    <value>1</value>
    <description>A multiple of the queue capacity that can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity, regardless of how idle the cluster is.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.maximum-capacity</name>
    <value>70</value>
    <description>Maximum queue capacity, expressed as a float percentage (%). This limits the elasticity for applications in the queue. The default is -1, which disables it.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.state</name>
    <value>RUNNING</value>
    <description>The state of the queue. Can be one of RUNNING or STOPPED. If a queue is in the STOPPED state, new applications cannot be submitted to it or to any of its child queues. Thus, if the root queue is STOPPED, no applications can be submitted to the entire cluster.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
    <description>The sum of capacities for all queues at each level must equal 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.user-limit-factor</name>
    <value>1</value>
    <description>A multiple of the queue capacity that can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity, regardless of how idle the cluster is.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>40</value>
    <description>Maximum queue capacity, expressed as a float percentage (%). This limits the elasticity for applications in the queue. The default is -1, which disables it.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
    <description>The ACL of who can submit jobs to the default queue.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
    <value>*</value>
    <description>The ACL of who can administer jobs on the default queue.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.acl_application_max_priority</name>
    <value>*</value>
    <description>The ACL of who can submit applications with configured priority. For e.g, [user={name} group={name} max_priority={priority} default_priority={priority}]</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-application-lifetime</name>
    <value>-1</value>
    <description>Maximum lifetime, in seconds, of an application submitted to the queue. Any value less than or equal to zero is considered disabled. This is a hard time limit for all applications in this queue. If a positive value is configured, any application submitted to this queue is killed once it exceeds the configured lifetime. Users can also specify a per-application lifetime in the application submission context, but that lifetime is overridden if it exceeds the queue's maximum lifetime.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.default-application-lifetime</name>
    <value>-1</value>
    <description>Default lifetime, in seconds, of an application submitted to the queue. Any value less than or equal to zero is considered disabled. This value is used when the user does not submit the application with a lifetime value.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
    <description>Number of missed scheduling opportunities after which the CapacityScheduler attempts to schedule rack-local containers. When setting this parameter, the size of the cluster should be taken into account. We use 40 as the default value, which is approximately the number of nodes in one rack. Note, if this value is -1, the locality constraint in the container request will be ignored, which disables the delay scheduling.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.rack-locality-additional-delay</name>
    <value>-1</value>
    <description>Number of additional missed scheduling opportunities over the node-locality-delay ones, after which the CapacityScheduler attempts to schedule off-switch containers, instead of rack-local ones. Example: with node-locality-delay=40 and rack-locality-delay=20, the scheduler will attempt rack-local assignments after 40 missed opportunities, and off-switch assignments after 40+20=60 missed opportunities. When setting this parameter, the size of the cluster should be taken into account. We use -1 as the default value, which disables this feature. In this case, the number of missed opportunities for assigning off-switch containers is calculated based on the number of containers and unique locations specified in the resource request, as well as the size of the cluster.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value></value>
    <description>A list of mappings that will be used to assign jobs to queues. The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*. Typically this list will be used to map users to queues, for example, u:%user:%user maps all users to queues with the same name as the user.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
    <description>If a queue mapping is present, will it override the value specified by the user? This can be used by administrators to place jobs in queues that are different than the one specified by the user. The default is false.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments</name>
    <value>1</value>
    <description>Controls the number of OFF_SWITCH assignments allowed during a node's heartbeat. Increasing this value can improve scheduling rate for OFF_SWITCH containers. Lower values reduce "clumping" of applications on particular nodes. The default is 1. Legal values are 1-MAX_INT. This config is refreshable.</description>
  </property>
  <property>
    <name>yarn.scheduler.capacity.application.fail-fast</name>
    <value>false</value>
    <description>Whether RM should fail during recovery if previous applications' queue is no longer valid.</description>
  </property>
</configuration>
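Once YARN is running (part four of this series), a job can be steered into one of these queues with the standard mapreduce.job.queuename property. A hedged example using the bundled example jar (the jar path below matches this installation but is an assumption; adjust the version suffix if needed):

yarn jar /home/work/_app/hadoop-3.1.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar pi -Dmapreduce.job.queuename=prod 2 10

Jobs submitted without the property land in the default queue, which is given capacity 20 above.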
(5) Edit the configuration file yarn-site.xml
Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/yarn-site.xml file and save it; the content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
    <description>Enable the RM to recover state after starting. If true, yarn.resourcemanager.store.class must be specified.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    <description>The class to use as the persistent store.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>c0:2181,c1:2181</value>
    <description>Address of the ZooKeeper service; separate multiple addresses with commas.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
    <description>Enable RM high availability. When enabled, (1) the RM starts in standby mode by default and transitions to active mode when prompted; (2) the nodes in the RM ensemble are listed in yarn.resourcemanager.ha.rm-ids; (3) the id of each RM either comes from yarn.resourcemanager.ha.id, if explicitly specified, or can be determined by matching yarn.resourcemanager.address against the local address.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
    <description>The list of RM nodes in the cluster when HA is enabled. At least two are required.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>c0:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>c1:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>mshk-yarn-ha</value>
    <description>Cluster HA id, used to create the node on ZooKeeper and to distinguish different Hadoop clusters that share the same ZooKeeper ensemble.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>c0</value>
    <description>Hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>c1</value>
    <description>Hostname.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Reducers fetch data via mapreduce_shuffle.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
    <description>Memory available on each node, in MB.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>CPU vcores available on each node.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
    <description>Minimum memory a single task can request; the default is 1024 MB.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>1024</value>
    <description>Maximum memory a single task can request; the default is 8192 MB.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>Minimum of 1 vcore per allocation, which is also the default.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>At most 2 vcores can be allocated per allocation.</description>
  </property>
  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Whether to enable log aggregation.</description>
  </property>
  <property>
    <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
    <value>-1</value>
    <description>Defines how often the NM wakes up to upload log files. The default is -1: logs are uploaded only when the application finishes. Setting a positive value uploads logs periodically while the application is running. The minimum rolling interval that may be set is 3600 seconds.</description>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://c0:19888/jobhistory/logs</value>
    <description>Address of the log server.</description>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
    <description>How long to keep aggregated logs before deleting them, in seconds. -1 disables deletion.</description>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/work/_data/hadoop-3.1.2/yarn/container-logs/</value>
    <description>Local path where the NodeManager stores container logs.</description>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
    <description>Remote (HDFS) directory to which the NodeManager aggregates container logs.</description>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>200</value>
  </property>
</configuration>
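After YARN starts (covered in part four), the RM HA pair configured above can be inspected from the command line. A quick sketch, assuming the cluster is already up:

yarn rmadmin -getServiceState rm1    # prints "active" or "standby"
yarn rmadmin -getServiceState rm2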
(6) Edit the start-dfs.sh and stop-dfs.sh scripts
Edit the /home/work/_app/hadoop-3.1.2/sbin/start-dfs.sh and /home/work/_app/hadoop-3.1.2/sbin/stop-dfs.sh files.
gedit /home/work/_app/hadoop-3.1.2/sbin/start-dfs.sh
gedit /home/work/_app/hadoop-3.1.2/sbin/stop-dfs.sh
Directly below the opening line #!/usr/bin/env bash, add the following. (Hadoop 3 refuses to launch HDFS daemons as root unless these *_USER variables are defined.)
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
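If you prefer a non-interactive edit over gedit, the same six lines can be inserted after the shebang with sed. A sketch, assuming GNU sed (the default on CentOS 7), where \n in the appended text produces line breaks:

for f in start-dfs.sh stop-dfs.sh; do
  sed -i '1a HDFS_DATANODE_USER=root\nHDFS_DATANODE_SECURE_USER=hdfs\nHDFS_ZKFC_USER=root\nHDFS_JOURNALNODE_USER=root\nHDFS_NAMENODE_USER=root\nHDFS_SECONDARYNAMENODE_USER=root' /home/work/_app/hadoop-3.1.2/sbin/$f
done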
(7) Edit the start-yarn.sh and stop-yarn.sh scripts
Edit the /home/work/_app/hadoop-3.1.2/sbin/start-yarn.sh and /home/work/_app/hadoop-3.1.2/sbin/stop-yarn.sh files.
gedit /home/work/_app/hadoop-3.1.2/sbin/start-yarn.sh
gedit /home/work/_app/hadoop-3.1.2/sbin/stop-yarn.sh
Directly below the opening line #!/usr/bin/env bash, add the following:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
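As an alternative to editing each sbin script, these *_USER variables can be exported once in hadoop-env.sh, where the startup scripts pick them up. A hedged sketch of the equivalent YARN settings (this guide's later steps assume the script edits above, not this variant):

echo 'export YARN_RESOURCEMANAGER_USER=root' >> /home/work/_app/hadoop-3.1.2/etc/hadoop/hadoop-env.sh
echo 'export YARN_NODEMANAGER_USER=root' >> /home/work/_app/hadoop-3.1.2/etc/hadoop/hadoop-env.sh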
(8) Edit the configuration file workers
This sets the worker (slave) node list; without it, the cluster does not know which nodes are workers. Edit the /home/work/_app/hadoop-3.1.2/etc/hadoop/workers file and save it; the content is as follows:
gedit /home/work/_app/hadoop-3.1.2/etc/hadoop/workers
The file contains localhost by default; simply overwrite it with the lines below.
c2
c3
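A quick check that c0 can actually reach both workers over the passwordless SSH set up earlier in this series:

for h in c2 c3; do ssh "$h" hostname; done    # should print c2 and c3 with no password prompts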