
hadoop eclipse 3 (repost)

Posted: 2013-02-25 21:52:42
May 3, 2012
(Repost) Compiling the hadoop cdh3u3 eclipse plugin
1. Build environment

OS: Debian 6 amd64, with the Java build tools ant and maven2 installed.

hadoop: hadoop-0.20.2-cdh3u3.tar.gz

eclipse: eclipse-java-indigo-SR2-win32.zip
2. Building hadoop

Unpack the source tarball hadoop-0.20.2-cdh3u3.tar.gz, cd into it, and run ant; it downloads the dependencies and builds automatically.
3. Building the eclipse plugin

Unpack eclipse.

In the hadoop source tree, cd into src/contrib/eclipse-plugin and run:

ant -Declipse.home=/path/to/unpacked/eclipse/ -Dversion=0.20.2-cdh3u3 jar
4. Testing

The build leaves hadoop-eclipse-plugin-0.20.2-cdh3u3.jar in build/contrib/eclipse-plugin of the hadoop source tree. Copy it into eclipse's plugins directory and start eclipse. Startup then fails with:

An internal error occurred during: "Connecting to DFS localhost".

The eclipse error log shows:

java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/guava/common/collect/LinkedListMultimap

plus a second error:

java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException

That is, eclipse cannot find the guava and jackson classes.
5. Fixing the errors

First, copy guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar out of the lib directory of the hadoop source tree.
5.1 Method one

Unpack the bytecode (the org directory) from guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar into the classes directory inside hadoop-eclipse-plugin-0.20.2-cdh3u3.jar.
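The repacking in method one can be sketched with stand-in files. Every name below (dep.jar, org/demo/Foo.class, plugin.jar) is a placeholder so the commands run anywhere with python3; in practice dep.jar is guava-r09-jarjar.jar or jackson-mapper-asl-1.5.2.jar and plugin.jar is hadoop-eclipse-plugin-0.20.2-cdh3u3.jar.

```shell
# Self-contained sketch of Method 1; all file names are stand-ins
# for the real jars (plugin jar, guava, jackson).
workdir=$(mktemp -d)
cd "$workdir"
# Stand-in dependency jar containing an org/ bytecode tree.
mkdir -p dep/org/demo
touch dep/org/demo/Foo.class
(cd dep && python3 -m zipfile -c ../dep.jar org)
# The actual fix: unpack the dependency's classes into the plugin's
# classes/ directory, then rebuild the plugin jar.
mkdir -p plugin/classes
python3 -m zipfile -e dep.jar plugin/classes
(cd plugin && python3 -m zipfile -c ../plugin.jar classes)
# The merged class now sits inside the plugin jar under classes/.
python3 -m zipfile -l plugin.jar | grep Foo.class
```

With the real jars you would use unzip/zip or the jar tool the same way; python3 -m zipfile is used here only so the sketch has no extra dependencies.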
5.2 Method two

Put guava-r09-jarjar.jar and jackson-mapper-asl-1.5.2.jar into the lib directory inside hadoop-eclipse-plugin-0.20.2-cdh3u3.jar.

Then edit MANIFEST.MF under the jar's META-INF directory, changing the classpath to the following:

Bundle-ClassPath: classes/,lib/hadoop-core.jar,lib/guava-r09-jarjar.jar,lib/jackson-mapper-asl-1.5.2.jar

Method two should work in theory, but it did not succeed when I tested it.
posted @ 2012-05-03 18:39 by riverphoenix

Problems

An internal error occurred during: "Map/Reduce location status updater".
org/codehaus/jackson/map/JsonMappingException

An internal error occurred during: "Connecting to DFS hadoop".
org/apache/commons/configuration/Configuration
posted @ 2012-05-03 17:06 by riverphoenix

(Repost) Hadoop HDFS format error java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
Symptom

Running hadoop namenode -format to format HDFS fails with an unknown-host error:

     [shirdrn@localhost bin]$ hadoop namenode -format 
    11/06/22 07:33:31 INFO namenode.NameNode: STARTUP_MSG:  
    /************************************************************ 
    STARTUP_MSG: Starting NameNode 
    STARTUP_MSG:   host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain 
    STARTUP_MSG:   args = [-format] 
    STARTUP_MSG:   version = 0.20.0 
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009 
    ************************************************************/ 
    Re-format filesystem in /tmp/hadoop/hadoop-shirdrn/dfs/name ? (Y or N) Y 
    11/06/22 07:33:36 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn 
    11/06/22 07:33:36 INFO namenode.FSNamesystem: supergroup=supergroup 
    11/06/22 07:33:36 INFO namenode.FSNamesystem: isPermissionEnabled=true 
    11/06/22 07:33:36 INFO metrics.MetricsUtil: Unable to obtain hostName 
    java.net.UnknownHostException: localhost.localdomain: localhost.localdomain 
            at java.net.InetAddress.getLocalHost(InetAddress.java:1353) 
            at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91) 
            at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80) 
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:73) 
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:68) 
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:370) 
            at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:853) 
            at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:947) 
            at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964) 
    11/06/22 07:33:36 INFO common.Storage: Image file of size 97 saved in 0 seconds. 
    11/06/22 07:33:36 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted. 
    11/06/22 07:33:36 INFO namenode.NameNode: SHUTDOWN_MSG:  
    /************************************************************ 
    SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: localhost.localdomain: localhost.localdomain 
    ************************************************************/ 

Running the hostname command confirms this:

    [shirdrn@localhost bin]# hostname 
    localhost.localdomain 

That is, when formatting HDFS, Hadoop gets the hostname localhost.localdomain from the hostname command, then looks for it in /etc/hosts and finds no mapping. Here is my /etc/hosts:

    [root@localhost bin]# cat /etc/hosts 
    # Do not remove the following line, or various programs 
    # that require network functionality will fail. 
    127.0.0.1               localhost       localhost 
    192.168.1.103           localhost       localhost 

In other words, localhost.localdomain does not map to any IP address, hence the error.
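The failing lookup can be reproduced outside Hadoop with getent, which resolves names through /etc/hosts and DNS much like Java's InetAddress.getLocalHost. The name no-such-host.invalid below is a stand-in for the broken localhost.localdomain; the .invalid domain is reserved and guaranteed never to resolve.

```shell
# Reproduce the lookup outside Hadoop: a name with no /etc/hosts or
# DNS mapping fails to resolve, which is exactly the condition that
# surfaces as UnknownHostException in the NameNode log above.
if getent hosts localhost > /dev/null; then
    echo "localhost resolves"
fi
if ! getent hosts no-such-host.invalid > /dev/null; then
    echo "no-such-host.invalid does not resolve"
fi
```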

Now check the /etc/sysconfig/network file:

    NETWORKING=yes 
    NETWORKING_IPV6=yes 
    HOSTNAME=localhost.localdomain 

As you can see, hostname returns the HOSTNAME value configured here.


Fix

Change HOSTNAME in /etc/sysconfig/network to localhost (or any hostname of your choosing), make sure that name maps to the correct IP address in /etc/hosts, and then restart the network service:
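After the change, the two files would look like this (a minimal sketch; keep your machine's real IP address and whichever hostname you chose):

```
# /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost

# /etc/hosts -- the configured hostname must map to an address
127.0.0.1       localhost
```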

    [root@localhost bin]# /etc/rc.d/init.d/network restart 
    Shutting down interface eth0:  [  OK  ] 
    Shutting down loopback interface:  [  OK  ] 
    Bringing up loopback interface:  [  OK  ] 
    Bringing up interface eth0:   
    Determining IP information for eth0... done. 
    [  OK  ] 

Formatting HDFS and starting the cluster now succeed.

Formatting:

    [shirdrn@localhost bin]$ hadoop namenode -format 
    11/06/22 08:02:37 INFO namenode.NameNode: STARTUP_MSG:  
    /************************************************************ 
    STARTUP_MSG: Starting NameNode 
    STARTUP_MSG:   host = localhost/127.0.0.1 
    STARTUP_MSG:   args = [-format] 
    STARTUP_MSG:   version = 0.20.0 
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009 
    ************************************************************/ 
    11/06/22 08:02:37 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn 
    11/06/22 08:02:37 INFO namenode.FSNamesystem: supergroup=supergroup 
    11/06/22 08:02:37 INFO namenode.FSNamesystem: isPermissionEnabled=true 
    11/06/22 08:02:37 INFO common.Storage: Image file of size 97 saved in 0 seconds. 
    11/06/22 08:02:37 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted. 
    11/06/22 08:02:37 INFO namenode.NameNode: SHUTDOWN_MSG:  
    /************************************************************ 
    SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1 
    ************************************************************/ 

Starting:

    [shirdrn@localhost bin]$ start-all.sh  
    starting namenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-namenode-localhost.out 
    localhost: starting datanode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-datanode-localhost.out 
    localhost: starting secondarynamenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-secondarynamenode-localhost.out 
    starting jobtracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-jobtracker-localhost.out 
    localhost: starting tasktracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-tasktracker-localhost.out 

Verifying with jps:

        [shirdrn@localhost bin]$ jps 
        8192 TaskTracker 
        7905 DataNode 
        7806 NameNode 
        8065 JobTracker 
        8002 SecondaryNameNode 
        8234 Jps 

Source: http://blog.csdn.net/shirdrn/article/details/6562292