

Problems encountered when installing Hadoop 2.x

2024-06-28 16:05:14
After installing Hadoop, formatting the namenode, and starting HDFS, the datanode fails to start. The datanode log shows:

2017-02-07 14:29:47,741 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-02-07 14:29:47,758 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-02-07 14:29:53,973 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2017-02-07 14:29:54,113 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/data/tmp/dfs/data/in_use.lock acquired by nodename 5400@hadoop-master
2017-02-07 14:29:54,203 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/data/tmp/dfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/data/tmp/dfs/data: namenode clusterID = CID-2ca58eab-b3ef-4f08-b3f9-6246c4d6d0be; datanode clusterID = CID-2121b6fc-ca1c-4f87-a700-1f7314390f13
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-07 14:29:54,298 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to hadoop-master/192.168.8.88:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-07 14:29:54,298 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to hadoop-master/192.168.8.88:9000
2017-02-07 14:29:54,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2017-02-07 14:29:56,324 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-07 14:29:56,342 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-02-07 14:29:56,377 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

The cause: after the initial format and startup, HDFS was formatted a second time. Reformatting regenerates the namenode's clusterID while the datanode keeps its old one, so the two clusterIDs no longer match and the datanode cannot start.

The fix: in the temporary-file directory configured for Hadoop, make the datanode's clusterID in /opt/data/tmp/dfs/data/current/VERSION identical to the clusterID in /opt/data/tmp/dfs/name/current/VERSION. That is, copy the namenode's clusterID into the datanode's VERSION file.
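The copy step above can be sketched as a small shell helper. The function name sync_cluster_id is hypothetical, and the /opt/data/tmp paths in the usage comment are the ones from this article; adjust them to your own hadoop.tmp.dir. (sed -i as used here is the GNU form.)

```shell
#!/bin/sh
# sync_cluster_id NAMENODE_VERSION DATANODE_VERSION
# Copies the namenode's clusterID value into the datanode's VERSION file.
sync_cluster_id() {
    # Extract the clusterID value from the namenode's VERSION file.
    cid=$(grep '^clusterID=' "$1" | cut -d= -f2)
    # Rewrite the datanode's clusterID line in place to match.
    sed -i "s/^clusterID=.*/clusterID=${cid}/" "$2"
}

# Usage with this article's paths (stop the datanode first):
# sync_cluster_id /opt/data/tmp/dfs/name/current/VERSION \
#                 /opt/data/tmp/dfs/data/current/VERSION
```

After syncing the IDs, restarting HDFS should let the datanode register normally.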
When starting Hadoop 2.6.0 you may also see a flood of ssh "Could not resolve hostname" errors:

You: ssh: Could not resolve hostname You: Temporary failure in name resolution
warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
library: ssh: Could not resolve hostname library: Temporary failure in name resolution
loaded: ssh: Could not resolve hostname loaded: Temporary failure in name resolution
might: ssh: Could not resolve hostname might: Temporary failure in name resolution
which: ssh: Could not resolve hostname which: Temporary failure in name resolution
disabled: ssh: Could not resolve hostname disabled: Temporary failure in name resolution
stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution
guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution
... (and many more lines of the same form, omitted here)

The "hostnames" (You, VM, have, loaded, library, stack, guard, ...) are actually words from the JVM's native-library stack-guard warning, which the start scripts misparse as host names. The root cause is that the native-library environment variables are not set. Add the following to ~/.bash_profile or /etc/profile (vi /etc/profile or vi ~/.bash_profile):

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Then reload the file with source so the settings take effect: source /etc/profile or source ~/.bash_profile.
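As a sketch, the profile edit itself can be scripted so that running it repeatedly does not duplicate the exports. The function name add_native_opts is hypothetical; the exported variables are the ones given above.

```shell
#!/bin/sh
# add_native_opts PROFILE_FILE
# Appends the Hadoop native-library exports unless already present.
add_native_opts() {
    # Skip if the settings are already there (keeps the edit idempotent).
    grep -q 'HADOOP_COMMON_LIB_NATIVE_DIR' "$1" 2>/dev/null && return 0
    # Quoted heredoc delimiter keeps $HADOOP_HOME literal in the profile.
    cat >> "$1" <<'EOF'
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
EOF
}

# Usage, then re-login or source the file:
# add_native_opts /etc/profile && . /etc/profile
```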

