Setting up Hadoop HA with dual NameNodes
Cluster layout
hadoop1 192.168.56.121
hadoop2 192.168.56122
hadoop3 192.168.56123
Prepare the installation packages
jdk-7u71-linux-x64.tar.gz
zookeeper-3.4.9.tar.gz
hadoop-2.9.2.tar.gz
Upload the packages to /usr/local on all three machines and extract them there.
Configure hosts
echo "192.168.56.121 hadoop1" >> /etc/hosts
echo "192.168.56.122 hadoop2" >> /etc/hosts
echo "192.168.56.123 hadoop3" >> /etc/hosts
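A quick way to confirm that all three hostnames resolve on each machine (getent reads /etc/hosts through NSS):

getent hosts hadoop1 hadoop2 hadoop3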
Configure environment variables
/etc/profile
export HADOOP_PREFIX=/usr/local/hadoop-2.9.2
export JAVA_HOME=/usr/local/jdk1.7.0_71
Deploy ZooKeeper
Create the zoo user
useradd zoo
passwd zoo
Change the owner of the ZooKeeper directory to zoo
chown zoo:zoo -R /usr/local/zookeeper-3.4.9
Edit the ZooKeeper configuration
In the /usr/local/zookeeper-3.4.9/conf directory:
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.9
clientPort=2181
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
Create a myid file under /usr/local/zookeeper-3.4.9 (the dataDir). The file holds a single number from 1 to 255, matching the id in the corresponding server.id line of zoo.cfg.
myid on hadoop1 is 1
myid on hadoop2 is 2
myid on hadoop3 is 3
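The three files can be written in one loop from any host (a minimal sketch; it assumes the zoo user can SSH between the nodes, which this guide does not configure, so creating each file by hand is equally fine):

for i in 1 2 3; do
  ssh zoo@hadoop$i "echo $i > /usr/local/zookeeper-3.4.9/myid"
done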
Start the ZooKeeper service on all three machines
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh start
Verify ZooKeeper
[zoo@hadoop1 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
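One of the three nodes should report Mode: leader and the other two Mode: follower. As a further sanity check, the bundled CLI can connect and list the root znode (it should show at least the built-in zookeeper node):

bin/zkCli.sh -server hadoop1:2181 ls /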
Configure Hadoop
Create the hadoop user
useradd hadoop
passwd hadoop
Change the owner of the hadoop directory to hadoop
chown hadoop:hadoop -R /usr/local/hadoop-2.9.2
Create the data directories (hdfs-site.xml below points the name, data, and journal directories at them)
mkdir /hadoop1 /hadoop2 /hadoop3
chown hadoop:hadoop /hadoop1
chown hadoop:hadoop /hadoop2
chown hadoop:hadoop /hadoop3
Configure passwordless SSH (as the hadoop user; the sshfence method configured below uses this key, so set it up on both NameNodes)
ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop3
# Test the trust with the following commands
ssh hadoop1 date
ssh hadoop2 date
ssh hadoop3 date
Configure environment variables
/home/hadoop/.bash_profile
export PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH
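After editing, reload the profiles and sanity-check the result (hadoop version is a stock command that prints the build info):

source /etc/profile
source /home/hadoop/.bash_profile
hadoop version   # should report Hadoop 2.9.2 if the paths are right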
Configure the parameters
etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_71
etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.9.2/temp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>hadoop2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>hadoop2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/ns</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop1/hdfs/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop1/hdfs/name,file:/hadoop2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop1/hdfs/data,file:/hadoop2/hdfs/data,file:/hadoop3/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/usr/local/hadoop-2.9.2/etc/hadoop/excludes</value>
    </property>
</configuration>
etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
    </property>
</configuration>
etc/hadoop/slaves
hadoop1
hadoop2
hadoop3
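All of the files above must be identical on the three machines. One way to push them out from hadoop1 (a sketch that reuses the SSH trust configured earlier):

for h in hadoop2 hadoop3; do
  scp /usr/local/hadoop-2.9.2/etc/hadoop/{core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml,hadoop-env.sh,slaves} hadoop@$h:/usr/local/hadoop-2.9.2/etc/hadoop/
done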
First-time startup commands
1. Start ZooKeeper on every node by running the following on each of them:
bin/zkServer.sh start
2. On one of the NameNode hosts, create the HA namespace in ZooKeeper:
hdfs zkfc -formatZK
3. On every JournalNode host, start the JournalNode:
sbin/hadoop-daemon.sh start journalnode
4. On the primary NameNode host, format the NameNode and JournalNode directories:
hdfs namenode -format ns
5. On the primary NameNode host, start the NameNode process:
sbin/hadoop-daemon.sh start namenode
6. On the standby NameNode host, run the first command below. It formats the standby's directories and copies the metadata over from the primary, without reformatting the JournalNode directories. Then start the standby NameNode with the second command:
hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
7. On both NameNode hosts, start the ZKFC:
sbin/hadoop-daemon.sh start zkfc
8. On every DataNode host, start the DataNode:
sbin/hadoop-daemon.sh start datanode
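Once all daemons are up, the NameNode roles can be confirmed from the shell. hdfs haadmin is a stock command; nn1 and nn2 are the NameNode IDs defined in hdfs-site.xml:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# one should print active, the other standby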
Routine start/stop commands
# Start script: starts the HDFS services on all nodes
sbin/start-dfs.sh
# Stop script: stops the HDFS services on all nodes
sbin/stop-dfs.sh

Verification
Check the processes with jps
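With every role placed on hadoop1 in this layout, jps run as the hadoop user should show roughly the processes below; ZooKeeper's QuorumPeerMain belongs to the zoo user, so it only appears in jps run as zoo:

[hadoop@hadoop1 ~]$ jps
# expected (PIDs will differ): NameNode, DataNode, JournalNode, DFSZKFailoverController, Jps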
Open the two NameNode web UIs; one should report itself as active and the other as standby:
http://192.168.56.122:50070
http://192.168.56.121:50070
Test file upload and download
# Create a directory
[hadoop@hadoop1 ~]$ hadoop fs -mkdir /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-04-12 12:16 /test
# Upload a file
[hadoop@hadoop1 ~]$ hadoop fs -put /usr/local/hadoop-2.9.2/LICENSE.txt /test
# Verify
[hadoop@hadoop1 ~]$ hadoop fs -ls /test
Found 1 items
-rw-r--r--   2 hadoop supergroup     106210 2019-04-12 12:17 /test/LICENSE.txt
# Download the file to /tmp
[hadoop@hadoop1 ~]$ hadoop fs -get /test/LICENSE.txt /tmp
# Verify
[hadoop@hadoop1 ~]$ ls -l /tmp/LICENSE.txt
-rw-r--r--. 1 hadoop hadoop 106210 Apr 12 12:19 /tmp/LICENSE.txt
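Since the point of the second NameNode is automatic failover, that is worth testing too (a minimal sketch; it assumes nn1 on hadoop1 is currently the active node):

# On hadoop1: stop the active NameNode
sbin/hadoop-daemon.sh stop namenode
# Within a few seconds the ZKFC should promote the standby
hdfs haadmin -getServiceState nn2   # expect: active
# Bring the old NameNode back; it rejoins as the standby
sbin/hadoop-daemon.sh start namenode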
Reference: https://blog.csdn.net/Trigl/article/details/55101826