Category:
hadoop
System overview
OS: CentOS 7 (minimal install)
Node information:
Node | IP |
---|---|
emo1 | 192.168.2.7 |
emo2 | 192.168.2.8 |
emo3 | 192.168.2.9 |
Detailed setup steps (by default, emo1 is the master node)
I. Basic node configuration
1. Configure each node's network
Set each node's IP address (in its interface file under /etc/sysconfig/network-scripts/):
BOOTPROTO="static"
IPADDR=192.168.2.7
NETMASK=255.255.255.0
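A complete interface file might look like the sketch below. The interface name `ens33` and the gateway/DNS addresses are assumptions for illustration; check your actual interface with `ip addr` and use your network's real gateway.

```shell
# Example /etc/sysconfig/network-scripts/ifcfg-ens33 for emo1
# (interface name ens33 and GATEWAY/DNS1 values are assumptions)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.2.7
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
DNS1=192.168.2.1
```

Apply the change with `systemctl restart network` (adjust IPADDR per node: .8 for emo2, .9 for emo3).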
Change each node's hostname
vi /etc/hostname
emo1
Add host mappings
vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.7 emo1
192.168.2.8 emo2
192.168.2.9 emo3
scp /etc/hosts emo2:/etc/ (copying this way requires passwordless login to be set up first)
Disable SELinux and the firewall
vi /etc/selinux/config (set SELINUX=disabled; takes effect after a reboot)
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
Passwordless SSH login (generate a key on each node, then copy it to all nodes)
ssh-keygen -t rsa
ssh-copy-id emo1
ssh-copy-id emo2
ssh-copy-id emo3
(log in: ssh emo1; log out: logout)
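The per-node `ssh-copy-id` calls above can be scripted as a loop. This sketch only prints the commands (remove the `echo` to actually run them); each real run prompts for the remote root password until the key is in place.

```shell
# Sketch: distribute this node's public key to every node in the cluster.
# The `echo` makes this a dry run; drop it to execute the copies.
for node in emo1 emo2 emo3; do
  echo ssh-copy-id "$node"
done
```

Run the same loop once on each of the three nodes so every node can reach every other without a password.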
2. Install Java and Hadoop
tar -zxvf hadoop-2.7.7.tar.gz
vi /etc/profile
export JAVA_HOME=/usr/local/src/java/jdk1.8.0_191
export HADOOP_HOME=/usr/local/src/hadoop/hadoop-2.7.7
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
source /etc/profile
java -version
hadoop
3. Edit the Hadoop configuration files
cd /usr/local/src/hadoop/hadoop-2.7.7/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/local/src/java/jdk1.8.0_191
vi core-site.xml (first create the tmp directory: mkdir -p /usr/local/src/hadoop/hadoop-2.7.7/tmp)
<property>
<name>fs.defaultFS</name>
<value>hdfs://emo1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/tmp</value>
</property>
vi hdfs-site.xml (first create the data directories: mkdir -p /usr/local/src/hadoop/hadoop-2.7.7/hdfs/{name,data})
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/src/hadoop/hadoop-2.7.7/hdfs/name</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
vi yarn-site.xml
<property>
<name>yarn.resourcemanager.address</name>
<value>emo1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>emo1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>emo1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>emo1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>emo1:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>emo1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>emo1:19888</value>
</property>
vi masters
emo1
vi slaves
emo2
emo3
Sync the environment-variable file
scp /etc/profile emo2:/etc/
scp /etc/profile emo3:/etc/
Sync the JDK installation
scp -r ../java emo2:/usr/local/src/
scp -r ../java emo3:/usr/local/src/
Sync the Hadoop files
scp -r ../hadoop emo2:/usr/local/src/
scp -r ../hadoop emo3:/usr/local/src/
Reload environment variables on the worker nodes
On emo2:
source /etc/profile
On emo3:
source /etc/profile
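The three sync steps above can be collapsed into one loop per worker node (hostnames emo2/emo3 per the node table; paths per the install steps). As a dry-run sketch it only prints the commands; remove the `echo` to execute the transfers.

```shell
# Sketch: push the JDK, Hadoop, and the profile to both worker nodes.
# `echo` keeps this a dry run; drop it to perform the actual scp calls.
for node in emo2 emo3; do
  echo scp -r /usr/local/src/java   "$node:/usr/local/src/"
  echo scp -r /usr/local/src/hadoop "$node:/usr/local/src/"
  echo scp /etc/profile "$node:/etc/"
done
```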
Format the NameNode (run once, on emo1; reformatting destroys HDFS data)
hdfs namenode -format
Start the cluster
start-all.sh
Stop the cluster
stop-all.sh
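After start-all.sh, the daemons can be checked with `jps` on each node, and the web UIs below should respond (hostnames and YARN/JobHistory ports come from the configs above; 50070 is the stock NameNode HTTP port in Hadoop 2.x):

```shell
# Web UIs for this cluster (hostname emo1 per the configs above;
# 50070 is the default NameNode HTTP port in Hadoop 2.x).
NAMENODE_UI="http://emo1:50070"
RESOURCEMANAGER_UI="http://emo1:8088"   # yarn.resourcemanager.webapp.address
JOBHISTORY_UI="http://emo1:19888"       # mapreduce.jobhistory.webapp.address
echo "NameNode UI:        $NAMENODE_UI"
echo "ResourceManager UI: $RESOURCEMANAGER_UI"
echo "JobHistory UI:      $JOBHISTORY_UI"
```

Note that start-all.sh does not start the JobHistory server; if the 19888 UI is needed, start it on emo1 with `mr-jobhistory-daemon.sh start historyserver`.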