Setting Up an Offline Ambari Environment on Ubuntu 18.04


1. Pick at least two hosts: one as the management server (with nginx and a MySQL database set up) and one or more agent boxes, all with a JDK environment configured.

2. Set up passwordless SSH between the hosts. I used the root account throughout; you could also create a new account with root privileges.
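A minimal sketch of the key exchange, assuming an agent reachable as agent.hdp (a placeholder hostname; substitute your own):

# On the server: generate a key pair (skip if you already have one) and push it to each agent
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@agent.hdp
# Verify: should print the agent's hostname without asking for a password
ssh root@agent.hdp hostname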

3. To make things easier to remember, give the server and agent machines proper hostnames and refer to them that way.

4. Offline download links:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/ambari_repositories.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/hdp_31_repositories.html

Download the four packages: ambari, HDP, HDP-UTILS, and HDP-GPL. One of them is huge and rz won't manage the upload, so I started a vsftpd server, logged in with the default account, and simply enabled write access.
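Roughly what that vsftpd setup looks like (a sketch; on Ubuntu the config file is /etc/vsftpd.conf):

sudo apt-get install -y vsftpd
# Allow logged-in local users to upload
sudo sed -i 's/^#\?write_enable=.*/write_enable=YES/' /etc/vsftpd.conf
sudo systemctl restart vsftpd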

5. Install nginx, extract everything you just downloaded, and put it under the www directory. On my machine the path is:

 root@yqzn-server:/home/yqroot/system_environment/nginx/www/ambari# ls
 ambari  HDP  HDP-GPL  HDP-UTILS

The nginx configuration (make sure root points at the directory where you actually placed the files):

# [Ambari repository server]
server {
    listen 8888;

    server_name 192.168.47.212; # server machine IP

    root /var/www/ambari;

    location ^~ / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }


    location ~ /\.ht {
        deny all;
    }

}

Once that's done, open the address in a browser; if you can see the repository's directory listing (the original post shows a screenshot here), you're all set.
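To apply the nginx config and sanity-check the repo over HTTP (using the server IP from the config above):

sudo nginx -t && sudo nginx -s reload
# Expect an autoindex listing of ambari/ HDP/ HDP-GPL/ HDP-UTILS/
curl http://192.168.47.212:8888/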

6. Local repository configuration

Under /etc/apt/sources.list.d/, create two files, ambari-hdp.list and ambari.list, with the following contents:

root@yqzn-server:/etc/apt/sources.list.d# cat ambari-hdp.list 
deb http://server.hdp:8888/HDP/ubuntu18/3.1.0.0-78/ HDP main
deb http://server.hdp:8888/HDP-GPL/ubuntu18/3.1.0.0-78/ HDP-GPL main
deb http://server.hdp:8888/HDP-UTILS/ubuntu18/1.1.0.22/ HDP-UTILS main


root@yqzn-server:/etc/apt/sources.list.d# cat ambari.list 
deb http://server.hdp:8888/ambari/ubuntu18/2.7.3.0-139/ Ambari main
root@yqzn-server:/etc/apt/sources.list.d# 

Copy both files to the agent machines with scp, then run the commands below on every host.
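For example, assuming an agent reachable as agent.hdp (a placeholder hostname):

scp /etc/apt/sources.list.d/ambari*.list root@agent.hdp:/etc/apt/sources.list.d/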

[all hosts]
# Import the repository signing key
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
# Refresh the package index
sudo apt-get update

7. Configure the JDK and MySQL (other databases work too; I used MySQL):

sudo ambari-server setup

The screenshots above are from someone else's blog; the gist is what matters. During setup you also need to supply the local mysql-connector JAR (sketched below), then execute the SQL that follows.
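Supplying the driver can be done with ambari-server's JDBC options; a sketch, assuming the JAR sits at /usr/share/java/mysql-connector-java.jar (adjust the path to wherever you put your copy):

sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar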

[Grant privileges]
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'  IDENTIFIED BY '123456' WITH GRANT OPTION;

# Create the ambari database
create database ambari character set utf8;
CREATE USER 'ambari'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
FLUSH PRIVILEGES;

# Create the hive database
create database hive character set utf8;
CREATE USER 'hive'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
FLUSH PRIVILEGES;

# Run the Ambari DDL script
mysql> use ambari;
mysql> source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;

**My MySQL runs in Docker, so I just copied the DDL file out to load it. Also, the write-up I followed has a flaw here: the ambari user above is redundant if, like me, you told ambari-server setup to use root (the default user is ambari). The relevant config file is below; if you entered something wrong, you can fix it there.**

vim /etc/ambari-server/conf/ambari.properties 
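For the Docker-hosted MySQL mentioned above, one way to load the DDL is to copy the script into the container and source it there (a sketch, assuming the container is named mysql and the password from earlier):

# Copy the DDL script into the container, then run it against the ambari database
docker cp /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql mysql:/tmp/
docker exec -i mysql sh -c 'mysql -uroot -p123456 ambari < /tmp/Ambari-DDL-MySQL-CREATE.sql'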

8. Start the service

sudo ambari-server start

Ambari uses port 8080 by default. Out of respect for the big guy, I handed over the 8080 port I'd been occupying and reconfigured nginx accordingly. Finally, access it locally (http://192.168.47.212:8080 in my case).

Log in with admin / admin. At this point the server-side configuration is essentially complete; next comes the software installation on the agents.
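Agent installation is mostly apt plus pointing the agent at the server; a rough sketch, assuming the repo lists from step 6 are already on the agent and the server is server.hdp:

sudo apt-get install -y ambari-agent
# Point the agent at the Ambari server (the hostname key in the [server] section)
sudo sed -i 's/^hostname=.*/hostname=server.hdp/' /etc/ambari-agent/conf/ambari-agent.ini
sudo ambari-agent start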

9. Miscellaneous

You may need to change the machines' local hostnames.

Both server and agents need this entry added to /etc/hosts:

127.0.1.1       server.hdp
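On the server itself the 127.0.1.1 mapping above is enough; on the agents, map server.hdp to the server's routable IP instead. A sketch (IP taken from the nginx config; agent01.hdp is a placeholder, pick one name per machine):

sudo hostnamectl set-hostname agent01.hdp
echo "192.168.47.212 server.hdp" | sudo tee -a /etc/hosts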

Finally, here is my complete uninstall guide. The method comes from the internet with my modifications (the originals all targeted the 2.x releases); I can't rule out that something gets left behind.

ambari-server stop
ambari-agent stop

sudo apt-get --purge remove -y hadoop_3* hdp-select* ranger_3* zookeeper* bigtop* atlas-metadata* ambari* spark* slider* storm* hive*
sudo apt-get --purge autoremove -y hadoop_3* hdp-select* ranger_3* zookeeper* bigtop* atlas-metadata* ambari* spark* slider* storm* hive*
# Check for leftovers
dpkg -l | grep hadoop
dpkg -l | grep hdp-select
dpkg -l | grep ranger
dpkg -l | grep zookeeper
dpkg -l | grep bigtop
dpkg -l | grep atlas-metadata
dpkg -l | grep ambari
dpkg -l | grep spark
dpkg -l | grep slider
dpkg -l | grep storm
dpkg -l | grep hive
# or everything in one pass:
dpkg -l | grep -E 'hadoop|hdp-select|ranger|zookeeper|bigtop|atlas-metadata|ambari|spark|slider|storm|hive'

sudo userdel oozie
sudo userdel hive
sudo userdel ambari-qa
sudo userdel flume 
sudo userdel hdfs 
sudo userdel knox 
sudo userdel storm 
sudo userdel mapred
sudo userdel hbase 
sudo userdel tez 
sudo userdel zookeeper
sudo userdel kafka 
sudo userdel falcon
sudo userdel sqoop 
sudo userdel yarn 
sudo userdel hcat
sudo userdel atlas
sudo userdel spark
sudo userdel ams
sudo userdel zeppelin
 
sudo rm -rf /home/atlas
sudo rm -rf /home/accumulo
sudo rm -rf /home/hbase
sudo rm -rf /home/hive
sudo rm -rf /home/oozie
sudo rm -rf /home/storm
sudo rm -rf /home/yarn
sudo rm -rf /home/ambari-qa
sudo rm -rf /home/falcon
sudo rm -rf /home/hcat
sudo rm -rf /home/kafka
sudo rm -rf /home/mahout
sudo rm -rf /home/spark
sudo rm -rf /home/tez
sudo rm -rf /home/zookeeper
sudo rm -rf /home/flume
sudo rm -rf /home/hdfs
sudo rm -rf /home/knox
sudo rm -rf /home/mapred
sudo rm -rf /home/sqoop
 
sudo rm -rf /var/lib/ambari*
sudo rm -rf /usr/lib/python2.6/site-packages/ambari_*
sudo rm -rf /usr/lib/python2.6/site-packages/resource_management
sudo rm -rf /usr/lib/ambari-*
 
sudo rm -rf /etc/ambari-*
sudo rm -rf /etc/hst
sudo rm -rf /etc/hadoop
sudo rm -rf /etc/hbase
sudo rm -rf /etc/hive
sudo rm -rf /etc/hive*
sudo rm -rf /etc/oozie
sudo rm -rf /etc/sqoop 
sudo rm -rf /etc/zookeeper
sudo rm -rf /etc/flume 
sudo rm -rf /etc/storm 
sudo rm -rf /etc/tez_hive*
sudo rm -rf /etc/spark*
sudo rm -rf /etc/phoenix 
sudo rm -rf /etc/pig 
sudo rm -rf /etc/hive-hcatalog
sudo rm -rf /etc/tez 
sudo rm -rf /etc/falcon 
sudo rm -rf /etc/knox 
sudo rm -rf /etc/hive-webhcat
sudo rm -rf /etc/kafka 
sudo rm -rf /etc/slider 
sudo rm -rf /etc/storm-slider-client
sudo rm -rf /etc/spark 
 
sudo rm -rf /var/run/spark
sudo rm -rf /var/run/hadoop
sudo rm -rf /var/run/hbase
sudo rm -rf /var/run/zookeeper
sudo rm -rf /var/run/flume
sudo rm -rf /var/run/storm
sudo rm -rf /var/run/webhcat
sudo rm -rf /var/run/hadoop-yarn
sudo rm -rf /var/run/hadoop-mapreduce
sudo rm -rf /var/run/kafka
sudo rm -rf /var/run/hive	
sudo rm -rf /var/run/oozie	
sudo rm -rf /var/run/sqoop	
sudo rm -rf /var/run/hive-hcatalog	
sudo rm -rf /var/run/falcon	
sudo rm -rf /var/run/hadoop-hdfs	
sudo rm -rf /var/run/ambari-metrics-collector
sudo rm -rf /var/run/ambari-metrics-monitor	
sudo rm -rf /var/log/hadoop-hdfs	
sudo rm -rf /var/log/hive-hcatalog
sudo rm -rf /var/log/ambari-metrics-monitor
sudo rm -rf /var/log/hadoop
sudo rm -rf /var/log/hbase
sudo rm -rf /var/log/flume
sudo rm -rf /var/log/sqoop
sudo rm -rf /var/log/ambari-server
sudo rm -rf /var/log/ambari-agent
sudo rm -rf /var/log/storm
sudo rm -rf /var/log/hadoop-yarn
sudo rm -rf /var/log/hadoop-mapreduce
sudo rm -rf /var/log/knox 
sudo rm -rf /var/lib/slider
 
sudo rm -rf /usr/lib/flume
sudo rm -rf /usr/lib/storm
sudo rm -rf /var/lib/hive 
sudo rm -rf /var/lib/oozie
sudo rm -rf /var/lib/flume
sudo rm -rf /var/lib/hadoop-yarn
sudo rm -rf /var/lib/hadoop-mapreduce
sudo rm -rf /var/lib/hadoop-hdfs
sudo rm -rf /var/lib/zookeeper
sudo rm -rf /var/lib/knox 
sudo rm -rf /var/log/hive 
sudo rm -rf /var/log/oozie
sudo rm -rf /var/log/zookeeper
sudo rm -rf /var/log/falcon
sudo rm -rf /var/log/webhcat
sudo rm -rf /var/log/spark
sudo rm -rf /var/tmp/oozie
sudo rm -rf /tmp/ambari-qa
sudo rm -rf /tmp/hive 
sudo rm -rf /var/hadoop
sudo rm -rf /hadoop/falcon
sudo rm -rf /tmp/hadoop 
sudo rm -rf /tmp/hadoop-hdfs
sudo rm -rf /usr/hdp
sudo rm -rf /usr/hadoop
sudo rm -rf /opt/hadoop
sudo rm -rf /hadoop
 
sudo rm -rf /usr/bin/worker-launcher
sudo rm -rf /usr/bin/zookeeper-client
sudo rm -rf /usr/bin/zookeeper-server
sudo rm -rf /usr/bin/zookeeper-server-cleanup
sudo rm -rf /usr/bin/yarn 
sudo rm -rf /usr/bin/storm
sudo rm -rf /usr/bin/storm-slider 
sudo rm -rf /usr/bin/sqoop 
sudo rm -rf /usr/bin/sqoop-codegen 
sudo rm -rf /usr/bin/sqoop-create-hive-table 
sudo rm -rf /usr/bin/sqoop-eval 
sudo rm -rf /usr/bin/sqoop-export 
sudo rm -rf /usr/bin/sqoop-help 
sudo rm -rf /usr/bin/sqoop-import 
sudo rm -rf /usr/bin/sqoop-import-all-tables 
sudo rm -rf /usr/bin/sqoop-job 
sudo rm -rf /usr/bin/sqoop-list-databases 
sudo rm -rf /usr/bin/sqoop-list-tables 
sudo rm -rf /usr/bin/sqoop-merge 
sudo rm -rf /usr/bin/sqoop-metastore 
sudo rm -rf /usr/bin/sqoop-version 
sudo rm -rf /usr/bin/slider 
sudo rm -rf /usr/bin/ranger-admin-start 
sudo rm -rf /usr/bin/ranger-admin-stop 
sudo rm -rf /usr/bin/ranger-kms
sudo rm -rf /usr/bin/ranger-usersync-start
sudo rm -rf /usr/bin/ranger-usersync-stop
sudo rm -rf /usr/bin/pig 
sudo rm -rf /usr/bin/phoenix-psql 
sudo rm -rf /usr/bin/phoenix-queryserver 
sudo rm -rf /usr/bin/phoenix-sqlline 
sudo rm -rf /usr/bin/phoenix-sqlline-thin 
sudo rm -rf /usr/bin/oozie 
sudo rm -rf /usr/bin/oozied.sh 
sudo rm -rf /usr/bin/mapred 
sudo rm -rf /usr/bin/mahout 
sudo rm -rf /usr/bin/kafka 
sudo rm -rf /usr/bin/hive 
sudo rm -rf /usr/bin/hiveserver* 
sudo rm -rf /usr/bin/hbase
sudo rm -rf /usr/bin/hcat 
sudo rm -rf /usr/bin/hdfs 
sudo rm -rf /usr/bin/hadoop 
sudo rm -rf /usr/bin/flume-ng 
sudo rm -rf /usr/bin/falcon 
sudo rm -rf /usr/bin/beeline
sudo rm -rf /usr/bin/atlas-start 
sudo rm -rf /usr/bin/atlas-stop 
sudo rm -rf /usr/bin/accumulo