Install Hadoop, Hive, and Spark on YARN beforehand.
Reference: https://blog.csdn.net/zheng911209/article/details/105498505
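As a quick sanity check (assuming the hadoop, hive, and spark-submit binaries are already on the PATH), these commands print the installed versions:
hadoop version
hive --version
spark-submit --version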
Copy the following files into Spark's conf directory:
cp /hadoop/hive-3.1.2/conf/hive-site.xml /spark-3.0.0/conf
cp /hadoop-3.2.1/etc/hadoop/core-site.xml /spark-3.0.0/conf
cp /hadoop-3.2.1/etc/hadoop/hdfs-site.xml /spark-3.0.0/conf
cd /spark-3.0.0/bin
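Optionally confirm that all three files landed in Spark's conf directory:
ls /spark-3.0.0/conf/hive-site.xml /spark-3.0.0/conf/core-site.xml /spark-3.0.0/conf/hdfs-site.xml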
Start Spark SQL:
./spark-sql
Startup fails with an error.
# The cause is a missing mysql-connector-java-5.1.48.jar; copy one over from Hive's lib directory:
cp /hive-3.1.2/lib/mysql-connector-java-5.1.48.jar /spark-3.0.0/jars
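Alternatively, rather than copying into the jars directory, the driver can be supplied per launch through the standard --jars and --driver-class-path options (same jar path as above):
./spark-sql --driver-class-path /hive-3.1.2/lib/mysql-connector-java-5.1.48.jar --jars /hive-3.1.2/lib/mysql-connector-java-5.1.48.jar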
With the driver jar in place, spark-sql now starts successfully.
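To confirm that spark-sql is actually talking to the Hive metastore, run a couple of standard queries at the prompt (what they list depends on your warehouse):
show databases;
show tables;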
# Enter Spark's sbin directory
cd /spark-3.0.0/sbin
# Launch start-thriftserver.sh
./start-thriftserver.sh
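With the Thrift server running (it listens on port 10000 by default), a JDBC client such as the beeline shipped in Spark's bin directory can connect to it:
cd /spark-3.0.0/bin
./beeline -u jdbc:hive2://localhost:10000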
Visit http://localhost:8088/cluster (the YARN ResourceManager web UI) to confirm the Thrift server application is running on YARN.