「Spark」Running Spark SQL via the Thrift Server

Spark SQL can act as a distributed query engine through its JDBC/ODBC interface or its command-line interface. In this mode, users and applications can interact with Spark SQL directly to run SQL queries, without writing any code.

Spark SQL provides two ways to run SQL:

  • by starting the Thrift Server
  • by running the Spark SQL command-line interface directly (a minimal sketch follows this list)
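
For the CLI option, a minimal sketch, assuming the same Spark installation path used in the steps below:

# Start the interactive Spark SQL CLI; like start-thriftserver.sh,
# it accepts the usual spark-submit options.
cd /export/servers/spark-2.2.0-bin-hadoop2.6
./bin/spark-sql --master local[*]

Inside the CLI you can then type SQL statements such as show databases; directly at the prompt.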

Running via the Thrift Server

1. Start the Hive metastore first

nohup hive --service metastore &
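
A quick way to confirm the metastore is up, assuming it listens on its default port 9083:

# the Hive metastore listens on port 9083 by default
netstat -anp | grep 9083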


2. Add the following configuration to hdfs-site.xml. Setting fs.hdfs.impl.disable.cache to true disables the shared HDFS FileSystem object cache, which helps avoid "Filesystem closed" errors when one of several concurrent sessions closes the cached instance.

<property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
</property>
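
The Thrift Server also has to be pointed at the metastore started in step 1. A minimal sketch of the corresponding hive-site.xml entry, assuming the metastore runs on node1 with its default port 9083:

<property>
    <!-- thrift://node1:9083 is an assumption: use your metastore host/port -->
    <name>hive.metastore.uris</name>
    <value>thrift://node1:9083</value>
</property>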


3. Start the Thrift Server

[root@node1 sbin]# pwd
/export/servers/spark-2.2.0-bin-hadoop2.6/sbin
[root@node1 sbin]# ./start-thriftserver.sh --master local[*]
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /export/servers/spark-2.2.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-node1.out

The default port is 10000.

Note: start-thriftserver.sh accepts all of spark-submit's command-line options, for example:
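
A sketch that overrides the default port and passes an ordinary spark-submit resource option (the specific values below are illustrative):

# hive.server2.thrift.port overrides the default port 10000;
# --driver-memory is a regular spark-submit option
./start-thriftserver.sh \
  --master local[*] \
  --hiveconf hive.server2.thrift.port=10001 \
  --driver-memory 1g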


4. Connect to the Thrift Server with beeline

[root@node1 bin]# ./beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://node1:10000
Connecting to jdbc:hive2://node1:10000
Enter username for jdbc:hive2://node1:10000: root
Enter password for jdbc:hive2://node1:10000:
20/02/01 22:26:41 INFO jdbc.Utils: Supplied authorities: node1:10000
20/02/01 22:26:41 INFO jdbc.Utils: Resolved authority: node1:10000
20/02/01 22:26:41 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://node1:10000
Connected to: Spark SQL (version 2.2.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node1:10000> show databases;
+---------------+--+
| databaseName  |
+---------------+--+
| default       |
| demo          |
| job_analysis  |
| test          |
+---------------+--+
4 rows selected (0.629 seconds)
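
For scripted use, the same connection can also be made non-interactively in one shot; a minimal sketch (root and the empty password mirror the interactive session above):

# -u: JDBC URL, -n: username, -e: statement to execute
./beeline -u jdbc:hive2://node1:10000 -n root -e "show databases;"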
