{"id":8951,"date":"2023-04-12T03:17:09","date_gmt":"2023-04-12T01:17:09","guid":{"rendered":"https:\/\/myoceane.fr\/?p=8951"},"modified":"2023-09-23T02:48:29","modified_gmt":"2023-09-23T00:48:29","slug":"application-hive-metastore-server","status":"publish","type":"post","link":"https:\/\/myoceane.fr\/index.php\/application-hive-metastore-server\/","title":{"rendered":"[Hive] Hive Server with Spark Standalone"},"content":{"rendered":"<div id=\"fb-root\"><\/div>\n<p style=\"text-align: justify;\">\u5728<a href=\"https:\/\/myoceane.fr\/index.php\/hive-%e5%9c%a8-spark-%e5%ad%98%e5%8f%96%e8%87%aa%e5%b7%b1%e7%9a%84-hive-metastore\/\">\u4e0a\u4e00\u7bc7<\/a>\u6211\u5011\u4ecb\u7d39\u4e86\u5982\u4f55\u5229\u7528 MySQL \u670d\u52d9\u5efa\u7acb\u5c6c\u65bc\u81ea\u5df1\u7684 Hive Metastore \u8cc7\u6599\u5eab\uff0c\u4e26\u4e14\u5229\u7528 Spark SQL \u7684\u65b9\u5f0f\u5c0d Metastore \u88e1\u9762\u7684\u8cc7\u6599\u505a\u5b58\u53d6\uff0c\u6839\u64da\u4e0a\u65b9\u5716\u793a\uff0c\u6211\u5011\u53ef\u4ee5\u7406\u89e3\u9664\u4e86 Spark \u53ef\u4ee5\u5c0d Hive Metastore \u505a\u5b58\u5132\u4e4b\u5916\uff0c\u6211\u5011\u4e5f\u53ef\u4ee5\u5229\u7528 Hive, Impala, Presto, Apache Hudi \u751a\u81f3\u662f\u6700\u8fd1\u51fa\u4f86\u7684 Apache Superset \u4f86\u505a\u8cc7\u6599\u4e32\u63a5\uff0c\u672c\u7bc7\u60f3\u8981\u7d00\u9304\u4e26\u4e14\u6bd4\u8f03\u9019\u5e7e\u7a2e\u6280\u8853\u7684\u512a\u7f3a\u9ede\u662f\u4ec0\u9ebc\uff1f<\/p>\n<h3>Hive Server<\/h3>\n<p>\u7b2c\u4e00\u7a2e\u65b9\u5f0f\u53d6\u5f97 Hive Metastore \u8cc7\u6599\u7684\u65b9\u5f0f\u662f\u900f\u904e Hive Server\uff0c\u53c3\u8003\u5be6\u4f5c\u65b9\u5f0f<a href=\"https:\/\/medium.com\/codex\/setting-up-an-apache-hive-data-warehouse-6074775cf66\">\u7db2\u7ad91<\/a>\uff0c<a href=\"https:\/\/cloud.tencent.com\/developer\/article\/1697496\">\u7db2\u7ad92<\/a>\u3002<\/p>\n<p>\u555f\u52d5 Metastore Server \u6307\u4ee4\uff0c\u5982\u679c\u60f3\u8981\u9032\u4e00\u6b65\u53d6\u5f97\u4e00\u4e9b\u7cfb\u7d71\u8cc7\u8a0a\uff0c\u53ef\u4ee5\u52a0\u4e0a &#8211;hiveconf \u53c3\u6578<\/p>\n<pre class=\"lang:bash\">cd \/opt\/hive\/bin\n.\/hive --service metastore &amp;\n.\/hive --service metastore --hiveconf hive.root.logger=INFO,console &amp;<\/pre>\n<p>\u555f\u52d5 Hive Server 2 \u6307\u4ee4\uff1a<\/p>\n<pre class=\"lang:bash\">cd \/opt\/hive\/bin\n.\/hive --service hiveserver2 &amp;<\/pre>\n<p>\u5728\u958b\u555f Metastore Server \u8207 Hive Server2 \u4e4b\u5f8c\uff2c\u6211\u5011\u5617\u8a66\u5229\u7528 Beeline \u53bb\u9023\u63a5\uff0c\u5f97\u5230\u4ee5\u4e0b\u7684\u932f\u8aa4\u8a0a\u606f\uff0c\u8a0a\u606f\u986f\u793a \/tmp\/hive\/java \u8cc7\u6599\u593e\u6c92\u6709\u5beb\u5165\u7684\u6b0a\u9650\uff1a<\/p>\n<pre class=\"lang:bash\">root@22c98e215f2d:\/opt\/hive\/bin$ .\/beeline\nSLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:\/opt\/apache-hive-3.1.3-bin\/lib\/log4j-slf4j-impl-2.17.1.jar!\/org\/slf4j\/impl\/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:\/usr\/local\/hadoop-3.3.0\/share\/hadoop\/common\/lib\/slf4j-log4j12-1.7.25.jar!\/org\/slf4j\/impl\/StaticLoggerBinder.class]\nSLF4J: See http:\/\/www.slf4j.org\/codes.html$multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]\nBeeline version 3.1.3 by Apache Hive\nbeeline&gt; !connect jdbc:hive2:\/\/\nConnecting to jdbc:hive2:\/\/\nEnter username for jdbc:hive2:\/\/: root\nEnter password for jdbc:hive2:\/\/: ****\nHive Session ID = d8236e49-49be-4b98-b4fa-ee8110c3cf2c\nError applying authorization 
<p>After granting write permission, Beeline connects to HiveServer2 successfully, as shown below. However, every table whose data sits on an Azure Gen2 path recorded in the Hive Metastore fails at read time:</p>
<pre class="lang:bash">root@22c98e215f2d:/opt/hive/bin$ ./beeline
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.3-bin/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 3.1.3 by Apache Hive
beeline> !connect jdbc:hive2://
Connecting to jdbc:hive2://
Enter username for jdbc:hive2://: root
Enter password for jdbc:hive2://: ****
Hive Session ID = 5c324202-a99b-453d-b93f-3c256b9754c9
23/04/09 09:29:42 [main]: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
23/04/09 09:29:42 [main]: WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:44 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
23/04/09 09:29:45 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
Connected to: Apache Hive (version 3.1.3)
Driver: Hive JDBC (version 3.1.3)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://>
0: jdbc:hive2://> SHOW TABLES;
OK
+-----------+
| tab_name  |
+-----------+
| employee  |
+-----------+
2 rows selected (0.13 seconds)
0: jdbc:hive2://> SELECT * FROM employee;
OK
23/04/09 09:35:00 [232657a1-cf1d-41cc-a51d-f01da87cf43c main]: WARN fs.FileSystem: Failed to initialize fileystem abfss://xxxx@storage.dfs.core.windows.net/user/hive/warehouse/employee: Configuration property storage.dfs.core.windows.net not found.
23/04/09 09:35:00 [main]: WARN thrift.ThriftCLIService: Error fetching results: 
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: Configuration property sponsorwus2f40castorage.dfs.core.windows.net not found.
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:465) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:309) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:905) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:561) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786) [hive-service-3.1.3.jar:3.1.3]
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_362]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
        at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1585) [hive-jdbc-3.1.3.jar:3.1.3]
        at com.sun.proxy.$Proxy33.FetchResults(Unknown Source) [?:?]
        at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373) [hive-jdbc-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2250) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.execute(Commands.java:1201) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.sql(Commands.java:1130) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1425) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1287) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1071) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:538) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:520) [hive-beeline-3.1.3.jar:3.1.3]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_362]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_362]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_362]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323) [hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236) [hadoop-common-3.3.0.jar:?]
Caused by: java.io.IOException: Configuration property sponsorwus2f40castorage.dfs.core.windows.net not found.
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:602) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2691) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:460) ~[hive-service-3.1.3.jar:3.1.3]
        ... 27 more
Caused by: org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConfigurationPropertyNotFoundException: Configuration property sponsorwus2f40castorage.dfs.core.windows.net not found.
        at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:399) ~[hadoop-azure-3.3.0.jar:?]
        at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1164) ~[hadoop-azure-3.3.0.jar:?]
        at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:180) ~[hadoop-azure-3.3.0.jar:?]
        at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108) ~[hadoop-azure-3.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextPath(FetchOperator.java:276) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:370) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:314) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:540) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2691) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:460) ~[hive-service-3.1.3.jar:3.1.3]
        ... 27 more
Error: java.io.IOException: Configuration property sponsorwus2f40castorage.dfs.core.windows.net not found. (state=,code=0)
0: jdbc:hive2://></pre>
<p>A possible fix is to configure the Hive connection for Azure Blob Storage (<a href="https://docs.starburst.io/latest/connector/hive-azure.html">reference</a>) by adding the abfss-related properties to Hadoop's core-site.xml.</p>
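<p>The post does not list the exact properties. Judging from the stack trace (getStorageAccountKey), the missing property is the storage account key; a sketch for SharedKey auth, where YOURACCOUNT and YOURKEY are placeholders for your own account name and key, might be:</p>
<pre class="lang:bash"># transient: pass the account key when launching HiveServer2
# (YOURACCOUNT / YOURKEY are placeholders, not real values)
./hive --service hiveserver2 \
  --hiveconf fs.azure.account.key.YOURACCOUNT.dfs.core.windows.net=YOURKEY &

# durable: put the same property inside the <configuration> element of
# $HADOOP_HOME/etc/hadoop/core-site.xml so every client picks it up</pre>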
<p>With those properties in place, running a SQL query now fails differently:</p>
<pre class="lang:bash">0: jdbc:hive2://> SELECT * FROM table LIMIT 10;
OK
23/04/10 01:01:29 [main]: WARN thrift.ThriftCLIService: Error fetching results: 
org.apache.hive.service.cli.HiveSQLException: java.io.IOException: java.io.IOException: abfss://xxxxxx@storage.dfs.core.windows.net/user/hive/warehouse/table/part-00000-997a0587-3935-4e06-bca0-632dee826842-c000.snappy.parquet not a SequenceFile
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:465) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:309) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:905) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:561) ~[hive-service-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:786) [hive-service-3.1.3.jar:3.1.3]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_362]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_362]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_362]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
        at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1585) [hive-jdbc-3.1.3.jar:3.1.3]
        at com.sun.proxy.$Proxy37.FetchResults(Unknown Source) [?:?]
        at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373) [hive-jdbc-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:56) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2250) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1026) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.execute(Commands.java:1201) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.Commands.sql(Commands.java:1130) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1425) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1287) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1071) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:538) [hive-beeline-3.1.3.jar:3.1.3]
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:520) [hive-beeline-3.1.3.jar:3.1.3]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_362]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_362]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_362]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323) [hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236) [hadoop-common-3.3.0.jar:?]
Caused by: java.io.IOException: java.io.IOException: abfss://xxxxxx@storage.dfs.core.windows.net/user/hive/warehouse/table/part-00000-997a0587-3935-4e06-bca0-632dee826842-c000.snappy.parquet not a SequenceFile
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:602) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2691) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:460) ~[hive-service-3.1.3.jar:3.1.3]
        ... 28 more
Caused by: java.io.IOException: abfss://xxxxxx@storage.dfs.core.windows.net/user/hive/warehouse/table/part-00000-997a0587-3935-4e06-bca0-632dee826842-c000.snappy.parquet not a SequenceFile
        at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1970) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1923) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1872) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1886) ~[hadoop-common-3.3.0.jar:?]
        at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:49) ~[hadoop-mapreduce-client-core-3.3.0.jar:?]
        at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:64) ~[hadoop-mapreduce-client-core-3.3.0.jar:?]
        at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:776) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:344) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:540) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:509) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2691) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.getResults(ReExecDriver.java:229) ~[hive-exec-3.1.3.jar:3.1.3]
        at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:460) ~[hive-service-3.1.3.jar:3.1.3]
        ... 28 more
Error: java.io.IOException: java.io.IOException: abfss://xxxxxx@storage.dfs.core.windows.net/user/hive/warehouse/table/part-00000-997a0587-3935-4e06-bca0-632dee826842-c000.snappy.parquet not a SequenceFile (state=,code=0)</pre>
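<p>The trace shows Hive reading a Parquet file through SequenceFileInputFormat. One likely explanation (my reading, not stated in the logs) is that the table was written by Spark as a Spark-format datasource table, whose metastore entry is not Hive-compatible, so Hive falls back to a SequenceFile storage descriptor. A possible workaround is to expose the same Parquet files to Hive through an explicitly Parquet-backed external table; a sketch with hypothetical column names and a placeholder location:</p>
<pre class="lang:bash"># hypothetical: re-declare the data as a Parquet external table so Hive
# uses the Parquet reader instead of SequenceFileInputFormat
./beeline -u jdbc:hive2://localhost:10000 -n root -e "
CREATE EXTERNAL TABLE employee_parquet (id INT, name STRING)
STORED AS PARQUET
LOCATION 'abfss://container@account.dfs.core.windows.net/user/hive/warehouse/employee';
SELECT * FROM employee_parquet LIMIT 10;"</pre>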
<p style="text-align: justify;">Everything above only proves that HiveServer can reach the Hive Metastore Server. To actually run computation over the data, hive.execution.engine must be set to one of mr, spark, or tez; the default is mr (MapReduce).</p>
<h4>Hive On Spark</h4>
<p>We tried switching hive.execution.engine to spark, for example as shown below.</p>
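<p>For reference, the setting can be applied per session from beeline or as a server-wide default; values are mr, spark, or tez:</p>
<pre class="lang:bash"># per-session, from beeline
0: jdbc:hive2://> SET hive.execution.engine=spark;

# or as the default when launching HiveServer2
./hive --service hiveserver2 --hiveconf hive.execution.engine=spark &</pre>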
<p>Executing a query with the spark engine then failed as follows, which shows that HiveServer2 never managed to hand the SQL off to the Spark engine:</p>
<pre class="lang:bash">Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://> SELECT COUNT(*) FROM table;
Query ID = root_20230503002607_da42464c-8e54-43ff-baf6-1689c33c9c9c
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de)'
23/05/03 00:26:12 [HiveServer2-Background-Pool: Thread-32]: ERROR spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263)
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98)
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87)
	... 23 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 27 more

23/05/03 00:26:12 [HiveServer2-Background-Pool: Thread-32]: ERROR spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) [hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) [hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) [hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) [hive-service-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_352]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_352]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845) [hadoop-common-3.3.0.jar:?]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) [hive-service-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_352]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_352]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.2.jar:3.1.2]
	... 23 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387) ~[?:1.8.0_352]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_352]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_352]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_352]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.2.jar:3.1.2]
	... 23 more
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
23/05/03 00:26:12 [HiveServer2-Background-Pool: Thread-32]: ERROR ql.Driver: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
23/05/03 00:26:12 [HiveServer2-Background-Pool: Thread-32]: ERROR operation.Operation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) [hive-service-3.1.2.jar:3.1.2]
	at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_352]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_352]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845) [hadoop-common-3.3.0.jar:?]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) [hive-service-3.1.2.jar:3.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_352]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_352]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_352]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_352]
	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_352]
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) ~[hive-service-3.1.2.jar:3.1.2]
	... 11 more
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) ~[hive-service-3.1.2.jar:3.1.2]
	... 11 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387) ~[?:1.8.0_352]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_352]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_352]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_352]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.2.jar:3.1.2]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) ~[hive-service-3.1.2.jar:3.1.2]
	... 11 more
ERROR : FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de
Error: Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session eaf41788-060b-4eb6-bf4c-ccfef2c0d0de (state=42000,code=30041)</pre>
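<p>The root cause in the trace is ClassNotFoundException: org.apache.spark.SparkConf, i.e. Hive's own classpath contains no Spark jars at all. Under the Hive on Spark setup described in the Hive wiki (hedged: we did not complete this path, and version pairing matters, since Hive 3.1.x was built against Spark 2.x, so a Spark 3 install is unlikely to satisfy it), you point Hive at a compatible Spark installation and link a few jars, roughly:</p>
<pre class="lang:bash"># point Hive at a compatible Spark install (SPARK_HOME env var,
# or spark.home in hive-site.xml); path is a placeholder
export SPARK_HOME=/opt/spark

# make the Spark classes visible to Hive; jar names are version placeholders
ln -s $SPARK_HOME/jars/scala-library-*.jar        /opt/hive/lib/
ln -s $SPARK_HOME/jars/spark-core_*.jar           /opt/hive/lib/
ln -s $SPARK_HOME/jars/spark-network-common_*.jar /opt/hive/lib/</pre>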
<h3 id="running-the-thrift-jdbcodbc-server">Running the Thrift JDBC/ODBC server</h3>
<p>Next we tried connecting through the Thrift Server bundled with Spark Standalone, following the official <a href="https://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-spark-sql-cli">Distributed SQL Engine</a> documentation and running:</p>
<pre class="lang:bash">./sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=<listening-port> \
  --hiveconf hive.server2.thrift.bind.host=<listening-host> \
  --master <master-uri></pre>
<p>The goal is that, as the next step, any SQL client can issue queries with:</p>
<pre class="lang:bash">beeline> !connect jdbc:hive2://localhost:10000</pre>
<p>Running it produced the error below (note that our own command passed hive.server2.thrift.bind.port where the documented flag is hive.server2.thrift.bind.host, although the launch failure itself comes from missing classes):</p>
<pre class="lang:bash">root@e7233bb8500d4c69ae2f1f0760957865000000:/home/spark-current/sbin$ ./start-thriftserver.sh --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.server2.thrift.bind.port=10000 --master spark://10.0.0.4:7077
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /home/spark-current/logs/spark--org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-e7233bb8500d4c69ae2f1f0760957865000000.out
failed to launch: nice -n 0 bash /home/spark-current/bin/spark-submit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --name Thrift JDBC/ODBC Server --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.server2.thrift.bind.port=10000 --master spark://10.0.0.4
  SLF4J: Found binding in [jar:file:/mnt/spark-current/jars/piper-operators.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in [jar:file:/mnt/spark-current/jars/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]
  Error: Failed to load class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.
  Failed to load main class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.
  You need to build Spark with -Phive and -Phive-thriftserver.
  23/05/03 21:49:29.795 [shutdown-hook-0] INFO  o.a.spark.util.ShutdownHookManager - Shutdown hook called
  23/05/03 21:49:29.797 [shutdown-hook-0] INFO  o.a.spark.util.ShutdownHookManager - Deleting directory /tmp/spark-5dd56a34-24c4-41a5-9852-edbd29530e13
full log in /home/spark-current/logs/spark--org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-e7233bb8500d4c69ae2f1f0760957865000000.out</pre>
<p>The message "You need to build Spark with -Phive and -Phive-thriftserver" means our Spark distribution does not ship the hive and hive-thriftserver jars. The fix is to rebuild Spark yourself with the command below (<a href="https://jaceklaskowski.gitbooks.io/mastering-spark-sql/content/spark-sql-thrift-server.html">reference</a>), or, more simply, <a href="https://ithelp.ithome.com.tw/articles/10195061">package it into a tgz</a>:</p>
<pre class="lang:bash">./dev/make-distribution.sh --name "with-hive-without-hadoop" --tgz "-Pyarn,hive,hive-thriftserver"</pre>
<p>Re-running the same start-thriftserver.sh command then surfaced the error below; at this point the root cause was still not clear:</p>
<pre class="lang:bash">23/05/05 02:25:51 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
23/05/05 02:25:51 ERROR Inbox: Ignoring error
org.apache.spark.SparkException: Exiting due to error from cluster scheduler: Master removed our application: FAILED
	at org.apache.spark.errors.SparkCoreErrors$.clusterSchedulerError(SparkCoreErrors.scala:218)
	at org.apache.spark.scheduler.TaskSchedulerImpl.error(TaskSchedulerImpl.scala:923)
	at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.dead(StandaloneSchedulerBackend.scala:154)
	at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint.markDead(StandaloneAppClient.scala:262)
	at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$anonfun$receive$1.applyOrElse(StandaloneAppClient.scala:169)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:115)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$anon$1.run(MessageLoop.scala:41)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
23/05/05 02:25:51 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
23/05/05 02:25:51 INFO SparkUI: Stopped Spark web UI at http://e7233bb8500d4c69ae2f1f0760957865000000.internal.cloudapp.net:4040
23/05/05 02:25:51 INFO StandaloneSchedulerBackend: Shutting down all executors
23/05/05 02:25:51 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
Exception in thread "main" java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:958)
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
org.apache.spark.deploy.SparkSubmit$anon$2.doSubmit(SparkSubmit.scala:1046)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The currently active SparkContext was created at:

org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:958)
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
org.apache.spark.deploy.SparkSubmit$anon$2.doSubmit(SparkSubmit.scala:1046)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

	at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:120)
	at org.apache.spark.sql.SparkSession.<init>(SparkSession.scala:113)
	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:962)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:54)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:96)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:958)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
	at org.apache.spark.deploy.SparkSubmit$anon$2.doSubmit(SparkSubmit.scala:1046)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
23/05/05 02:25:51 INFO SparkContext: Invoking stop() from shutdown hook</pre>
<p>Retrying the thrift-server launch with the command shown next succeeded. It apparently defaults to port 10000 as well; the difference is that the endpoint is bound at 0.0.0.0:10000 rather than localhost:10000.</p>
<pre class="lang:bash">root@39006d6f91bd42cf99d50378e7a0eecb000000:/home/spark-current$ ./sbin/start-thriftserver.sh --master spark://10.0.0.4:7077
root@39006d6f91bd42cf99d50378e7a0eecb000000:/home/spark-current$ ./bin/beeline

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/spark-current/jars/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.3.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 2.3.9 by Apache Hive
beeline> 
beeline> !connect jdbc:hive2://0.0.0.0:10000
Connecting to jdbc:hive2://0.0.0.0:10000
Enter username for jdbc:hive2://0.0.0.0:10000: root
Enter password for jdbc:hive2://0.0.0.0:10000: ****
08:55:29.026 INFO  Utils - Supplied authorities: 0.0.0.0:10000
08:55:29.032 INFO  Utils - Resolved authority: 0.0.0.0:10000
Connected to: Spark SQL (version 3.3.2)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://0.0.0.0:10000> 
0: jdbc:hive2://0.0.0.0:10000> show tables;
+------------+------------+--------------+
| namespace  | tableName  | isTemporary  |
+------------+------------+--------------+
| default    | test       | false        |
| default    | test2      | false        |
+------------+------------+--------------+
4 rows selected (1.961 seconds)
0: jdbc:hive2://0.0.0.0:10000> select count(*) from test;
+-----------+
| count(1)  |
+-----------+
| 1759      |
+-----------+
1 row selected (77.111 seconds)
0: jdbc:hive2://0.0.0.0:10000> select count(*) from test2;
+-----------+
| count(1)  |
+-----------+
| 1759      |
+-----------+
1 row selected (38.882 seconds)</pre>
<p style="text-align: justify;">Other statements, however, fail with the error below; my hunch is that the result set is too large, causing HiveServer2 to time out.</p>
<pre class="lang:bash">0: jdbc:hive2://0.0.0.0:10000> select * from test;
org.apache.thrift.transport.TTransportException
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:374)
	at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:451)
	at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:433)
	at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:38)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:425)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:321)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:225)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
	at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:567)
	at org.apache.hive.service.rpc.thrift.TCLIService$Client.FetchResults(TCLIService.java:554)
	at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1524)
	at com.sun.proxy.$Proxy15.FetchResults(Unknown Source)
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
	at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:53)
	at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
	at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2192)
	at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1009)
	at org.apache.hive.beeline.Commands.execute(Commands.java:1205)
	at org.apache.hive.beeline.Commands.sql(Commands.java:1134)
	at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1314)
	at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1178)
	at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1033)
	at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:519)
	at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
Unknown HS2 problem when communicating with Thrift server.
Error: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed) (state=08S01,code=0)</pre>
<p>We tried launching the Thrift server with an extra Hive setting (hive.spark.client.server.connect.timeout=90000), but hit yet another problem. This one turned out to be the driver running out of memory, and raising the driver memory resolves it:</p>
<pre class="lang:bash">0: jdbc:hive2://0.0.0.0:10000> select * from test limit 500;
org.apache.thrift.TException: Error in calling method FetchResults
	at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1532)
	at com.sun.proxy.$Proxy15.FetchResults(Unknown Source)
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
	at org.apache.hive.beeline.BufferedRows.<init>(BufferedRows.java:53)
	at org.apache.hive.beeline.IncrementalRowsWithNormalization.<init>(IncrementalRowsWithNormalization.java:50)
	at org.apache.hive.beeline.BeeLine.print(BeeLine.java:2192)
	at org.apache.hive.beeline.Commands.executeInternal(Commands.java:1009)
	at org.apache.hive.beeline.Commands.execute(Commands.java:1205)
	at org.apache.hive.beeline.Commands.sql(Commands.java:1134)
	at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1314)
	at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1178)
	at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1033)
	at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:519)
	at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
Caused by: java.lang.OutOfMemoryError: Java heap space
	at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
	at java.lang.StringCoding.decode(StringCoding.java:193)
	at java.lang.String.<init>(String.java:426)
	at java.lang.String.<init>(String.java:491)
	at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
	at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:372)
	at org.apache.hive.service.rpc.thrift.TStringColumn$TStringColumnStandardScheme.read(TStringColumn.java:453)
	at org.apache.hive.service.rpc.thrift.TStringColumn$TStringColumnStandardScheme.read(TStringColumn.java:433)
	at org.apache.hive.service.rpc.thrift.TStringColumn.read(TStringColumn.java:367)
	at org.apache.hive.service.rpc.thrift.TColumn.standardSchemeReadValue(TColumn.java:331)
	at org.apache.thrift.TUnion$TUnionStandardScheme.read(TUnion.java:224)
	at org.apache.thrift.TUnion$TUnionStandardScheme.read(TUnion.java:213)
	at org.apache.thrift.TUnion.read(TUnion.java:138)
	at org.apache.hive.service.rpc.thrift.TRowSet$TRowSetStandardScheme.read(TRowSet.java:743)
	at org.apache.hive.service.rpc.thrift.TRowSet$TRowSetStandardScheme.read(TRowSet.java:695)
	at org.apache.hive.service.rpc.thrift.TRowSet.read(TRowSet.java:605)
	at org.apache.hive.service.rpc.thrift.TFetchResultsResp$TFetchResultsRespStandardScheme.read(TFetchResultsResp.java:522)
	at org.apache.hive.service.rpc.thrift.TFetchResultsResp$TFetchResultsRespStandardScheme.read(TFetchResultsResp.java:490)
	at org.apache.hive.service.rpc.thrift.TFetchResultsResp.read(TFetchResultsResp.java:412)
	at org.apache.hive.service.rpc.thrift.TCLIService$FetchResults_result$FetchResults_resultStandardScheme.read(TCLIService.java:16159)
	at org.apache.hive.service.rpc.thrift.TCLIService$FetchResults_result$FetchResults_resultStandardScheme.read(TCLIService.java:16144)
	at org.apache.hive.service.rpc.thrift.TCLIService$FetchResults_result.read(TCLIService.java:16091)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88)
	at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:567)
	at org.apache.hive.service.rpc.thrift.TCLIService$Client.FetchResults(TCLIService.java:554)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1524)
	at com.sun.proxy.$Proxy15.FetchResults(Unknown Source)
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
Error: org.apache.thrift.TException: Error in calling method CloseOperation (state=08S01,code=0)</pre>
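<p>The post does not show the final invocation. As a sketch (the flag and property names are real Spark and Beeline options, the values are illustrative), raising the driver memory and switching both sides to incremental fetches avoids buffering the whole result set at once:</p>
<pre class="lang:bash"># give the Thrift server driver more heap, and stream results
# incrementally instead of collecting them all in the driver
./sbin/start-thriftserver.sh \
  --master spark://10.0.0.4:7077 \
  --driver-memory 4g \
  --conf spark.sql.thriftServer.incrementalCollect=true

# fetch rows incrementally on the client side as well
./bin/beeline --incremental=true -u jdbc:hive2://0.0.0.0:10000 -n root</pre>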