pyspark: jar dependencies with spark-submit

I wrote a pyspark script that reads two JSON files, coGroups them, and sends the result to an Elasticsearch cluster. Everything works (mostly) as expected when I run it locally: I downloaded the elasticsearch-hadoop jar file containing the org.elasticsearch.hadoop.mr.EsOutputFormat and org.elasticsearch.hadoop.mr.LinkedMapWritable classes, ran my job with pyspark passing that jar via the --jars argument, and I could see the documents appearing in my Elasticsearch cluster.
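The write path in question can be sketched roughly like this; the host, port, index name, and the two helper functions are my own illustrative assumptions, not the original script (only the EsOutputFormat/LinkedMapWritable class names and the es_write_conf variable come from the question):

```python
# Hypothetical sketch of writing an RDD to Elasticsearch via the
# elasticsearch-hadoop connector. Hadoop job configuration values
# must all be strings.
def build_es_write_conf(host, port, resource):
    """Build the conf dict passed to saveAsNewAPIHadoopFile."""
    return {
        "es.nodes": host,
        "es.port": str(port),
        "es.resource": resource,  # "index/type"
    }

def save_to_es(rdd, es_write_conf):
    # Fails with ClassNotFoundException unless the elasticsearch-hadoop
    # jar is on the executors' classpath (the --jars problem below).
    rdd.saveAsNewAPIHadoopFile(
        path="-",  # ignored by EsOutputFormat
        outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
        keyClass="org.apache.hadoop.io.NullWritable",
        valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
        conf=es_write_conf,
    )
```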

However, when I try to run it on a Spark cluster, I get this error:

Traceback (most recent call last):
  File "/root/spark/spark_test.py", line 141, in <module>
    conf=es_write_conf
  File "/root/spark/python/pyspark/rdd.py", line 1302, in saveAsNewAPIHadoopFile
    keyConverter, valueConverter, jconf)
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: java.lang.ClassNotFoundException: org.elasticsearch.hadoop.mr.LinkedMapWritable
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:157)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:611)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:610)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:610)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:609)
    at scala.Option.flatMap(Option.scala:170)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueTypes(PythonRDD.scala:609)
    at org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:701)
    at org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

This seems clear to me: the elasticsearch-hadoop jar is not available to the workers. The question is: how do I ship it along with my application? I could use sc.addPyFile for Python dependencies, but that doesn't work with jars, and using the --jars parameter of spark-submit didn't help either.

Answer:

--jars just works; the problem was how I was invoking spark-submit in the first place. The correct way to run it is:

./bin/spark-submit <options> scriptname

so the --jars option must be placed before the script name:

./bin/spark-submit --jars /path/to/my.jar myscript.py

This becomes obvious once you realize that this is the only way to pass arguments to the script itself, since everything after the script name is treated as an input argument to the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py --do-magic=true

Source: utcz.com/qa/433633.html
