How to resolve AnalysisException: resolved attribute(s) in Spark

val rdd = sc.parallelize(Seq(("vskp", Array(2.0, 1.0, 2.1, 5.4)),("hyd",Array(1.5, 0.5, 0.9, 3.7)),("hyd", Array(1.5, 0.5, 0.9, 3.2)),("tvm", Array(8.0, 2.9, 9.1, 2.5))))

val df1= rdd.toDF("id", "vals")

val rdd1 = sc.parallelize(Seq(("vskp","ap"),("hyd","tel"),("bglr","kkt")))

val df2 = rdd1.toDF("id", "state")

val df3 = df1.join(df2,df1("id")===df2("id"),"left")

The join operation works fine, but when I reuse df2 I run into an unresolved attributes error:

val rdd2 = sc.parallelize(Seq(("vskp", "Y"),("hyd", "N"),("hyd", "N"),("tvm", "Y")))

val df4 = rdd2.toDF("id","existance")

val df5 = df4.join(df2,df4("id")===df2("id"),"left")

Error: org.apache.spark.sql.AnalysisException: resolved attribute(s) id#426

Answer:

As mentioned in my comment, this is related to https://issues.apache.org/jira/browse/SPARK-10925 and, more specifically, https://issues.apache.org/jira/browse/SPARK-14948. Reusing the same reference creates ambiguity in the attribute naming, so you have to clone the DataFrame -

for an example, see the last comment on https://issues.apache.org/jira/browse/SPARK-14948.
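A minimal sketch of that cloning workaround, assuming a spark-shell session where sc and the DataFrames from the question are already defined; the name df2Clone is my own:

// Recreating df2 via toDF with the same column names assigns fresh attribute IDs,
// so the second join no longer references the same expression IDs on both sides.
val df2Clone = df2.toDF(df2.columns: _*)

val df5 = df4.join(df2Clone, df4("id") === df2Clone("id"), "left")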

Source: utcz.com/qa/415220.html
