Pyspark: parsing a column of JSON strings

I have a pyspark dataframe consisting of one column, called json, where each row is a unicode string of JSON. I'd like to parse each row and return a new dataframe where each row is the parsed JSON.

# Sample Data Frame
jstr1 = u'{"header":{"id":12345,"foo":"bar"},"body":{"id":111000,"name":"foobar","sub_json":{"id":54321,"sub_sub_json":{"col1":20,"col2":"somethong"}}}}'
jstr2 = u'{"header":{"id":12346,"foo":"baz"},"body":{"id":111002,"name":"barfoo","sub_json":{"id":23456,"sub_sub_json":{"col1":30,"col2":"something else"}}}}'
jstr3 = u'{"header":{"id":43256,"foo":"foobaz"},"body":{"id":20192,"name":"bazbar","sub_json":{"id":39283,"sub_sub_json":{"col1":50,"col2":"another thing"}}}}'

df = sql_context.createDataFrame([Row(json=jstr1), Row(json=jstr2), Row(json=jstr3)])

I have tried using json.loads:

(df
  .select('json')
  .rdd
  .map(lambda x: json.loads(x))
  .toDF()
).show()

But this returns a TypeError: expected string or buffer
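For context, the TypeError arises because the elements of df.select('json').rdd are Row objects, not plain strings, so json.loads is handed a Row. Pulling the string field out first (e.g. lambda x: json.loads(x.json)) avoids it. A minimal pure-Python sketch of the issue, using a namedtuple as a hypothetical stand-in for pyspark's Row:

```python
import json
from collections import namedtuple

# Hypothetical stand-in for pyspark.sql.Row, which also behaves like a named tuple
Row = namedtuple("Row", ["json"])
r = Row(json='{"a": 1}')

# Passing the whole Row to json.loads raises TypeError, because it is not a string
try:
    json.loads(r)
except TypeError:
    pass  # "expected string or buffer" (Python 2) / "the JSON object must be str, ..." (Python 3)

# Extract the string field first, then parse
parsed = json.loads(r.json)
print(parsed)  # {'a': 1}
```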

I suspect that part of the problem is that the schema information is lost when converting from a dataframe to an RDD, so I also tried supplying the schema manually:

schema = StructType([StructField('json', StringType(), True)])
rdd = (df
  .select('json')
  .rdd
  .map(lambda x: json.loads(x))
)
new_df = sql_context.createDataFrame(rdd, schema)
new_df.show()

But I get the same TypeError.

Looking at this answer, it seemed like flattening the rows with flatMap might be useful here, but I haven't had success with that either:

schema = StructType([StructField('json', StringType(), True)])
rdd = (df
  .select('json')
  .rdd
  .flatMap(lambda x: x)
  .flatMap(lambda x: json.loads(x))
  .map(lambda x: x.get('body'))
)
new_df = sql_context.createDataFrame(rdd, schema)
new_df.show()

This gives the error: AttributeError: 'unicode' object has no attribute 'get'
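This AttributeError comes from the second flatMap: json.loads returns a dict, and flattening a dict iterates over its keys, which are plain strings. A pure-Python sketch of what happens:

```python
import json

doc = json.loads('{"header": {"id": 1}, "body": {"id": 2}}')

# flatMap treats the parsed dict as an iterable, and iterating a dict yields its KEYS
keys = list(doc)
print(keys)  # ['header', 'body']

# Each key is a plain string, so calling .get('body') on it raises AttributeError
assert all(isinstance(k, str) for k in keys)
```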

Answer:

Converting a dataframe of JSON strings into a structured dataframe is actually quite simple in Spark, if you convert the dataframe to an RDD of strings beforehand (see: http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets).

For example:

>>> new_df = sql_context.read.json(df.rdd.map(lambda r: r.json))
>>> new_df.printSchema()
root
 |-- body: struct (nullable = true)
 |    |-- id: long (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- sub_json: struct (nullable = true)
 |    |    |-- id: long (nullable = true)
 |    |    |-- sub_sub_json: struct (nullable = true)
 |    |    |    |-- col1: long (nullable = true)
 |    |    |    |-- col2: string (nullable = true)
 |-- header: struct (nullable = true)
 |    |-- foo: string (nullable = true)
 |    |-- id: long (nullable = true)
