Coming from a pandas background, I am used to reading data from CSV files into a dataframe and then simply changing the column names to something useful with a single command:

df.columns = new_column_name_list

However, the same doesn't work for a PySpark dataframe created with sqlContext. The only solution I could figure out is the following:

df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', inferschema='true', delimiter='\t').load("data.txt")
oldSchema = df.schema
for i,k in enumerate(oldSchema.fields):
  k.name = new_column_name_list[i]
df = sqlContext.read.format("com.databricks.spark.csv").options(header='false', delimiter='\t').load("data.txt", schema=oldSchema)

This basically defines the variable twice: first the schema is inferred, then the column names are renamed, and then the dataframe is loaded again with the updated schema.

Is there a better and more efficient way to do this, like we would in pandas?

My Spark version is 1.5.0.


Current answer

Here is the approach I use:

Create a PySpark session:

import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('changeColNames').getOrCreate()

Create the dataframe:

df = spark.createDataFrame(data = [('Bob', 5.62,'juice'),  ('Sue',0.85,'milk')], schema = ["Name", "Amount","Item"])

View the df with its current column names:

df.show()
+----+------+-----+
|Name|Amount| Item|
+----+------+-----+
| Bob|  5.62|juice|
| Sue|  0.85| milk|
+----+------+-----+

Create a list containing the new column names:

newcolnames = ['NameNew','AmountNew','ItemNew']

Rename the columns of the df:

for c,n in zip(df.columns,newcolnames):
    df=df.withColumnRenamed(c,n)

View the df with the new column names:

df.show()
+-------+---------+-------+
|NameNew|AmountNew|ItemNew|
+-------+---------+-------+
|    Bob|     5.62|  juice|
|    Sue|     0.85|   milk|
+-------+---------+-------+

Other answers

For a single column rename, you can still use toDF(). For example,

df1.selectExpr("SALARY*2").toDF("REVISED_SALARY").show()

df = df.withColumnRenamed("colName", "newColName")\
       .withColumnRenamed("colName2", "newColName2")

The advantage of this approach: with a very long list of columns, you only need to change a few of the column names, which is very convenient in those scenarios. It is also very useful when joining tables that have duplicate column names, as sketched below.
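As a minimal sketch of that duplicate-column case (the dataframes and column names here are made up for illustration, and the spark session created above is reused):

# Hypothetical example: two dataframes that share the column name "amount".
df_a = spark.createDataFrame([(1, 10.0)], ["id", "amount"])
df_b = spark.createDataFrame([(1, 99.0)], ["id", "amount"])

# Rename the ambiguous column on one side before joining,
# so the joined result has unambiguous column names.
joined = df_a.join(df_b.withColumnRenamed("amount", "amount_b"), on="id")
# joined.columns is now ['id', 'amount', 'amount_b']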

You can put this in a for loop and use zip to pair up the column names from the two lists.

new_name = ["id", "sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm", "species"]

new_df = df
for old, new in zip(df.columns, new_name):
    new_df = new_df.withColumnRenamed(old, new)

I like to use a dict to rename the df.

rename = {'old1': 'new1', 'old2': 'new2'}
for col in df.schema.names:
    # fall back to the original name for columns not listed in the dict
    df = df.withColumnRenamed(col, rename.get(col, col))

There are many ways to do this:

Option 1. Using selectExpr.

data = sqlContext.createDataFrame([("Alberto", 2), ("Dakota", 2)], ["Name", "askdaosdka"])
data.show()
data.printSchema()

# Output
#+-------+----------+
#|   Name|askdaosdka|
#+-------+----------+
#|Alberto|         2|
#| Dakota|         2|
#+-------+----------+
#root
# |-- Name: string (nullable = true)
# |-- askdaosdka: long (nullable = true)

df = data.selectExpr("Name as name", "askdaosdka as age")
df.show()
df.printSchema()

# Output
#+-------+---+
#|   name|age|
#+-------+---+
#|Alberto|  2|
#| Dakota|  2|
#+-------+---+
#root
# |-- name: string (nullable = true)
# |-- age: long (nullable = true)

Option 2. Using withColumnRenamed; notice that this method lets you "overwrite" the same column. For Python 3, replace xrange with range.

from functools import reduce

oldColumns = data.schema.names
newColumns = ["name", "age"]

df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], newColumns[idx]), xrange(len(oldColumns)), data)
df.printSchema()
df.show()

Option 3. Using alias; in Scala you can also use as.

from pyspark.sql.functions import col

data = data.select(col("Name").alias("name"), col("askdaosdka").alias("age"))
data.show()

# Output
#+-------+---+
#|   name|age|
#+-------+---+
#|Alberto|  2|
#| Dakota|  2|
#+-------+---+

Option 4. Using sqlContext.sql, which lets you use SQL queries on DataFrames registered as tables.

sqlContext.registerDataFrameAsTable(data, "myTable")
df2 = sqlContext.sql("SELECT Name AS name, askdaosdka as age from myTable")
df2.show()

# Output
#+-------+---+
#|   name|age|
#+-------+---+
#|Alberto|  2|
#| Dakota|  2|
#+-------+---+