
DataFrame API Notes


http://spark.apache.org/docs/latest/sql-getting-started.html
The official documentation covers this in detail.
What follows are just excerpts of the parts I consider essential.

Step 1: Starting Point: SparkSession

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()

// For implicit conversions like converting RDDs to DataFrames
import spark.implicits._
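As a minimal sketch of what that import enables (the collection and column names below are invented for illustration), a local Seq can be converted straight to a DataFrame:

// Hypothetical data; works because spark.implicits._ is in scope
val langDF = Seq(("Scala", 2004), ("Spark", 2014)).toDF("name", "year")
langDF.show()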

Step 2: Creating DataFrames

val df = spark.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+

Untyped Dataset Operations (aka DataFrame Operations):


studentDF.sort($"name".desc, $"id".desc).show()   // sort by name, then by id (both descending)
studentDF.select($"phone".as("mobile")).show()    // column alias
stu1.join(stu2, stu1("id") === stu2("id"), "<join type>").show()   // join; the third argument is the join type string
stuDF.take(3).foreach(println)
stuDF.first    // first calls head, and head calls head(1)
stuDF.head
stuDF.head(3)
stuDF.filter("name = '' or name = 'null'").show   // where() also delegates to filter under the hood
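A minimal runnable sketch of the join shown above, with two hypothetical DataFrames built from local Seqs (the names, columns, and data are invented for illustration):

// Hypothetical student data; requires the SparkSession and spark.implicits._ from above
val stu1 = Seq((1, "Andy"), (2, "Justin")).toDF("id", "name")
val stu2 = Seq((1, "13800000000"), (3, "13900000000")).toDF("id", "phone")
// The third argument is the join type, e.g. "inner", "left_outer", "full_outer"
stu1.join(stu2, stu1("id") === stu2("id"), "left_outer").show()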
// This import is needed to use the $-notation
import spark.implicits._
// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// +-------+
// |   name|
// +-------+
// |Michael|
// |   Andy|
// | Justin|
// +-------+

// Select everybody, but increment the age by 1
df.select($"name", $"age" + 1).show()        // requires the implicit conversions import
df.select(df("name"), df("age") + 10).show() // this form also works
// +-------+---------+
// |   name|(age + 1)|
// +-------+---------+
// |Michael|     null|
// |   Andy|       31|
// | Justin|       20|
// +-------+---------+

// Select people older than 21
df.filter($"age" > 21).show()
// +---+----+
// |age|name|
// +---+----+
// | 30|Andy|
// +---+----+

// Count people by age
df.groupBy("age").count().show()
// +----+-----+
// | age|count|
// +----+-----+
// |  19|    1|
// |null|    1|
// |  30|    1|
// +----+-----+

Interoperating with RDDs:

Spark SQL supports two different methods for converting existing RDDs into Datasets. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection-based approach leads to more concise code and works well when you already know the schema while writing your Spark application.

The second method for creating Datasets is through a programmatic interface that lets you construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct Datasets when the columns and their types are not known until runtime.

Inferring the Schema Using Reflection:
The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame. The case class defines the schema of the table: the parameter names of the case class are read using reflection and become the column names. Case classes can also be nested or contain complex types such as Seqs or Arrays. Such an RDD can be implicitly converted to a DataFrame and then registered as a table, and that table can be used in subsequent SQL statements.
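A minimal sketch of the nested/complex-type point (the case classes, fields, and data below are invented for illustration): a nested case class becomes a struct column and a Seq field becomes an array column:

// Hypothetical nested case classes; requires the SparkSession and spark.implicits._ from above
case class Address(city: String, zip: String)
case class Employee(name: String, address: Address, phones: Seq[String])

val empDF = Seq(Employee("Andy", Address("Beijing", "100000"), Seq("138xxxx", "139xxxx"))).toDF()
empDF.printSchema()   // address shows up as a struct column, phones as an array column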

case class Person(name: String, age: Long)

// For implicit conversions from RDDs to DataFrames
import spark.implicits._

// Create an RDD of Person objects from a text file, convert it to a DataFrame
val peopleDF = spark.sparkContext
  .textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
  .toDF()

// Register the DataFrame as a temporary view
peopleDF.createOrReplaceTempView("people")

// SQL statements can be run by using the sql methods provided by Spark
val teenagersDF = spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19")

// The columns of a row in the result can be accessed by field index
teenagersDF.map(teenager => "Name: " + teenager(0)).show()
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+

// or by field name
teenagersDF.map(teenager => "Name: " + teenager.getAs[String]("name")).show()
// +------------+
// |       value|
// +------------+
// |Name: Justin|
// +------------+

// No pre-defined encoders for Dataset[Map[K,V]], define explicitly
implicit val mapEncoder = org.apache.spark.sql.Encoders.kryo[Map[String, Any]]
// Primitive types and case classes can be also defined as
// implicit val stringIntMapEncoder: Encoder[Map[String, Any]] = ExpressionEncoder()

// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagersDF.map(teenager => teenager.getValuesMap[Any](List("name", "age"))).collect()
// Array(Map("name" -> "Justin", "age" -> 19))

Programmatically Specifying the Schema:
When case classes cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and the fields projected differently for different users), a DataFrame can be created programmatically in three steps:

  • 1. Create an RDD of Rows from the original RDD;
  • 2. Create the schema, represented by a StructType, that matches the structure of the Rows in the RDD created in step 1;
  • 3. Apply the schema to the RDD of Rows via the createDataFrame method provided by SparkSession.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Create an RDD
val peopleRDD = spark.sparkContext.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Generate the schema based on the string of schema
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

// Convert records of the RDD (people) to Rows
val rowRDD = peopleRDD
  .map(_.split(","))
  .map(attributes => Row(attributes(0), attributes(1).trim))

// Apply the schema to the RDD
val peopleDF = spark.createDataFrame(rowRDD, schema)

// Creates a temporary view using the DataFrame
peopleDF.createOrReplaceTempView("people")

// SQL can be run over a temporary view created using DataFrames
val results = spark.sql("SELECT name FROM people")

// The results of SQL queries are DataFrames and support all the normal RDD operations
// The columns of a row in the result can be accessed by field index or by field name
results.map(attributes => "Name: " + attributes(0)).show()
// +-------------+
// |        value|
// +-------------+
// |Name: Michael|
// |   Name: Andy|
// | Name: Justin|
// +-------------+
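A small variation on the sketch above, assuming the column types are only decided at runtime (the typedSchema / typedRowRDD names here are hypothetical): pick a DataType per field and convert the raw strings to matching values when building the Rows:

// Hypothetical: same people data, but with age declared as an integer column
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val typedSchema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", IntegerType, nullable = true)))

// The raw values must match the declared types when building the Rows
val typedRowRDD = peopleRDD
  .map(_.split(","))
  .map(attributes => Row(attributes(0), attributes(1).trim.toInt))

val typedPeopleDF = spark.createDataFrame(typedRowRDD, typedSchema)
typedPeopleDF.printSchema()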
