
Spark 1.4.0 org.apache.spark.sql.AnalysisException: cannot resolve 'probability' given input columns


I am currently using Spark 1.4.0 and have started working with the ML pipeline framework.

I ran the example program ml.JavaSimpleTextClassificationPipeline, which uses LogisticRegression. Since I want to do multiclass classification, I switched to the DecisionTreeClassifier provided in the org.apache.spark.ml.classification package.

The model trains fine with the fit method, but when I test it using the print statement from that example, I get the error below saying that the column "probability" does not exist.

Is this column produced only by LogisticRegression? If so, which columns are available in the output of a DecisionTreeClassifier prediction?

Also, since I am using StringIndexer, how do I convert the predicted output back into its String form?

org.apache.spark.sql.AnalysisException: cannot resolve 'probability' given input columns id, prediction, labelStr, data, features, words, label;
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:63) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:52) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:286) 
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:286) 
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:285) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1(QueryPlan.scala:108) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2$$anonfun$apply$2.apply(QueryPlan.scala:123) 
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) 
    at scala.collection.AbstractTraversable.map(Traversable.scala:105) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:122) 
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) 
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) 
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) 
    at scala.collection.AbstractIterator.to(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) 
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) 
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) 
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) 
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:127) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:52) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50) 
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:98) 
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50) 
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:42) 
    at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:920) 
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131) 
    at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$logicalPlanToDataFrame(DataFrame.scala:154) 
    at org.apache.spark.sql.DataFrame.select(DataFrame.scala:595) 
    at org.apache.spark.sql.DataFrame.select(DataFrame.scala:611) 
    at org.apache.spark.sql.DataFrame.select(DataFrame.scala:611) 
    at com.xxx.ml.xxx.execute(xxx.java:129) 

That is because "probability" is not a column in your DataFrame; the error message lists the columns that actually exist (id, prediction, labelStr, data, features, words, label). In Spark 1.4.0 the DecisionTreeClassifier only adds a "prediction" column, so either select only the columns that exist or add a probability field yourself if you need one.
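
If it helps, here is a minimal sketch (not the original poster's code) of how the print loop from JavaSimpleTextClassificationPipeline could be adapted: select only the columns the error message says exist, and map the numeric prediction back to its string label through the fitted StringIndexerModel. The names model, test, and labelIndexer are assumptions, and the labels() getter is only public from Spark 1.5 on:

    import org.apache.spark.ml.PipelineModel;
    import org.apache.spark.ml.feature.StringIndexerModel;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.Row;

    // "model" is the fitted PipelineModel, "test" the test DataFrame, and
    // "labelIndexer" the fitted StringIndexerModel -- all hypothetical names.
    DataFrame predictions = model.transform(test);

    // Select only columns that exist in the output; per the error message,
    // the DecisionTreeClassifier adds "prediction" but no "probability".
    for (Row r : predictions.select("id", "data", "prediction").collect()) {
        double predictedIndex = r.getDouble(2);
        // Map the numeric index back to the original string label.
        // StringIndexerModel.labels() is public from Spark 1.5 onward; on
        // 1.4.0, keep the ordered label array you built the indexer from.
        String predictedLabel = labelIndexer.labels()[(int) predictedIndex];
        System.out.println("(" + r.get(0) + ", " + r.get(1) + ") --> "
            + predictedLabel);
    }

From Spark 1.5.0 onward there is also an IndexToString transformer that reverses a StringIndexer inside the pipeline itself, which avoids the manual lookup.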
