Tags: pyspark, apache-spark-ml, pmml

PySpark to PMML - "Field label does not exist" error


I am new to PySpark, so this might be a basic question. I am trying to export a PySpark pipeline to PMML using the JPMML-SparkML library. When running an example from the JPMML-SparkML website:

from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import RFormula

df = spark.read.csv("Iris.csv", header = True, inferSchema = True)
formula = RFormula(formula = "Species ~ .")
classifier = DecisionTreeClassifier()
pipeline = Pipeline(stages = [formula, classifier])
pipelineModel = pipeline.fit(df)

I am getting the error Field "label" does not exist. The same error pops up when running the Scala code from the same page. Does anyone know what this "label" field refers to? It seems to be something hidden in the Spark code executed in the background. I doubt that this "label" field is part of the Iris data set.

Complete error message:

Traceback (most recent call last):
  File "/usr/lib/spark/spark-2.1.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/spark/spark-2.1.1-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o48.fit.
: java.lang.IllegalArgumentException: Field "label" does not exist.
	at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
	at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
	at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
	at scala.collection.AbstractMap.getOrElse(Map.scala:59)
	at org.apache.spark.sql.types.StructType.apply(StructType.scala:263)
	at org.apache.spark.ml.util.SchemaUtils$.checkNumericType(SchemaUtils.scala:71)
	at org.apache.spark.ml.PredictorParams$class.validateAndTransformSchema(Predictor.scala:53)
	at org.apache.spark.ml.classification.Classifier.org$apache$spark$ml$classification$ClassifierParams$$super$validateAndTransformSchema(Classifier.scala:58)
	at org.apache.spark.ml.classification.ClassifierParams$class.validateAndTransformSchema(Classifier.scala:42)
	at org.apache.spark.ml.classification.ProbabilisticClassifier.org$apache$spark$ml$classification$ProbabilisticClassifierParams$$super$validateAndTransformSchema(ProbabilisticClassifier.scala:53)
	at org.apache.spark.ml.classification.ProbabilisticClassifierParams$class.validateAndTransformSchema(ProbabilisticClassifier.scala:37)
	at org.apache.spark.ml.classification.ProbabilisticClassifier.validateAndTransformSchema(ProbabilisticClassifier.scala:53)
	at org.apache.spark.ml.Predictor.transformSchema(Predictor.scala:122)
	at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
	at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:745)

Thanks, Michal


Solution

  • The classifier looks for its target column under the default name label. Either alias the target column in the DataFrame as 'label' and use the Classifier as-is, or pass the column name via the labelCol argument in the Classifier's constructor:

    classifier = DecisionTreeClassifier(labelCol='some prediction field')