pyspark, apache-spark-mllib

How to load spark saved pipeline and retrain with new data


I want to load a saved pipeline with Spark and then re-fit it with new data collected day by day. Here is my current code:

import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

new_data_df = ...  # data collected for the current day
if os.path.exists("path/to/pipeline"):  # a previously saved pipeline exists
  model = PipelineModel.load("path/to/pipeline")
  first_round = model.transform(new_data_df)
  evaluator = BinaryClassificationEvaluator()
  evaluator.evaluate(first_round)
else:
  assembler = VectorAssembler().setInputCols(ft_cols).setOutputCol('features')
  lr = LogisticRegression(maxIter=150, elasticNetParam=0.3, regParam=0.01,
                          labelCol=target, featuresCol='features',
                          standardization=False, predictionCol='prediction')
  model = Pipeline().setStages([assembler, lr])

trained_model = model.fit(new_data_df)  # fails when model is a loaded PipelineModel

lrm = trained_model.stages[-1]
trainingSummary = lrm.summary
objectiveHistory = trainingSummary.objectiveHistory
trained_model.save("path/to/model/current date")

My issue is with the loading part. If I load with PipelineModel, I get an error saying there is no fit() method. If I instead use Pipeline.load(), loading fails with "Error loading metadata: Expected class name org.apache.spark.ml.Pipeline but found class name org.apache.spark.ml.PipelineModel". So my question is: is there any way to achieve the incremental learning I want?
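For reference, both errors follow from the Pipeline/PipelineModel split in Spark ML: a Pipeline is an Estimator (it has fit() but no transform()), a PipelineModel is a Transformer (it has transform() but no fit()), and each loader checks the class name recorded in the saved metadata. A minimal sketch of the matching save/load pairs, using placeholder column names, paths, and an assumed new_data_df DataFrame:

from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Hypothetical columns and paths, for illustration only.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(labelCol="label", featuresCol="features")

# A Pipeline is an Estimator: it round-trips through Pipeline.load() and can be fit again.
pipeline = Pipeline(stages=[assembler, lr])
pipeline.save("path/to/pipeline_definition")
reloaded_pipeline = Pipeline.load("path/to/pipeline_definition")
refit_model = reloaded_pipeline.fit(new_data_df)        # works

# A PipelineModel is a Transformer: it round-trips through PipelineModel.load(),
# but only transform() is available, and Pipeline.load() rejects its metadata.
refit_model.save("path/to/pipeline_model")
reloaded_model = PipelineModel.load("path/to/pipeline_model")
predictions = reloaded_model.transform(new_data_df)     # works; fit() does not exist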


Solution

  • After searching and experimenting for a long time, I believe there is currently no way to re-fit a saved PipelineModel incrementally without changing the Spark source code. Correct me if I'm wrong.
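A workaround that avoids the error (though it is a full retrain, not true incremental learning) is to persist the unfitted Pipeline definition alongside each day's fitted PipelineModel, then reload the Pipeline and re-fit it on the accumulated data. A rough sketch, where PIPELINE_PATH, MODEL_PATH, old_data_df, and new_data_df are placeholder names, not anything from the original post:

from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.evaluation import BinaryClassificationEvaluator

PIPELINE_PATH = "path/to/pipeline_definition"   # unfitted Pipeline, saved once
MODEL_PATH = "path/to/model/current_date"       # fitted PipelineModel, saved daily

# Evaluate yesterday's fitted model on today's data (transform only, no fit()).
yesterday_model = PipelineModel.load("path/to/model/previous_date")
evaluator = BinaryClassificationEvaluator()
print(evaluator.evaluate(yesterday_model.transform(new_data_df)))

# Re-fit the unfitted Pipeline on the accumulated data; this is a full retrain,
# not a warm start from the previous day's coefficients.
pipeline = Pipeline.load(PIPELINE_PATH)
all_data_df = old_data_df.unionByName(new_data_df)
trained_model = pipeline.fit(all_data_df)
trained_model.write().overwrite().save(MODEL_PATH)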