
Pyspark Loses Metadata After MinMaxScaler


I'm using the student data set from: https://archive.ics.uci.edu/ml/machine-learning-databases/00320/

If I scale the features in the pipeline, the bulk of the metadata I need later is lost. Below is the basic setup without scaling, which produces the metadata; the scaling stages are commented out for easy replication. I'm selecting the numeric and categorical columns I want to use for the model.

# load data
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('student-performance').getOrCreate()
df_raw = spark.read.options(delimiter=';', header=True, inferSchema=True).csv('student-mat.csv')

# specify columns and filter
cols_cate = ['school', 'sex', 'Pstatus', 'Mjob', 'Fjob', 'famsup', 'activities', 'higher', 'internet', 'romantic']
cols_num = ['age', 'Medu', 'Fedu', 'studytime', 'failures', 'famrel', 'goout', 'Dalc', 'Walc', 'health', 'absences', 'G1', 'G2']
col_label = ['G3']
keep = cols_cate + cols_num + col_label
df_keep = df_raw.select(keep)

# setup pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, MinMaxScaler
cols_assembly = []
stages = []
for col in cols_cate:
    string_index = StringIndexer(inputCol=col, outputCol=col+'-indexed')
    encoder = OneHotEncoder(inputCol=string_index.getOutputCol(), outputCol=col+'-encoded')
    cols_assembly.append(encoder.getOutputCol())
    stages += [string_index, encoder]
# assemble vectors
assembler_input = cols_assembly + cols_num
assembler = VectorAssembler(inputCols=assembler_input, outputCol='features')
stages += [assembler]
# MinMaxScaler option - will need to change 'features' -> 'scaled-features' later
#scaler = MinMaxScaler(inputCol='features', outputCol='scaled-features')
#stages += [scaler]

# apply pipeline
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=stages)
pipelineModel = pipeline.fit(df_keep)
df_pipe = pipelineModel.transform(df_keep)
cols_selected = ['features'] + cols_cate + cols_num + ['G3']
df_pipe = df_pipe.select(cols_selected)

Make the training data, fit a model, and get predictions.

from pyspark.ml.regression import LinearRegression
train, test = df_pipe.randomSplit([0.7, 0.3], seed=14)
lr = LinearRegression(featuresCol='features',labelCol='G3', maxIter=10, regParam=0.3, elasticNetParam=0.8)
lrModel = lr.fit(train)
lr_preds = lrModel.transform(test)

Checking the metadata of the "features" column, there is a lot of information:

lr_preds.schema['features'].metadata

Output:

{'ml_attr': {'attrs': {'numeric': [{'idx': 16, 'name': 'age'},
    {'idx': 17, 'name': 'Medu'},
    {'idx': 18, 'name': 'Fedu'},
    {'idx': 19, 'name': 'studytime'},
    {'idx': 20, 'name': 'failures'},
    {'idx': 21, 'name': 'famrel'},
    {'idx': 22, 'name': 'goout'},
    {'idx': 23, 'name': 'Dalc'},
    {'idx': 24, 'name': 'Walc'},
    {'idx': 25, 'name': 'health'},
    {'idx': 26, 'name': 'absences'},
    {'idx': 27, 'name': 'G1'},
    {'idx': 28, 'name': 'G2'}],
   'binary': [{'idx': 0, 'name': 'school-encoded_GP'},
    {'idx': 1, 'name': 'sex-encoded_F'},
    {'idx': 2, 'name': 'Pstatus-encoded_T'},
    {'idx': 3, 'name': 'Mjob-encoded_other'},
    {'idx': 4, 'name': 'Mjob-encoded_services'},
    {'idx': 5, 'name': 'Mjob-encoded_at_home'},
    {'idx': 6, 'name': 'Mjob-encoded_teacher'},
    {'idx': 7, 'name': 'Fjob-encoded_other'},
    {'idx': 8, 'name': 'Fjob-encoded_services'},
    {'idx': 9, 'name': 'Fjob-encoded_teacher'},
    {'idx': 10, 'name': 'Fjob-encoded_at_home'},
    {'idx': 11, 'name': 'famsup-encoded_yes'},
    {'idx': 12, 'name': 'activities-encoded_yes'},
    {'idx': 13, 'name': 'higher-encoded_yes'},
    {'idx': 14, 'name': 'internet-encoded_yes'},
    {'idx': 15, 'name': 'romantic-encoded_no'}]},
  'num_attrs': 29}}
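This metadata maps each slot of the assembled vector back to a source column, which is what makes it useful later (e.g., for naming model coefficients). As a minimal sketch, the index-to-name mapping can be recovered from the dict above with plain Python (truncated here to a few attributes for illustration):

```python
# metadata dict as returned by lr_preds.schema['features'].metadata,
# truncated to a few attributes for illustration
meta = {'ml_attr': {'attrs': {
    'numeric': [{'idx': 16, 'name': 'age'},
                {'idx': 17, 'name': 'Medu'}],
    'binary': [{'idx': 0, 'name': 'school-encoded_GP'},
               {'idx': 1, 'name': 'sex-encoded_F'}]},
    'num_attrs': 29}}

# flatten the numeric/binary groups and sort by vector index
attrs = [a for group in meta['ml_attr']['attrs'].values() for a in group]
names = [a['name'] for a in sorted(attrs, key=lambda a: a['idx'])]
print(names)  # ['school-encoded_GP', 'sex-encoded_F', 'age', 'Medu']
```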

If I add scaling after the VectorAssembler (commented out above) in the pipeline, retrain, and make predictions again, all of this metadata is lost.

lr_preds.schema['scaled-features'].metadata

Output:

{'ml_attr': {'num_attrs': 29}}

Is there any way to get this metadata back? Thanks in advance!


Solution

  • mck's suggestion of using 'features' from lr_preds works to get the metadata; it's unchanged. Thank you.

    the column features should remain in the dataframe lr_preds, maybe you can get it from that column instead?