I am trying to run a custom transformer, FunctionTransformer(), within a Scikit-Learn pipeline in Python 2.7. I have used the example from the documentation here. This example performs a PCA and then selects only the 2nd transformed component, i.e. it transforms a NumPy array X and extracts the 2nd column of the transformed array.
Below is the full working code, including the changes I made to the official documentation's example:
import matplotlib.pyplot as plt
import numpy as np

from sklearn.cross_validation import train_test_split
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer


def _generate_vector(shift=0.5, noise=15):
    return np.arange(1000) + (np.random.rand(1000) - shift) * noise


def generate_dataset():
    """
    This dataset is two lines with a slope ~ 1, where one has
    a y offset of ~100
    """
    return np.vstack((
        np.vstack((
            _generate_vector(),
            _generate_vector() + 100,
        )).T,
        np.vstack((
            _generate_vector(),
            _generate_vector(),
        )).T,
    )), np.hstack((np.zeros(1000), np.ones(1000)))


def all_but_first_column(X):
    # Keep every column except the first one
    return X[:, 1:]


def drop_first_component(X, y):
    """
    Create a pipeline with only the column selector (I removed the
    PCA step from the documentation example) and use it to
    transform the dataset.
    """
    pipeline = make_pipeline(
        FunctionTransformer(all_but_first_column),
    )
    pipeline.fit(X, y)
    return pipeline.transform(X), y


if __name__ == '__main__':
    X, y = generate_dataset()
    print "Before pipeline:"
    print X[:20, :]
    X_transformed, y_transformed = drop_first_component(*generate_dataset())
    print "After pipeline:"
    print X_transformed[:20, :]
When I run this code, I get the following output:
Before pipeline:
[[ -9.54109780e-01 1.00849257e+02]
[ -6.44868525e+00 9.89713451e+01]
[ 6.00611903e+00 9.86368545e+01]
[ -1.02307489e-01 9.91617270e+01]
[ 1.12423836e+01 1.04240711e+02]
[ 6.94957296e+00 1.09557543e+02]
[ 5.41042855e+00 1.09859950e+02]
[ 9.54984210e-01 1.03636786e+02]
[ 1.11194327e+01 1.06942524e+02]
[ 1.32146748e+01 1.16489221e+02]
[ 1.72316993e+01 1.16995924e+02]
[ 1.22797187e+01 1.08568249e+02]
[ 1.14360695e+01 1.06799741e+02]
[ 1.75291161e+01 1.13610682e+02]
[ 1.38768685e+01 1.07815267e+02]
[ 1.29773817e+01 1.12404830e+02]
[ 1.54218007e+01 1.11786074e+02]
[ 1.73923980e+01 1.19284226e+02]
[ 1.97373775e+01 1.16807048e+02]
[ 1.26896716e+01 1.26467393e+02]]
After pipeline:
[[ 94.35392453]
[ 107.08036958]
[ 96.42404642]
[ 96.07304368]
[ 109.33207232]
[ 102.67435761]
[ 106.34131846]
[ 108.45857447]
[ 105.33376831]
[ 107.79576699]
[ 110.71367112]
[ 116.73589447]
[ 117.74629814]
[ 112.48947773]
[ 109.7573836 ]
[ 121.95472733]
[ 119.62476775]
[ 120.0264124 ]
[ 115.00315794]
[ 120.60368954]]
This Github post mentions that FunctionTransformer() can be used for simple things like this. All I am hoping to do is drop one column inside a pipeline.
The X before and after the pipeline are different. If all the pipeline does is drop the first column of X, shouldn't the column it returns be identical to the 2nd column of the original X?
Additional information (if necessary):
In my final application, I will need to use the transformer as the first step in the pipeline and then PCA() as the second step. Therefore, I am first testing the pipeline in this post with only the first step, FunctionTransformer().
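For reference, the eventual two-step pipeline would look roughly like this (a sketch only; n_components is a placeholder here, since on this toy data only one column survives the selector):

from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Sketch of the final pipeline: drop the first column, then run PCA
final_pipeline = make_pipeline(
    FunctionTransformer(all_but_first_column),
    PCA(n_components=1),  # placeholder: one column remains after the selector
)
final_pipeline.fit(X, y)
X_reduced = final_pipeline.transform(X)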
You're making two calls to generate_dataset(), so the matrix being processed by your drop_first_component function is not X, y, but some newly generated data.
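You can see this directly: _generate_vector draws fresh noise from np.random.rand on every call, so two calls to generate_dataset() can never return the same matrix. A quick check, using the functions defined above:

X1, _ = generate_dataset()
X2, _ = generate_dataset()
print np.allclose(X1, X2)   # False: the two datasets differ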
Passing the same (X, y) directly to drop_first_component fixes the problem:
if __name__ == '__main__':
    X, y = generate_dataset()
    print X[:20, :]
    X_transformed, y_transformed = drop_first_component(X, y)
    print X_transformed[:20, :]
That said, I think using a pipeline stage here is total overkill. You're importing extra libraries and spreading several additional lines of configuration and logic across three functions -- all for a computation that amounts to a simple column selection, X[:, 1:].
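If dropping the column really is all you need, the whole thing collapses to one line (a minimal sketch, reusing your generate_dataset):

X, y = generate_dataset()
X_transformed = X[:, 1:]   # identical result, no pipeline machinery

The pipeline only starts to pay off once you chain further steps after the selector, such as the PCA() you mention for your final application.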