I have a big dataframe with over 12 million rows, and one of its columns, timelogs, is a mix of alphanumeric and special characters. I want to strip all non-numeric characters from timelogs before finally converting that column to datetime with pd.to_datetime(df['timestr']). I am currently removing the non-numeric characters with the operation below, and it takes 30-45 minutes:
df['timestr'] = df['timelogs'].str.replace('[^0-9]+', '', regex=True)
Is there a way to achieve this in a faster way?
You could use str.translate with the following translation table:
import string
tt = str.maketrans('', '', string.ascii_letters + string.punctuation + string.whitespace)
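Applied to the question's dataframe, that would look roughly like this (a sketch; the column names and the sample timestamp layout are assumptions, and the format string must match whatever digit sequence your data actually produces):

```python
import string

import pandas as pd

# Translation table that deletes letters, punctuation, and whitespace,
# leaving only the digits behind.
tt = str.maketrans('', '', string.ascii_letters + string.punctuation + string.whitespace)

df = pd.DataFrame({'timelogs': ['2021-03-01 12:30:45', 'abc2021-03-02 08:15:00xyz']})

# Strip everything that is not a digit, then parse the digit string.
df['timestr'] = df['timelogs'].str.translate(tt)
df['timestamp'] = pd.to_datetime(df['timestr'], format='%Y%m%d%H%M%S')
```

Note that after stripping separators you lose the delimiters pd.to_datetime would otherwise use for inference, so passing an explicit format (which is also faster) is advisable.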
In my test with a Series of 100,000 alphanumeric strings of length 20, this is about 35% faster than replace.
import numpy as np
import pandas as pd

x = np.random.choice(list(string.ascii_letters + string.digits), [100_000, 20])
s = pd.Series([''.join(x[i]) for i in range(len(x))])
0 4r7xNfZyvbZjcg6sb9UY
1 GqQywPb0JCHcvRXWV8yV
2 8zyOOyC38qoztCZzshoP
3 iemM6xXIkf6xaoAPFlSr
4 uJYCeuftjkDQSwNchYU2
...
99995 ugH4TvzuEvB5f2Cp5Mlt
99996 SYXsz75l9qApOHJDoIF9
99997 34Xyz45JDx1HFojpWTL2
99998 BSyhzbx57H9V237PZgqp
99999 q9Bo9lwKw6O7y7G9G5aQ
Length: 100000, dtype: object
%timeit s.apply(lambda x: "".join([c for c in x if c.isdigit()]))
#174 ms ± 960 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit s.str.replace('([^0-9]+)', '')
#136 ms ± 443 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit s.str.translate(tt)
#88.5 ms ± 348 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
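As a quick sanity check (a sketch on a smaller random series), all three approaches produce identical output, so the speedup does not change the result:

```python
import string

import numpy as np
import pandas as pd

tt = str.maketrans('', '', string.ascii_letters + string.punctuation + string.whitespace)

x = np.random.choice(list(string.ascii_letters + string.digits), [1_000, 20])
s = pd.Series([''.join(row) for row in x])

# Three equivalent ways to keep only the digits.
a = s.apply(lambda v: ''.join(c for c in v if c.isdigit()))
b = s.str.replace('[^0-9]+', '', regex=True)
c = s.str.translate(tt)

assert a.equals(b) and b.equals(c)
```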
The longer the strings, the greater the advantage of translate over replace.
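You can verify that trend yourself with a plain-Python timing sketch (using time.perf_counter instead of %timeit so it runs outside IPython; the row count and lengths are arbitrary choices):

```python
import string
import time

import numpy as np
import pandas as pd

tt = str.maketrans('', '', string.ascii_letters + string.punctuation + string.whitespace)

for length in (20, 100, 500):
    # Random alphanumeric strings of the given length.
    x = np.random.choice(list(string.ascii_letters + string.digits), [10_000, length])
    s = pd.Series([''.join(row) for row in x])

    t0 = time.perf_counter()
    s.str.replace('[^0-9]+', '', regex=True)
    t_replace = time.perf_counter() - t0

    t0 = time.perf_counter()
    s.str.translate(tt)
    t_translate = time.perf_counter() - t0

    print(f'length {length}: replace {t_replace:.3f}s, translate {t_translate:.3f}s')
```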