I'm creating a new DataFrame from scratch, but I'm not sure I'm doing it in the most efficient way.
The DataFrame has three indicator columns, NEVER, OCCASIONAL, and FREQUENT, and I'm also creating a new column POLICE:
Code:
import pandas as pd

# create a one-column dataframe of 1s for each category
df1 = pd.concat([pd.DataFrame([1], columns=['NEVER']) for i in range(3070)],
                ignore_index=True)
df2 = pd.concat([pd.DataFrame([1], columns=['OCCASIONAL']) for i in range(1100)],
                ignore_index=True)
df3 = pd.concat([pd.DataFrame([1], columns=['FREQUENT']) for i in range(2200)],
                ignore_index=True)

# combine dataframes into one
frames = [df1, df2, df3]
df = pd.concat(frames)

# reset index
df = df.reset_index(drop=True)
df['POLICE'] = 0.0

# set the POLICE column to 1.0 on three row ranges
df.loc[0:69, 'POLICE'] = 1.0
df.loc[3071:3180, 'POLICE'] = 1.0
df.loc[5271:5490, 'POLICE'] = 1.0

# convert NaN into 0
df = df.fillna(value=0.0)
I think I've done it, but my code takes ages to run. Is that normal given that I'm creating 6000+ rows, or is my code inefficient?
I suggest an entirely different approach that is far more efficient: build a 2D list of your data first, then turn it into a DataFrame in one step. Your version is slow because it constructs thousands of one-row DataFrames only to concatenate them, and each of those tiny DataFrames carries a large per-object overhead.
import pandas as pd

# NEVER: rows 0-3069, OCCASIONAL: rows 3070-4169, FREQUENT: rows 4170-6369
lst = []
for row in range(6370):
    lst.append([None, None, None, None])
    for col in range(4):
        if (col == 0 and row < 3070) \
                or (col == 1 and row >= 3070 and row < 4170) \
                or (col == 2 and row >= 4170) \
                or (col == 3 and row < 70) \
                or (col == 3 and row > 3070 and row <= 3180) \
                or (col == 3 and row > 5270 and row <= 5490):
            lst[row][col] = 1.0
        else:
            lst[row][col] = 0.0

df = pd.DataFrame(lst)
df.columns = ["NEVER", "OCCASIONAL", "FREQUENT", "POLICE"]
print(df)
Here is the output:
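As an aside, the loop can be removed entirely: pandas accepts NumPy arrays directly, so each column can be built as a block of 0s and 1s in one go. Here is a minimal sketch of that idea, assuming the same row boundaries as above (3070 NEVER rows, then 1100 OCCASIONAL, then 2200 FREQUENT, plus the three POLICE ranges); np.repeat with a list of counts is just one way to build the blocks:

import numpy as np
import pandas as pd

n = 3070 + 1100 + 2200  # 6370 rows in total

# each indicator column is a contiguous block of 1s surrounded by 0s
never = np.repeat([1.0, 0.0], [3070, n - 3070])
occasional = np.repeat([0.0, 1.0, 0.0], [3070, 1100, 2200])
frequent = np.repeat([0.0, 1.0], [n - 2200, 2200])

# POLICE is 1.0 on three index ranges and 0.0 everywhere else
police = np.zeros(n)
police[0:70] = 1.0       # rows 0-69
police[3071:3181] = 1.0  # rows 3071-3180
police[5271:5491] = 1.0  # rows 5271-5490

df = pd.DataFrame({'NEVER': never, 'OCCASIONAL': occasional,
                   'FREQUENT': frequent, 'POLICE': police})
print(df)

Because each column is allocated once as a single array, this should run in milliseconds even for far more than 6000 rows.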