Suppose I have one-minute financial data like the sample below. I would like to write a user-defined function (my current code is ugly and complicated): how do I get 5-minute/10-minute/30-minute/1-hour/8-hour/24-hour data with row summaries using Python/pandas out of a CSV?
TIME OPEN HIGH LOW CLOSE VOLUME
----------------------------------------------
0 1592194620 3046.00 3048.50 3046.00 3047.50 505
1 1592194630 3047.00 3048.00 3046.00 3047.00 162
2 1592194640 3047.50 3048.00 3047.00 3047.50 98
3 1592194650 3047.50 3047.50 3047.00 3047.50 228
4 1592194660 3048.00 3048.00 3047.50 3048.00 136
5 1592194670 3048.00 3048.00 3046.50 3046.50 174
6 1592194680 3046.50 3046.50 3045.00 3045.00 134
7 1592194690 3045.50 3046.00 3044.00 3045.00 43
8 1592194700 3045.00 3045.50 3045.00 3045.00 214
9 1592194710 3045.50 3045.50 3045.50 3045.50 8
10 1592194720 3045.50 3046.00 3044.50 3044.50 152
.......
.......
19999 1591594660 3048.00 3048.00 3047.50 3048.00 136
The expected sample output is as below:
3048.50 2140 2020-06-13 04:34:00
3050.50 67 2020-06-13 04:35:00
3049.50 1489 2020-06-13 04:36:00
3047.50 987 2020-06-13 04:37:00
......
3099.50 2 2020-06-14 04:34:00
Below is my stupid code:
import pandas as pd
import pymysql

conn = pymysql.connect(host="localhost",
                       user="root",
                       passwd="root",
                       db="demo")
sql = "SELECT TIME, OPEN, HIGH, LOW, CLOSE, VOLUME FROM demo_table;"
df = pd.read_sql(sql, conn)

# 12 hours for 1000 records
for i in range(1000, 20000 - 1000, 1):
    high_price = df.loc[i, 'HIGH']
    df_1000 = df.loc[i - 1000:i]
    df_high = df_1000[df_1000['HIGH'] > high_price]
    high_count = df_high.shape[0]
    df_last = df_high.tail(1)
    time_dt = pd.Timestamp(df_last['TIME'].iloc[0], unit='s')
    print(high_price, high_count, time_dt)
First, I would recommend reading the CSV and setting TIME as the index:
import pandas as pd
import numpy as np

# csv_file is the path to your file; the columns are whitespace-separated
df = pd.read_csv(csv_file, delim_whitespace=True)
# Convert the Unix timestamps (seconds) to datetimes and use them as the index
df['TIME'] = pd.to_datetime(df['TIME'], unit='s')
df.set_index('TIME', inplace=True)
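If you prefer to keep reading from MySQL as in your original code, the same index setup works on the frame returned by pd.read_sql (a minimal sketch reusing the conn and demo_table from your question):
import pandas as pd
import pymysql

conn = pymysql.connect(host="localhost", user="root", passwd="root", db="demo")
df = pd.read_sql("SELECT TIME, OPEN, HIGH, LOW, CLOSE, VOLUME FROM demo_table;", conn)
# Same conversion as for the CSV: Unix seconds -> DatetimeIndex
df['TIME'] = pd.to_datetime(df['TIME'], unit='s')
df.set_index('TIME', inplace=True)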
If you simply want to convert the data from one time interval to another (for example from the current 1 minute to 5 minutes), you can easily resample it using the DataFrame.resample method:
# Tells what the aggregation should do for each column
colls_agg = {'OPEN': lambda x: x.iloc[0],
             'HIGH': 'max',
             'LOW': 'min',
             'CLOSE': lambda x: x.iloc[-1],
             'VOLUME': 'sum'}

def get_summary(df, time_interval):
    # Resample to the requested interval and aggregate each bucket
    return df.resample(pd.Timedelta(time_interval)).agg(colls_agg)
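For example, to produce each of the bar sizes you listed (a sketch; the strings follow pandas' Timedelta syntax):
df_5min = get_summary(df, '5min')
df_10min = get_summary(df, '10min')
df_30min = get_summary(df, '30min')
df_1h = get_summary(df, '1h')
df_8h = get_summary(df, '8h')
df_24h = get_summary(df, '24h')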
If you would like each row of your dataframe to correspond to the summary of the last X minutes (which I believe is what you want), you need to recompute it for each row, as shown below.
colls_agg = {'OPEN': lambda x: x.iloc[0],
             'HIGH': 'max',
             'LOW': 'min',
             'CLOSE': lambda x: x.iloc[-1],
             'VOLUME': 'sum'}

def recompute_summary_line(line, full_df, time_interval):
    """Recomputes the summary for one row of the dataframe.
    line is a row of the dataframe,
    full_df is the full dataframe,
    time_interval is the interval of time which will be selected."""
    # Select the rows between current time - time_interval
    # and the current time (including it)
    lines_to_select = (full_df.index > line.name - time_interval) & \
                      (full_df.index <= line.name)
    agg_value = full_df[lines_to_select].agg(colls_agg)
    # If no rows fall in the window, return NaNs.
    # Since the current time is included, this should never happen;
    # it is only needed if you choose NOT to include the current time.
    if agg_value.empty:
        return pd.Series({'OPEN': np.nan, 'HIGH': np.nan,
                          'LOW': np.nan, 'CLOSE': np.nan,
                          'VOLUME': np.nan})
    return agg_value

def recompute_summary(df, time_interval):
    """Given a dataframe df, recomputes the summary for the
    current time of each row using the information from the previous
    interval given in time_interval (for example '5min', '30s')."""
    # Use df.apply to apply it to each row of the dataframe
    return df.apply(lambda x: recompute_summary_line(
        x, df, pd.Timedelta(time_interval)), axis='columns')
recompute_summary(df, '1min')
recompute_summary(df, '12h')
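Note that this apply-based approach recomputes the window for every row, which can be slow on 20,000 rows. If that becomes a problem, a lighter sketch for the columns that have built-in rolling aggregations (HIGH, LOW, VOLUME) is pandas' time-based rolling window; OPEN and CLOSE would still need a custom function, so this only covers part of the summary:
# Rolling with a time offset needs a sorted DatetimeIndex
df_sorted = df.sort_index()
rolling = df_sorted.rolling('12h')
df_rolling = pd.DataFrame({'HIGH': rolling['HIGH'].max(),
                           'LOW': rolling['LOW'].min(),
                           'VOLUME': rolling['VOLUME'].sum()})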