I have a large amount of data (93 files, ~150 MB each). The data is a time series, i.e., information about a fixed set of coordinates (3.3 million latitude-longitude pairs) is recorded and stored every day for 93 days, with the whole dataset split into 93 files, one per day. Example of two such files:
Day 1:
lon lat A B day1
68.4 8.4 NaN 20 20
68.4 8.5 16 20 18
68.6 8.4 NaN NaN NaN
.
.
Day 2:
lon lat C D day2
68.4 8.4 NaN NaN NaN
68.4 8.5 24 25 24.5
68.6 8.4 NaN NaN NaN
.
.
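For illustration, this is roughly how I picture one such file being read into pandas (a minimal sketch; the file name and the whitespace-delimited format are assumptions, since the actual files are not in a standard form):
import pandas as pd

day1 = pd.read_csv("day_1.txt", sep=r"\s+")  # hypothetical file name; whitespace-delimited text assumed
print(day1.head())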
I am interested in understanding the nature of the missing data in the columns 'day1', 'day2', 'day3', etc. For example, if the missing values in these columns are evenly distributed across the whole set of coordinates, the data is probably missing at random; but if they are concentrated in a particular set of coordinates, my analysis would become biased. Keep in mind that the data is split across multiple large files and is not in a very standard form to operate on, which makes it harder to use some tools.
I am looking for a diagnostic tool or visualization in Python that can check/show how the missing data is distributed over the set of coordinates, so I can impute or ignore it appropriately.
Thanks.
P.S.: This is the first time I am handling missing data, so it would be great to learn whether there is a workflow that people who do similar work typically follow.
Assuming that you read a file and name it df, you can count the number of NaNs using:
df.isnull().sum()
This will return the number of NaNs per column. You could also use:
df.isnull().sum(axis=1).value_counts()
This, on the other hand, will sum the number of NaNs per row and then count how many rows have no NaNs, 1 NaN, 2 NaNs, and so on.
Regarding files of this size, I recommend using Dask to speed up loading and processing, and converting your files (preferably to Parquet) so that you can read and write them in parallel; a one-off conversion is sketched below.
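A minimal sketch of such a conversion, assuming the daily files are whitespace-delimited text named day_1.txt through day_93.txt (hypothetical names) and that pyarrow or fastparquet is installed:
import pandas as pd

for i in range(1, 94):                            # 93 daily files
    df = pd.read_csv(f"day_{i}.txt", sep=r"\s+")  # each file (~150 MB) fits in memory
    df.to_parquet(f"day_{i}.parquet")             # requires pyarrow or fastparquet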
You could easily recreate the function above in Dask like this:
from dask import dataframe as dd
dd.read_parquet(file_path).isnull().sum().compute()
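To run the same count over all 93 daily files, a sketch using the same hypothetical day_<i>.parquet names as above (each file has its own day column, so they are processed separately rather than concatenated):
from dask import dataframe as dd

nan_counts = {}
for i in range(1, 94):
    # NaN count per column for that day's file, returned as a pandas Series
    nan_counts[f"day{i}"] = dd.read_parquet(f"day_{i}.parquet").isnull().sum().compute()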
Answering the question from the comments: use .loc to slice your DataFrame. In the code below I select all rows (:) and two columns (['col1', 'col2']):
df.loc[:, ['col1', 'col2']].isnull().sum(axis=1).value_counts()