I'm trying to learn how to combine my dataframes in an idiomatic way in polars.
I often have two or more dataframes which hold an identifier in one column and then various results or data points for each identifier in other columns, but where both the rows and columns overlap to some extent, like in the following:
import polars as pl
df = pl.DataFrame(
{
"i": [1, 2, 3, 4],
"a": [2.6, 5.3, None, None],
"b": ["ab", "cd", "ef", "gh"]
}
)
df2 = pl.DataFrame(
{
"i": [2, 3, 5, 6],
"a": [None, 3.5, 2.5, 0.9],
"c": [True, False, False, True]
}
)
In the two simpler situations I think I know how they should be combined:

1. If only columns `i` and `a` were present above, or if I only care about keeping the values of `a` and just want to put them into one large df with all unique entries and the columns for `a` combined, I can do a diagonal concat and then use `unique()` to make sure I only have one entry per identifier.
2. If each dataframe dealt with different data for the same identifiers (i.e. no column `b` above) and I want to have it all in one place, I can do an outer/full join on `i`, and by using `coalesce=True` I can make sure there is only one identifier column at the end.

Where I come unstuck is what the right approach is when I have dataframes like the minimal example, where some rows overlap (by identifier) and some columns overlap, and I'd like to merge everything that's common between the two, but be able to specify which df should be preferred as the source for a particular cell. If I use approach 1 I get:
>>> df3 = pl.concat([df, df2], how="diagonal")
>>> df3 = df3.unique(subset="i", keep="first")
>>> df3
shape: (6, 4)
┌─────┬──────┬──────┬───────┐
│ i ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ bool │
╞═════╪══════╪══════╪═══════╡
│ 5 ┆ 2.5 ┆ null ┆ false │
│ 2 ┆ 5.3 ┆ cd ┆ null │
│ 4 ┆ null ┆ gh ┆ null │
│ 1 ┆ 2.6 ┆ ab ┆ null │
│ 3 ┆ null ┆ ef ┆ null │
│ 6 ┆ 0.9 ┆ null ┆ true │
└─────┴──────┴──────┴───────┘
which means I have lost data, such as the value of `c` for `i=2`.
Using the second approach I get:
>>> df3 = df.join(df2, on="i", how="full", coalesce=True)
>>> df3
shape: (6, 5)
┌─────┬──────┬──────┬─────────┬───────┐
│ i ┆ a ┆ b ┆ a_right ┆ c │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ f64 ┆ bool │
╞═════╪══════╪══════╪═════════╪═══════╡
│ 2 ┆ 5.3 ┆ cd ┆ null ┆ true │
│ 3 ┆ null ┆ ef ┆ 3.5 ┆ false │
│ 5 ┆ null ┆ null ┆ 2.5 ┆ false │
│ 6 ┆ null ┆ null ┆ 0.9 ┆ true │
│ 1 ┆ 2.6 ┆ ab ┆ null ┆ null │
│ 4 ┆ null ┆ gh ┆ null ┆ null │
└─────┴──────┴──────┴─────────┴───────┘
which means I then have to go about manually combining `a` and `a_right` post-join. I am also not clear on how I can even do that. And on the assumption that there probably is a function for exactly that, it still seems tedious to have to type them all out in the event that there are many overlapping columns.
`coalesce` is just a bool, so if I want to coalesce `a` as well I need to include it in `on`, but then rows with the same `i` are not combined when the values in `a` differ or are absent in one df:
>>> df3 = df.join(df2, on=["i", "a"], how="full", coalesce=True)
>>> df3
shape: (8, 4)
┌─────┬──────┬──────┬───────┐
│ i ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ bool │
╞═════╪══════╪══════╪═══════╡
│ 2 ┆ null ┆ null ┆ true │
│ 3 ┆ 3.5 ┆ null ┆ false │
│ 5 ┆ 2.5 ┆ null ┆ false │
│ 6 ┆ 0.9 ┆ null ┆ true │
│ 3 ┆ null ┆ ef ┆ null │
│ 4 ┆ null ┆ gh ┆ null │
│ 1 ┆ 2.6 ┆ ab ┆ null │
│ 2 ┆ 5.3 ┆ cd ┆ null │
└─────┴──────┴──────┴───────┘
All I want, and this doesn't seem like a particularly crazy thing to want to do, is to merge them so I get:
┌─────┬──────┬──────┬───────┐
│ i ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ bool │
╞═════╪══════╪══════╪═══════╡
│ 2 ┆ 5.3 ┆ cd ┆ true │
│ 3 ┆ 3.5 ┆ ef ┆ false │
│ 5 ┆ 2.5 ┆ null ┆ false │
│ 6 ┆ 0.9 ┆ null ┆ true │
│ 1 ┆ 2.6 ┆ ab ┆ null │
│ 4 ┆ null ┆ gh ┆ null │
└─────┴──────┴──────┴───────┘
with all known data for each identifier combined appropriately. How should I be doing these sorts of operations? And if I have an entry for e.g. `i=7` in both dataframes with different values for `a` in each (perhaps `df` is data measured by one person and `df2` is measured by another), how can I go about choosing which one is kept preferentially?
Edit: For anyone interested in the same problem, I opened an issue to suggest adding a `df.supplement()` method to achieve precisely what I wanted.

The proposed `df.supplement()` would complement `df.update()`, which is similar, but `df.supplement(df2)`: a) would prioritize the non-null results in `df` and only add the values from `df2` that don't have corresponding values in `df` already (i.e. the opposite of `update()`), and b) if `df2` has columns that aren't in `df`, those columns and values would be added to `df` as well.
Instead of `unique()` in your approach #1:
df = pl.DataFrame({
"i": [1, 2, 3, 4, 7],
"a": [2.6, 5.3, None, None, 1],
"b": ["ab", "cd", "ef", "gh", "jk"]
})
df2 = pl.DataFrame({
"i": [2, 3, 5, 6, 7],
"a": [None, 3.5, 2.5, 0.9, 2],
"c": [True, False, False, True, None]
})
Can you `.group_by("i")` and use `pl.all().drop_nulls()` + `.first()` (or `.last()`) to control which values to prefer?
(pl.concat([df, df2], how="diagonal")
.group_by("i")
.agg(pl.all().drop_nulls().first())
)
shape: (7, 4)
┌─────┬──────┬──────┬───────┐
│ i ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ bool │
╞═════╪══════╪══════╪═══════╡
│ 2 ┆ 5.3 ┆ cd ┆ true │
│ 5 ┆ 2.5 ┆ null ┆ false │
│ 7 ┆ 1.0 ┆ jk ┆ null │ # 1.0 from df
│ 6 ┆ 0.9 ┆ null ┆ true │
│ 1 ┆ 2.6 ┆ ab ┆ null │
│ 4 ┆ null ┆ gh ┆ null │
│ 3 ┆ 3.5 ┆ ef ┆ false │
└─────┴──────┴──────┴───────┘
(pl.concat([df, df2], how="diagonal")
.group_by("i")
.agg(pl.all().drop_nulls().last())
)
shape: (7, 4)
┌─────┬──────┬──────┬───────┐
│ i ┆ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ f64 ┆ str ┆ bool │
╞═════╪══════╪══════╪═══════╡
│ 1 ┆ 2.6 ┆ ab ┆ null │
│ 4 ┆ null ┆ gh ┆ null │
│ 7 ┆ 2.0 ┆ jk ┆ null │ # 2.0 from df2
│ 5 ┆ 2.5 ┆ null ┆ false │
│ 3 ┆ 3.5 ┆ ef ┆ false │
│ 2 ┆ 5.3 ┆ cd ┆ true │
│ 6 ┆ 0.9 ┆ null ┆ true │
└─────┴──────┴──────┴───────┘