I usually do this using zip:
>>> df = pd.DataFrame([[i] for i in range(10)], columns=['num'])
>>> df
   num
0    0
1    1
2    2
3    3
4    4
5    5
6    6
7    7
8    8
9    9
>>> def powers(x):
...     return x, x**2, x**3, x**4, x**5, x**6
>>> df['p1'], df['p2'], df['p3'], df['p4'], df['p5'], df['p6'] = \
...     zip(*df['num'].map(powers))
>>> df
   num  p1  p2   p3    p4     p5      p6
0    0   0   0    0     0      0       0
1    1   1   1    1     1      1       1
2    2   2   4    8    16     32      64
3    3   3   9   27    81    243     729
4    4   4  16   64   256   1024    4096
5    5   5  25  125   625   3125   15625
6    6   6  36  216  1296   7776   46656
7    7   7  49  343  2401  16807  117649
8    8   8  64  512  4096  32768  262144
9    9   9  81  729  6561  59049  531441
(Answer by ostrokach on Stack Overflow.)
In 2020, I use apply() with the argument result_type='expand':
applied_df = df.apply(lambda row: fn(row.text), axis='columns', result_type='expand')
df = pd.concat([df, applied_df], axis='columns')
fn() should return a dict; its keys will be the new column names.
Alternatively, you can make it a one-liner by also specifying the column names:
df[["col1", "col2", ...]] = df.apply(lambda row: fn(row.text), axis='columns', result_type='expand')
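For completeness, a minimal self-contained sketch of the result_type='expand' pattern described above. The sample data and the body of fn() are assumptions for illustration; only the column name text and the call pattern come from the snippet:

```python
import pandas as pd

# Hypothetical example data; the 'text' column name matches the snippet above
df = pd.DataFrame({"text": ["a", "bb", "ccc"]})

# Returning a dict means its keys become the new column names
def fn(text):
    return {"length": len(text), "upper": text.upper()}

applied_df = df.apply(lambda row: fn(row.text), axis="columns", result_type="expand")
df = pd.concat([df, applied_df], axis="columns")
print(df)  # columns: text, length, upper
```

Returning a tuple instead of a dict also works with result_type='expand', but then the new columns get integer names and you would rename them afterwards.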
A possible solution is to make the expanding window part of the function itself, and use GroupBy.apply:
def foo1(_df):
    return _df['x1'].expanding().max() * _df['x2'].expanding().apply(lambda x: x[-1], raw=True)
df['foo_result'] = df.groupby('group').apply(foo1).reset_index(level=0, drop=True)
print (df)
  group  time   x1  x2  foo_result
0     A     1   10   1        10.0
3     B     1  100   2       200.0
1     A     2   40   2        80.0
4     B     2  200   0         0.0
2     A     3   30   1        40.0
5     B     3  300   3       900.0
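The input DataFrame is not shown above; the following sketch reconstructs sample data that is consistent with the printed output (the specific values are an assumption) and runs the grouped expanding computation end to end:

```python
import pandas as pd

# Sample data reconstructed (as an assumption) to match the output shown above
df = pd.DataFrame({
    'group': ['A', 'A', 'A', 'B', 'B', 'B'],
    'time':  [1, 2, 3, 1, 2, 3],
    'x1':    [10, 40, 30, 100, 200, 300],
    'x2':    [1, 2, 1, 2, 0, 3],
})

def foo1(_df):
    # Expanding max of x1, multiplied by the current (last) value of x2
    return _df['x1'].expanding().max() * _df['x2'].expanding().apply(lambda x: x[-1], raw=True)

# foo1 returns a Series per group; dropping the group level of the index
# lets the result align back onto the original rows
df['foo_result'] = df.groupby('group').apply(foo1).reset_index(level=0, drop=True)
print(df.sort_values('time'))
```

For group A, for example, the expanding max of x1 is 10, 40, 40, and multiplying by the current x2 values 1, 2, 1 gives 10.0, 80.0, 40.0, matching the output above.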
This is not a direct solution to the problem of applying a dataframe function to an expanding dataframe, but it achieves the same functionality.
Applying a dataframe function on an expanding window is apparently not possible (at least not as of pandas 0.23.0, and, EDITED, still not in 1.3.0), as one can see by plugging a print statement into the function.
Running df.groupby('group').expanding().apply(lambda x: bool(print(x)), raw=False) on the given DataFrame (where the bool around the print is just to produce a valid numeric return value) prints:
0 1.0
dtype: float64
0 1.0
1 2.0
dtype: float64
0 1.0
1 2.0
2 3.0
dtype: float64
0 10.0
dtype: float64
0 10.0
1 40.0
dtype: float64
0 10.0
1 40.0
2 30.0
dtype: float64
(and so on; the call itself also returns a dataframe with 0.0 in each cell, since bool(print(x)) is always False).
This shows that the expanding window operates column by column (first the expanding time series is printed, then x1, and so on) rather than on the dataframe as a whole, so a dataframe function cannot be applied to it.
So, to get the desired functionality, one has to put the expanding window inside the dataframe function, as in the accepted answer.