# When to use aggregate/filter/transform with pandas

The pandas groupby method is a very powerful problem solving tool, but that power can make it confusing. Let's take a look at the three most common ways to use it.

- What was the average total bill on each day?
- Which meals were eaten on days where the average bill was greater than 20?
- How did the cost of each meal compare to the average for the day?
- In conclusion

I've been teaching quite a lot of Pandas recently, and a lot of the recurring questions are about grouping. That's no surprise, as it's one of the most flexible features of Pandas. However, that flexibility also makes it sometimes confusing.

**Tip:** For a much more detailed explanation of grouping operations, check out the chapter on working with groups in the *Drawing from Data* book.

I think that most of the confusion arises because the same grouping logic is used for (at least) three distinct operations in Pandas. In the order that we normally learn them, these are:

- calculating some aggregate measurement for each group (size, mean, etc.)
- filtering the rows on a property of the group they belong to
- calculating a new value for each row based on a property of the group.

This commonly leads to situations where we know that we need to use `groupby()` - and may even be able to easily figure out what the arguments to `groupby()` should be - but are unsure about what to do next.

Here's a trick that I've found useful when teaching these ideas: think about the result you want, and work back from there. If you want to get a single value for each group, use `aggregate()` (or one of its shortcuts). If you want to get a subset of the original rows, use `filter()`. And if you want to get a new value for each original row, use `transform()`.

Here's a minimal example of the three different situations, all of which require exactly the same call to `groupby()` but which do different things with the result. We'll use the well-known `tips` dataset, which we can load directly from the web:

```
import pandas as pd

# load the tips dataset directly from the web
df = pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv")

# limit how many rows are printed when we display a dataframe
pd.options.display.max_rows = 10
df
```

If you're not familiar with this dataset, all you need to know is that each row represents a meal at a restaurant, and the columns store the value of the total bill and the tip, plus some metadata about the customer - their sex, whether or not they were a smoker, what day and time they ate at, and the size of their party. Also, notice that we have 244 rows - this will be important later on.

## What was the average total bill on each day?

To answer this, let's imagine that we have already figured out that we need to group by day:

`df.groupby('day')`

Now what's the next step? Use the trick that I just described and start by imagining what we want the output to look like. We want a single value for each group, so we need to use `aggregate()`:

```
# in recent pandas versions, taking the mean of non-numeric columns raises
# an error, so we restrict the aggregation to the numeric columns
df.groupby('day')[['total_bill', 'tip', 'size']].aggregate('mean')
```

We're only interested in the `total_bill` column, so we can select it (either before or after we do the aggregation):

```
df.groupby('day')['total_bill'].aggregate('mean')
```

Pandas has lots of shortcuts for the various ways to aggregate group values - we could use `mean()` here instead:

```
df.groupby('day')['total_bill'].mean()
```

## Which meals were eaten on days where the average bill was greater than 20?

For this question, think again about the output we want - our goal here is to get a subset of the original rows, so this is a job for `filter()`. The argument to `filter()` must be a function or lambda that will take a group and return `True` or `False` to determine whether rows belonging to that group should be included in the output. Here's how we might do it with a lambda:

```
df.groupby('day').filter(lambda x : x['total_bill'].mean() > 20)
```

Notice that our output dataframe has only 163 rows (compared to the 244 that we started with), and that the columns are exactly the same as the input.
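The predicate doesn't have to be a lambda - any callable that takes a group and returns a single boolean will do. A sketch with a named function, using a tiny made-up frame in place of the tips data:

```
import pandas as pd

# a tiny made-up stand-in for the tips data
df = pd.DataFrame({
    "day": ["Thur", "Thur", "Fri", "Fri"],
    "total_bill": [10.0, 20.0, 30.0, 40.0],
})

# the predicate is called once per group and must return a single bool
def expensive_day(group):
    return group["total_bill"].mean() > 20

subset = df.groupby("day").filter(expensive_day)
print(subset)
```

A named function like this is easier to reuse and to test than an inline lambda.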

Compared to our first example, it's a bit harder to see why this is useful - typically we'll do a filter like this and then follow it up with another operation. For example, we might want to compare the average party size on days where the average bill is high:

```
# surrounding parens let us split the different parts of the expression
# over multiple lines
(
    df
    .groupby('day')
    .filter(lambda x : x['total_bill'].mean() > 20)
    ['size']
    .mean()
)
```

with the average party size on days where the average bill is low:

```
(
    df
    .groupby('day')
    .filter(lambda x : x['total_bill'].mean() <= 20)
    ['size']
    .mean()
)
```

Incidentally, a question that I'm often asked is what the type of the argument to the lambda is - what actually is the variable `x` in our examples above? We can find out by passing a lambda that prints the type of its input:

```
# filter() must return a boolean for each group; print() returns None,
# so we tack on `or True` to avoid an error after the types are printed
df.groupby('day').filter(lambda x: print(type(x)) or True)
```

And we see that each group is passed to our lambda function as a Pandas DataFrame, so we already know how to use it.
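Another way to see this is to iterate over the groupby object directly, which yields a `(key, group)` pair for each group. A minimal sketch with a tiny made-up frame standing in for the tips data:

```
import pandas as pd

# a tiny made-up stand-in for the tips data
df = pd.DataFrame({
    "day": ["Thur", "Thur", "Fri"],
    "total_bill": [10.0, 20.0, 30.0],
})

# iterating over a groupby yields (key, group) pairs,
# and each group is an ordinary DataFrame
group_types = []
for day, group in df.groupby("day"):
    group_types.append(type(group))
    print(day, len(group))
```

Each group really is a plain `DataFrame`, so anything we know how to do with a dataframe we can do inside the lambda.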

## How did the cost of each meal compare to the average for the day?

This last example is the trickiest to understand, but remember our trick - start by thinking about the desired output. In this case we are trying to generate a new value for each input row - the total bill divided by the average total bill for each day. (If you have a scientific or maths background then you might think of this as a *normalized* or *scaled* total bill). To make a new value for each row, we use `transform()`.

To start with, let's see what happens when we pass in a lambda to `transform()` that just gives us the mean of its input:

```
# as with aggregate, recent pandas versions can't take the mean of
# non-numeric columns, so we restrict to the numeric ones
df.groupby('day')[['total_bill', 'tip', 'size']].transform(lambda x : x.mean())
```

Notice that we get the same number of output rows as input rows - Pandas has calculated the mean for each group, then used the results as the new values for each row. We're only interested in the total bill, so let's get rid of the other columns:

```
df.groupby('day')['total_bill'].transform(lambda x : x.mean())
```

This gives us a series with the same number of rows as our input data. We could assign this to a new column in our dataframe:

```
df['day_average'] = df.groupby('day')['total_bill'].transform(lambda x : x.mean())
df
```

Which would allow us to calculate the scaled total bills:

```
df['total_bill'] / df['day_average']
```

But we could also calculate the scaled bill as part of the transform:

```
df['scaled bill'] = df.groupby('day')['total_bill'].transform(lambda x : x/x.mean())
df.head()
```
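A quick way to convince yourself the transform worked: within each day, the scaled bills should average to exactly 1. Here's a sketch of that sanity check, using a tiny made-up frame (with numbers chosen so that the scaled values come out exactly) rather than the real tips data:

```
import pandas as pd

# a tiny made-up stand-in for the tips data
df = pd.DataFrame({
    "day": ["Thur", "Thur", "Fri", "Fri"],
    "total_bill": [10.0, 30.0, 20.0, 60.0],
})

df["scaled bill"] = df.groupby("day")["total_bill"].transform(lambda x: x / x.mean())

# within each day, the scaled bills should average to exactly 1
check = df.groupby("day")["scaled bill"].mean()
print(check)
```

A scaled bill above 1 means that meal cost more than the day's average; below 1, less.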

## In conclusion

All three of our examples used exactly the same `groupby()` call to begin with:

```
df.groupby('day')['total_bill'].mean()
df.groupby('day').filter(lambda x : x['total_bill'].mean() > 20)
df.groupby('day')['total_bill'].transform(lambda x : x/x.mean())
```

but by doing different things with the resulting groups we get very different outputs. To reiterate:

- if we want to get a single value for each group -> use `aggregate()`
- if we want to get a subset of the input rows -> use `filter()`
- if we want to get a new value for each input row -> use `transform()`