---
pagetitle: "Feature Engineering A-Z | GLMM Encoding"
---
# GLMM Encoding {#sec-categorical-glmm}
::: {style="visibility: hidden; height: 0px;"}
## GLMM Encoding
:::
Generalized linear mixed model (GLMM) encoding [@Pargent2022] is an extension of target encoding, which is laid out in detail in @sec-categorical-target.
A hierarchical generalized linear model is fit, using no intercept.
When applying target encoding, the classes have different numbers of observations associated with them.
If the `"horse"` class has only 1 observation, how confident are we that the mean calculated from that single value is as reliable as the mean calculated over the 3 values of the `"cat"` class?
Since the global mean of the target is our baseline when we have no information, we can combine the level mean with the global mean, weighting them according to how many observations we have at each level.
If a level has many observations, we let the global mean have little influence; if it has few observations, we let the global mean have more influence.
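One simple way to do this blending, and the one used to generate the charts below, is to treat a smoothing weight $m$ as a number of pseudo-observations placed at the global mean:

$$
\tilde{y}_\ell = \frac{n_\ell \, \bar{y}_\ell + m \, \bar{y}}{n_\ell + m}
$$

Here $n_\ell$ is the number of observations in level $\ell$, $\bar{y}_\ell$ is the mean of the target within that level, and $\bar{y}$ is the global mean. With $m = 5$, a level with a single observation is pulled five-sixths of the way toward the global mean, while a level with 100 observations barely moves.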
```{r}
#| label: weighted_mean-example
#| echo: false
#| message: false
library(tidyverse)
set.seed(1234)
n <- 50

# Simulate levels with a number of observations (size) and a level mean (value)
data <- tibble(
  size = ceiling(rlnorm(n, 1.5)),
  value = runif(n, 50, 100)
)

# Global mean, weighting each level mean by its number of observations
weighted_mean <- weighted.mean(data$value, data$size)

range <- range(data$value)
range <- range + diff(range) * c(-0.1, 0.1)
```
We can visualize this effect in the following charts.
First, we have an example of what happens with a small amount of smoothing: the points stay mostly along the diagonal.
Remember that without any smoothing, all the points would lie exactly on the diagonal, regardless of their size.
```{r}
#| label: fig-target-smoothness-1
#| echo: false
#| message: false
#| fig-cap: |
#|   With a small amount of smoothing, the adjusted means are close to the
#|   original means, regardless of the number of observations.
#| fig-alt: |
#|   Scatter chart. A green line with slope 1 and intercept 0, and a blue line
#|   with slope 0 and an intercept equal to the global mean. The points are
#|   sized according to the number of observations that were used to calculate
#|   the value. The points lie mostly along the green line, with some of the
#|   smaller points getting closer to the blue line.
smoothness <- 1

data |>
  mutate(new_value = (size * value + weighted_mean * smoothness) / (size + smoothness)) |>
  ggplot(aes(value, new_value)) +
  geom_abline(slope = 0, intercept = weighted_mean, color = "lightblue") +
  geom_abline(slope = 1, intercept = 0, color = "lightgreen") +
  geom_point(aes(size = size), alpha = 0.2) +
  theme_minimal() +
  xlim(range) +
  ylim(range) +
  labs(
    x = "Original mean value",
    y = "Smoothed mean value",
    size = "# observations",
    title = "Small amount of smoothing"
  )
```
In this next chart, we see the effect of a larger amount of smoothing: now the levels with fewer observations are pulled quite a bit closer to the global mean.
```{r}
#| label: fig-target-smoothness-5
#| echo: false
#| message: false
#| fig-cap: |
#|   With a large amount of smoothing, the adjusted means are not as close to
#|   the original means, with smaller points getting quite close to the global
#|   mean.
#| fig-alt: |
#|   Scatter chart. A green line with slope 1 and intercept 0, and a blue line
#|   with slope 0 and an intercept equal to the global mean. The points are
#|   sized according to the number of observations that were used to calculate
#|   the value. The points lie between the green and blue lines, with the
#|   smaller points being closer to the blue line than the larger points.
smoothness <- 5

data |>
  mutate(new_value = (size * value + weighted_mean * smoothness) / (size + smoothness)) |>
  ggplot(aes(value, new_value)) +
  geom_abline(slope = 0, intercept = weighted_mean, color = "lightblue") +
  geom_abline(slope = 1, intercept = 0, color = "lightgreen") +
  geom_point(aes(size = size), alpha = 0.2) +
  theme_minimal() +
  xlim(range) +
  ylim(range) +
  labs(
    x = "Original mean value",
    y = "Smoothed mean value",
    size = "# observations",
    title = "Large amount of smoothing"
  )
```
The exact way this is done varies from method to method, and the strength of the smoothing can, and arguably should, be tuned, as there isn't an empirically best way to choose it.
The big benefit of GLMM encoding is that fitting a hierarchical generalized linear model with no intercept handles the amount of smoothing for us. This gives us a method that performs the smoothing in a statistically sound way, without needing a hyperparameter.
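To make this concrete, here is a minimal, illustrative sketch of how such shrunken level estimates can be computed with the {lme4} package; it is only meant to show the shrinkage idea, and the exact model formulation used by `step_lencode_mixed()` below may differ.

```{r}
#| label: lme4-sketch
#| eval: false
# Illustrative sketch only, not the internals of step_lencode_mixed()
library(lme4)
data(ames, package = "modeldata")

# One random intercept per neighborhood; levels with few observations are
# shrunk the most toward the overall estimate
fit <- lmer(Sale_Price ~ (1 | Neighborhood), data = ames)

# Encoded value per level: overall intercept plus the shrunken level effect
fixef(fit)[["(Intercept)"]] + ranef(fit)$Neighborhood[["(Intercept)"]]
```

Levels with many observations end up close to their raw means, while sparse levels are pulled toward the overall estimate, mirroring the behavior shown in the charts above.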
## Pros and Cons
### Pros
- No hyperparameters to tune, as shrinkage is automatically done.
- Can deal with categorical variables with many levels.
- Can deal with unseen levels in a sensible way.
### Cons
- Can be prone to overfitting.
## R Examples
::: callout-caution
# TODO
find a better data set
:::
We apply the smoothed GLMM encoder using the `step_lencode_mixed()` step from the {embed} package.
```{r}
#| label: set-seed
#| echo: false
set.seed(1234)
```
```{r}
#| label: step_lencode_mixed
#| message: false
library(recipes)
library(embed)
data(ames, package = "modeldata")

rec_target_smooth <- recipe(Sale_Price ~ Neighborhood, data = ames) |>
  step_lencode_mixed(Neighborhood, outcome = vars(Sale_Price)) |>
  prep()

rec_target_smooth |>
  bake(new_data = NULL)
```
And we can pull out the values of the encoding like so.
```{r}
#| label: tidy
rec_target_smooth |>
  tidy(1)
```
## Python Examples
```{python}
#| label: python-setup
#| echo: false
import pandas as pd
from sklearn import set_config
set_config(transform_output="pandas")
pd.set_option('display.precision', 3)
```
We are using the `ames` data set for the examples.
{category_encoders} provides the `GLMMEncoder()` method that we can use.
```{python}
#| label: glmmencoder
from feazdata import ames
from sklearn.compose import ColumnTransformer
from category_encoders.glmm import GLMMEncoder

ct = ColumnTransformer(
    [('glmm', GLMMEncoder(), ['MS_Zoning'])],
    remainder="passthrough")

ct.fit(ames, y=ames[["Sale_Price"]].values.flatten())
ct.transform(ames)
```