---
engine: knitr
---
# Exploratory data analysis {#sec-exploratory-data-analysis}
**Prerequisites**
- Read *The Future of Data Analysis*, [@tukey1962future]
- John Tukey, the twentieth-century statistician, made many contributions to statistics. From this paper focus on Part 1, "General Considerations", which was ahead of its time in its thinking about the ways in which we ought to learn from data.
- Read *Best Practices in Data Cleaning*, [@bestpracticesindatacleaning]
- Focus on Chapter 6 "Dealing with Missing or Incomplete Data" which is a chapter-length treatment of this issue.
- Read *R for Data Science*, [@r4ds]
- Focus on Chapter 11 "Exploratory data analysis", which provides a written self-contained EDA worked example.
- Watch *Whole game*, [@hadleycodes]
- A video providing a self-contained EDA worked example. One nice aspect is that you get to see an expert make mistakes and then fix them.
**Key concepts and skills**
- Exploratory data analysis is the process of coming to terms with a new dataset by looking at the data, constructing graphs, tables, and models. We want to understand three aspects:
1) each individual variable by itself;
2) each individual variable in the context of other, relevant, variables; and
3) the data that are not there.
- During EDA we want to come to understand the issues and features of the dataset and how this may affect analysis decisions. We are especially concerned about missing values and outliers.
**Software and packages**
- Base R [@citeR]
- `arrow` [@arrow]
- `janitor` [@janitor]
- `lubridate` [@GrolemundWickham2011]
- `mice` [@mice]
- `modelsummary` [@citemodelsummary]
- `naniar` [@naniar]
- `opendatatoronto` [@citeSharla]
- `tidyverse` [@tidyverse]
- `tinytable` [@tinytable]
```{r}
#| message: false
#| warning: false
library(arrow)
library(janitor)
library(lubridate)
library(mice)
library(modelsummary)
library(naniar)
library(opendatatoronto)
library(tidyverse)
library(tinytable)
```
## Introduction
> The future of data analysis can involve great progress, the overcoming of real difficulties, and the provision of a great service to all fields of science and technology. Will it? That remains to us, to our willingness to take up the rocky road of real problems in preference to the smooth road of unreal assumptions, arbitrary criteria, and abstract results without real attachments. Who is for the challenge?
>
> @tukey1962future [p. 64].
Exploratory data analysis\index{exploratory data analysis} is never finished. It is the active process of exploring and becoming familiar with our data. Like a farmer with their hands in the earth, we need to know every contour and aspect of our data. We need to know how it changes, what it shows and hides, and what its limits are. Exploratory data analysis (EDA) is the unstructured process of doing this.
EDA is a means to an end.\index{exploratory data analysis!how to} While it will inform the entire paper, especially the data section, it is not typically something that ends up in the final paper. The way to proceed is to make a separate Quarto document. Add code and brief notes as we go. Do not delete previous code; just add to it. By the end we will have created a useful notebook that captures our exploration of the dataset. This is a document that will guide the subsequent analysis and modeling.
EDA draws on a variety of skills and there are a lot of options when conducting EDA [@staniak2019landscape]. Every tool should be considered. Look at the data and scroll through it. Make tables, plots, summary statistics, even some models. The key is to iterate, move quickly rather than perfectly, and come to a thorough understanding of the data. Interestingly, coming to thoroughly understand the data that we have often helps us understand what we do not have.
We are interested in the following process:\index{exploratory data analysis!aims}
- Understand the distribution and properties of individual variables.
- Understand relationships between variables.
- Understand what is not there.
There is no one correct process or set of steps that are required to undertake and complete EDA. Instead, the relevant steps and tools depend on the data and question of interest. As such, in this chapter we will illustrate approaches to EDA through various examples of EDA including US state populations, subway delays in Toronto, and Airbnb listings in London. We also build on @sec-farm-data and return to missing data.
## 1975 United States population and income data
As a first example we consider US state populations\index{United States} as of 1975. This dataset is built into R with `state.x77`.\index{exploratory data analysis} Here is what the dataset looks like:
```{r}
#| message: false
#| warning: false
us_populations <-
state.x77 |>
as_tibble() |>
clean_names() |>
mutate(state = rownames(state.x77)) |>
select(state, population, income)
us_populations
```
We want to get a quick sense of the data.\index{exploratory data analysis} The first step is to have a look at the top and bottom of it with `head()` and `tail()`, then a random selection, and finally to focus on the variables and their class with `glimpse()`. The random selection is important: whenever we use `head()` we should also quickly consider a random selection, because the top of a dataset is not necessarily representative.
```{r}
us_populations |>
head()
us_populations |>
tail()
us_populations |>
slice_sample(n = 6)
us_populations |>
glimpse()
```
We are then interested in understanding key summary statistics, such as the minimum, median, and maximum values for numeric variables with `summary()` from base R and the number of observations.
```{r}
us_populations |>
summary()
```
Finally, it is especially important to understand the behavior of these key summary statistics at the limits.\index{exploratory data analysis!limits} In particular, one approach is to randomly remove some observations and compare how the summary statistics change. For instance, we can randomly create five datasets that differ on the basis of which observations were removed, and then compare the summary statistics. If any of them are especially different, then we would want to look at the observations that were removed, as they may be observations with high influence\index{influential points}.
```{r}
#| echo: true
#| label: tbl-summarystatesrandom
#| tbl-cap: "Comparing the mean population when different states are randomly removed"
#| message: false
#| warning: false
sample_means <- tibble(seed = c(), mean = c(), states_ignored = c())
for (i in c(1:5)) {
set.seed(i)
dont_get <- c(sample(x = state.name, size = 5))
sample_means <-
sample_means |>
rbind(tibble(
seed = i,
mean =
us_populations |>
filter(!state %in% dont_get) |>
summarise(mean = mean(population)) |>
pull(),
states_ignored = str_c(dont_get, collapse = ", ")
))
}
sample_means |>
tt() |>
style_tt(j = 1:3, align = "lrr") |>
format_tt(digits = 0, num_mark_big = ",", num_fmt = "decimal") |>
setNames(c("Seed", "Mean", "Ignored states"))
```
In the case of the populations of US states, we know that larger states, such as California and New York, will have an outsized effect on our estimate of the mean. @tbl-summarystatesrandom supports this: when we use seeds 2 and 3, the mean is lower.
## Missing data
We have discussed missing data a lot throughout this book, especially in @sec-farm-data. Here we return to it because understanding missing data\index{data!missing}\index{missing data} tends to be a substantial focus of EDA.\index{exploratory data analysis!missing data} When we find missing data---and there are always missing data of some sort or another---we want to establish what type of missingness we are dealing with. Focusing on known-missing observations, that is where there are observations that we can see are missing in the dataset, based on @gelmanhillvehtari2020 [p. 323] we consider three main categories of missing data:\index{missing data!types}
1) Missing Completely At Random;
2) Missing at Random; and
3) Missing Not At Random.
When data are Missing Completely At Random (MCAR),\index{missing data!MCAR} observations are missing from the dataset independent of any other variables---whether in the dataset or not. As discussed in @sec-farm-data, when data are MCAR there are fewer concerns about summary statistics and inference, but data are rarely MCAR. Even if they were it would be difficult to be convinced of this. Nonetheless we can simulate an example. For instance we can remove the population data for three randomly selected states.\index{simulation}
```{r}
set.seed(853)
remove_random_states <-
sample(x = state.name, size = 3, replace = FALSE)
us_states_MCAR <-
us_populations |>
mutate(
population =
if_else(state %in% remove_random_states, NA_real_, population)
)
summary(us_states_MCAR)
```
When observations are Missing at Random (MAR) they are missing from the dataset in a way that is related to other variables in the dataset.\index{missing data!MAR} For instance, it may be that we are interested in understanding the effect of income and gender on political participation, and so we gather information on these three variables. But perhaps for some reason males are less likely to respond to a question about income.
In the case of the US states dataset, we can simulate a MAR dataset by making the three US states with the highest income not have an observation for population. The missingness in population then depends on income, which is another variable in the dataset.
```{r}
highest_income_states <-
us_populations |>
slice_max(income, n = 3) |>
pull(state)
us_states_MAR <-
us_populations |>
mutate(population =
if_else(state %in% highest_income_states, NA_real_, population)
)
summary(us_states_MAR)
```
Finally when observations are Missing Not At Random (MNAR) they are missing from the dataset in a way that is related to either unobserved variables, or the missing variable itself.\index{missing data!MNAR} For instance, it may be that respondents with a higher income, or that respondents with higher education (a variable that we did not collect), are less likely to fill in their income.
In the case of the US states dataset, we can simulate a MNAR dataset by making the three US states with the highest population not have an observation for population.
```{r}
highest_population_states <-
us_populations |>
slice_max(population, n = 3) |>
pull(state)
us_states_MNAR <-
us_populations |>
mutate(population =
if_else(state %in% highest_population_states,
NA_real_,
population))
us_states_MNAR
```
The best approach will be bespoke to the circumstances, but in general we want to use simulation to better understand the implications of our choices. On the data side we can choose to remove observations that are missing or impute a value. (There are also options on the model side, but those are beyond the scope of this book.) These approaches have their place, but need to be used with humility and well communicated. The use of simulation is critical.
We can return to our US states dataset, generate some missing data, and consider a few common approaches for dealing with missing data, and compare the implied values for each state, and the overall US mean population.\index{missing data!strategies} We consider the following options:
1) Drop observations with missing data.
2) Impute the mean of observations without missing data.
3) Use multiple imputation.
To drop the observations with missing data, we can use `mean()` with the `na.rm = TRUE` argument, which excludes observations with missing values from the calculation. To impute the mean, we construct a second dataset with the observations with missing data removed, compute the mean of the population column, and impute that into the missing values in the original dataset. Multiple imputation involves creating many potential datasets, conducting inference, and then bringing them together, potentially through averaging [@gelmanandhill, p. 542]. We can implement multiple imputation with `mice()` from `mice`.
```{r}
#| message: false
#| warning: false
multiple_imputation <-
mice(
us_states_MCAR,
print = FALSE
)
mice_estimates <-
complete(multiple_imputation) |>
as_tibble()
```
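As a sketch of the first two approaches, we could compute the mean with the missing observations dropped and then impute that mean into the missing values (the table below recomputes these values).
```{r}
# Option 1: drop observations with missing population when taking the mean
mean(us_states_MCAR$population, na.rm = TRUE)
# Option 2: impute the mean of the non-missing observations
us_states_MCAR |>
  mutate(
    population = if_else(
      is.na(population),
      mean(population, na.rm = TRUE),
      population
    )
  )
```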
```{r}
#| echo: false
#| label: tbl-imputationoptions
#| tbl-cap: "Comparing the imputed values of population for three US states and the overall mean population"
#| message: false
#| warning: false
# Mean US state population
actual_us_mean <- mean(us_populations$population)
# Mean US state population with missing values dropped
us_mean_drop_missing <- mean(us_states_MCAR$population, na.rm = TRUE)
# Mean imputation for missing values
us_states_MCAR_mean_imputation <- us_states_MCAR |>
mutate(population = if_else(is.na(population), us_mean_drop_missing, population))
# Mean US state population after mean imputation
mean_imputation_overall <- mean(us_states_MCAR_mean_imputation$population)
tibble(
observation = c("Florida", "Montana", "New Hampshire", "Overall"),
dropped = c(NA, NA, NA, us_mean_drop_missing),
impute_mean = c(
us_mean_drop_missing,
us_mean_drop_missing,
us_mean_drop_missing,
mean_imputation_overall
),
multiple_imputation = c(
mice_estimates |> filter(state == "Florida") |> select(population) |> pull(),
mice_estimates |> filter(state == "Montana") |> select(population) |> pull(),
mice_estimates |> filter(state == "New Hampshire") |> select(population) |> pull(),
mice_estimates |> summarise(mean = mean(population)) |> pull()
),
actual = c(
us_populations |> filter(state == "Florida") |> select(population) |> pull(),
us_populations |> filter(state == "Montana") |> select(population) |> pull(),
us_populations |> filter(state == "New Hampshire") |> select(population) |> pull(),
actual_us_mean
)
) |>
tt() |>
style_tt(j = 1:5, align = "lcccr") |>
format_tt(digits = 0,
num_mark_big = ",",
num_fmt = "decimal") |>
setNames(c(
"Observation",
"Drop missing",
"Input mean",
"Multiple imputation",
"Actual"
))
```
@tbl-imputationoptions makes it clear that none of these approaches should be naively imposed. For instance, Florida's population should be 8,277. Imputing the mean across all the states would result in an estimate of 4,308, while multiple imputation results in an estimate of 11,197; the former is too low and the latter is too high. If imputation is the answer, it may be better to look for a different question. It is worth pointing out that multiple imputation was developed for the specific circumstance of limiting the public disclosure of private information [@myboynick].
Nothing can make up for missing data [@manskiwow].\index{missing data!strategies} The conditions under which it makes sense to impute the mean or the prediction based on multiple imputation are not common, and even more rare is our ability to verify them. What to do depends on the circumstances and purpose of the analysis. Simulating the removal of observations that we have and then implementing various options can help us better understand the trade-offs we face. Whatever choice is made---and there is rarely a clear-cut solution---try to document and communicate what was done, and explore the effect of different choices on subsequent estimates. We recommend proceeding by simulating different scenarios that remove some of the data that we have, and evaluating how the approaches differ.
Finally, more prosaically, but just as importantly, sometimes missing data are encoded in a variable with particular values.\index{missing data!encoding} For instance, while R has the option of "NA", sometimes missing numerical data are entered as "-99" or as a very large integer such as "9999999". In the case of the Nationscape survey dataset introduced in @sec-hunt-data, there are three types of known missing data:
- "888": "Asked in this wave, but not asked of this respondent"
- "999": "Not sure, don't know"
- ".": Respondent skipped
It is always worth looking explicitly for values that seem like they do not belong and investigating them. Graphs and tables are especially useful for this purpose.
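As a sketch only---"nationscape_data" and "household_income" here are placeholder names rather than objects created in this chapter---we could count the values of a variable and look for these codes.
```{r}
#| eval: false
# Hypothetical check for sentinel codes; "nationscape_data" and
# "household_income" are placeholder names
nationscape_data |>
  count(household_income, sort = TRUE) |>
  filter(household_income %in% c(888, 999))
```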
## TTC subway delays
As a second, and more involved, example of EDA we use `opendatatoronto`, introduced in @sec-fire-hose, and the `tidyverse` to obtain and explore data about the Toronto subway system\index{Canada!Toronto subway delays}.\index{exploratory data analysis!Toronto} We want to get a sense of the delays that have occurred.
To begin, we download the data on Toronto Transit Commission (TTC) subway delays in 2021. The data are available as an Excel file with a separate sheet for each month. We are interested in 2021 so we filter to just that year then download it using `get_resource()` from `opendatatoronto` and bring the months together with `bind_rows()`.
```{r}
#| eval: false
#| echo: true
all_2021_ttc_data <-
list_package_resources("996cfe8d-fb35-40ce-b569-698d51fc683b") |>
filter(name == "ttc-subway-delay-data-2021") |>
get_resource() |>
bind_rows() |>
clean_names()
write_csv(all_2021_ttc_data, "all_2021_ttc_data.csv")
all_2021_ttc_data
```
```{r}
#| include: false
#| eval: false
# INTERNAL - Run above and then run this to save it in the appropriate place.
write_csv(all_2021_ttc_data, "inputs/data/all_2021_ttc_data.csv")
```
```{r}
#| eval: true
#| echo: false
#| warning: false
#| message: false
all_2021_ttc_data <- read_csv("inputs/data/all_2021_ttc_data.csv")
all_2021_ttc_data
```
The dataset has a variety of columns, and we can find out more about each of them by downloading the codebook. The reason for each delay is coded, and so we can also download the explanations. One variable of interest is "min_delay", which gives the extent of the delay in minutes.
```{r}
#| eval: false
#| include: true
# Data codebook
delay_codebook <-
list_package_resources(
"996cfe8d-fb35-40ce-b569-698d51fc683b"
) |>
filter(name == "ttc-subway-delay-data-readme") |>
get_resource() |>
clean_names()
write_csv(delay_codebook, "delay_codebook.csv")
# Explanation for delay codes
delay_codes <-
list_package_resources(
"996cfe8d-fb35-40ce-b569-698d51fc683b"
) |>
filter(name == "ttc-subway-delay-codes") |>
get_resource() |>
clean_names()
write_csv(delay_codes, "delay_codes.csv")
```
```{r}
#| include: false
#| eval: false
# INTERNAL - Run above and then run this to save it in the appropriate place.
write_csv(delay_codebook, "inputs/data/delay_codebook.csv")
write_csv(delay_codes, "inputs/data/delay_codes.csv")
```
```{r}
#| eval: true
#| include: false
delay_codebook <- read_csv("inputs/data/delay_codebook.csv")
delay_codebook
delay_codes <- read_csv("inputs/data/delay_codes.csv")
delay_codes
```
There is no one way to explore a dataset while conducting EDA, but we are usually especially interested in:
- What should the variables look like? For instance, what is their class, what are the values, and what does the distribution of these look like?
- What aspects are surprising, both in terms of data that are there that we do not expect, such as outliers, but also in terms of data that we may expect but do not have, such as missing data.
- Developing a goal for our analysis. For instance, in this case, it might be understanding the factors such as stations and the time of day that are associated with delays. While we would not answer these questions formally here, we might explore what an answer could look like.
It is important to document all aspects as we go through and note anything surprising. We are looking to create a record of the steps and assumptions that we made as we were going because these will be important when we come to modeling. In the natural sciences, a research notebook of this type can even be a legal document [@nihtalk].
### Distribution and properties of individual variables
We should check that the variables are what they say they are. If they are not, then we need to work out what to do. For instance, should we change them, or possibly even remove them? It is also important to ensure that the class of each variable is as we expect: variables that should be a factor are a factor, those that should be a character are a character, and we do not accidentally have, say, factors as numbers, or vice versa. One way to do this is to use `unique()`, and another is to use `table()`. There is no universal answer to which variables should be of which class, because the answer depends on the context.
```{r}
unique(all_2021_ttc_data$day)
unique(all_2021_ttc_data$line)
table(all_2021_ttc_data$day)
table(all_2021_ttc_data$line)
```
There are likely issues with the subway line names. Some of them have a clear fix, but not all. One option would be to drop them, but we would need to think about whether these errors might be correlated with something of interest; if they were, then we may be dropping important information. There is usually no one right answer, because it depends on what we are using the data for. We would note the issue as we continued with EDA and decide later what to do. For now, we will remove all the lines that are not the ones that we know to be correct based on the codebook.
```{r}
delay_codebook |>
filter(field_name == "Line")
all_2021_ttc_data_filtered_lines <-
all_2021_ttc_data |>
filter(line %in% c("YU", "BD", "SHP", "SRT"))
```
Entire careers are spent understanding missing data, and the presence, or lack, of missing values can haunt an analysis. To get started we could look at known-unknowns, which are the NAs for each variable. For instance, we could create counts by variable.
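One minimal way to do that, as a sketch, is to count the NAs in each column with `summarise()` and `across()`; `miss_var_summary()` from `naniar` provides a similar summary.
```{r}
# Count the missing values in each variable
all_2021_ttc_data_filtered_lines |>
  summarise(across(everything(), ~ sum(is.na(.x))))
```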
In this case we have many missing values in "bound" and two in "line". For these known-unknowns, as discussed in @sec-farm-data, we are interested in whether they are missing at random. Ideally, we would like to show that the data just happened to drop out, but this is unlikely, and so we usually look for what is systematic about how the data are missing.
Sometimes data happen to be duplicated. If we did not notice this, then our analysis would be wrong in ways that we would not be able to consistently expect. There are a variety of ways to look for duplicated rows, but `get_dupes()` from `janitor` is especially useful.
```{r}
#| eval: true
#| include: true
#| message: false
#| warning: false
get_dupes(all_2021_ttc_data_filtered_lines)
```
This dataset has many duplicates. We are interested in whether there is something systematic going on. Remembering that during EDA we are trying to quickly come to terms with a dataset, one way forward is to flag this as an issue to come back to and explore later, and to just remove duplicates for now using `distinct()`.
```{r}
#| eval: true
#| include: true
all_2021_ttc_data_no_dupes <-
all_2021_ttc_data_filtered_lines |>
distinct()
```
The station names have many errors.
```{r}
all_2021_ttc_data_no_dupes |>
count(station) |>
filter(str_detect(station, "WEST"))
```
We could try to quickly bring a little order to the chaos by taking just the first word or first few words, accounting for names like "ST. CLAIR" and "ST. PATRICK" by checking whether the name starts with "ST", and distinguishing between stations like "DUNDAS" and "DUNDAS WEST" by checking whether the name contains "WEST". Again, we are just trying to get a sense of the data, not necessarily make binding decisions here. We use `word()` from `stringr` to extract specific words from the station names.
```{r}
#| eval: true
#| include: true
all_2021_ttc_data_no_dupes <-
all_2021_ttc_data_no_dupes |>
mutate(
station_clean =
case_when(
str_starts(station, "ST") &
str_detect(station, "WEST") ~ word(station, 1, 3),
str_starts(station, "ST") ~ word(station, 1, 2),
str_detect(station, "WEST") ~ word(station, 1, 2),
TRUE ~ word(station, 1)
)
)
all_2021_ttc_data_no_dupes
```
We need to see the data in its original state to understand it, and we often use bar charts, scatterplots, line plots, and histograms for this. During EDA we are not so concerned with whether the graph looks nice, but are instead trying to acquire a sense of the data as quickly as possible. We can start by looking at the distribution of "min_delay", which is one outcome of interest.
```{r}
#| eval: true
#| include: true
#| message: false
#| warning: false
#| fig-cap: "Distribution of delay, in minutes"
#| label: fig-delayhist
#| layout-ncol: 2
#| fig-subcap: ["Distribution of delay", "With a log scale"]
all_2021_ttc_data_no_dupes |>
ggplot(aes(x = min_delay)) +
geom_histogram(bins = 30)
all_2021_ttc_data_no_dupes |>
ggplot(aes(x = min_delay)) +
geom_histogram(bins = 30) +
scale_x_log10()
```
The largely empty graph in @fig-delayhist-1 suggests the presence of outliers. There are a variety of ways to try to understand what could be going on, but one quick way to proceed is to use logarithms, remembering that we would expect values of zero to drop away (@fig-delayhist-2).
This initial exploration suggests there are a small number of large delays that we might like to explore further. We will join this dataset with "delay_codes" to understand what is going on.
```{r}
#| eval: true
#| include: true
fix_organization_of_codes <-
rbind(
delay_codes |>
select(sub_rmenu_code, code_description_3) |>
mutate(type = "sub") |>
rename(
code = sub_rmenu_code,
code_desc = code_description_3
),
delay_codes |>
select(srt_rmenu_code, code_description_7) |>
mutate(type = "srt") |>
rename(
code = srt_rmenu_code,
code_desc = code_description_7
)
)
all_2021_ttc_data_no_dupes_with_explanation <-
all_2021_ttc_data_no_dupes |>
mutate(type = if_else(line == "SRT", "srt", "sub")) |>
left_join(
fix_organization_of_codes,
by = c("type", "code")
)
all_2021_ttc_data_no_dupes_with_explanation |>
select(station_clean, code, min_delay, code_desc) |>
arrange(-min_delay)
```
From this we can see that the 348 minute delay was due to "Traction Power Rail Related", the 343 minute delay was due to "Signals - Track Circuit Problems", and so on.
Another thing that we are looking for is various groupings of the data, especially where sub-groups may end up with only a small number of observations in them. This is because our analysis could be especially influenced by them. One quick way to do this is to group the data by a variable that is of interest, for instance, "line", using color.
```{r}
#| eval: true
#| include: true
#| message: false
#| warning: false
#| fig-cap: "Distribution of delay, in minutes"
#| label: fig-delaydensity
#| layout-ncol: 2
#| fig-subcap: ["Density", "Frequency"]
all_2021_ttc_data_no_dupes_with_explanation |>
ggplot() +
geom_histogram(
aes(
x = min_delay,
y = after_stat(density),
fill = line
),
position = "dodge",
bins = 10
) +
scale_x_log10()
all_2021_ttc_data_no_dupes_with_explanation |>
ggplot() +
geom_histogram(
aes(x = min_delay, fill = line),
position = "dodge",
bins = 10
) +
scale_x_log10()
```
@fig-delaydensity-1 uses density so that we can look at the distributions more comparably, but we should also be aware of differences in frequency (@fig-delaydensity-2). In this case, we see that "SHP" and "SRT" have much smaller counts.
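As a quick check of those frequencies, we could, as a sketch, count the number of observations by line.
```{r}
# Number of delay observations by subway line
all_2021_ttc_data_no_dupes_with_explanation |>
  count(line)
```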
To group by another variable, we can add facets (@fig-delayfreqfacet).
```{r}
#| eval: true
#| fig-cap: "Frequency of the distribution of delay, in minutes, by day"
#| include: true
#| label: fig-delayfreqfacet
#| message: false
#| warning: false
all_2021_ttc_data_no_dupes_with_explanation |>
ggplot() +
geom_histogram(
aes(x = min_delay, fill = line),
position = "dodge",
bins = 10
) +
scale_x_log10() +
facet_wrap(vars(day)) +
theme(legend.position = "bottom")
```
We can also plot the top five stations by mean delay, faceted by line (@fig-whatisthisagraphforants). This raises something that we would need to follow up on: what is "ZONE" in "YU"?
```{r}
#| eval: true
#| include: true
#| message: false
#| fig-cap: "Top five stations, by mean delay and line"
#| label: fig-whatisthisagraphforants
all_2021_ttc_data_no_dupes_with_explanation |>
summarise(mean_delay = mean(min_delay), n_obs = n(),
.by = c(line, station_clean)) |>
filter(n_obs > 1) |>
arrange(line, -mean_delay) |>
slice(1:5, .by = line) |>
ggplot(aes(station_clean, mean_delay)) +
geom_col() +
coord_flip() +
facet_wrap(vars(line), scales = "free_y")
```
As discussed in @sec-clean-and-prepare, dates are often difficult to work with because they are so prone to having issues. For this reason, it is especially important to consider them during EDA. Let us create a graph by week, to see if there is any seasonality over the course of a year. When using dates, `lubridate` is especially useful. For instance, we can look at the average delay, of those that were delayed, by week, using `week()` to construct the weeks (@fig-delaybyweek).
```{r}
#| eval: true
#| fig-cap: "Average delay, in minutes, by week, for the Toronto subway"
#| include: true
#| label: fig-delaybyweek
#| message: false
#| warning: false
all_2021_ttc_data_no_dupes_with_explanation |>
filter(min_delay > 0) |>
mutate(week = week(date)) |>
summarise(mean_delay = mean(min_delay),
.by = c(week, line)) |>
ggplot(aes(week, mean_delay, color = line)) +
geom_point() +
geom_smooth() +
facet_wrap(vars(line), scales = "free_y")
```
Now let us look at the proportion of delays that were greater than ten minutes (@fig-longdelaybyweek).
```{r}
#| eval: true
#| fig-cap: "Delays longer than ten minutes, by week, for the Toronto subway"
#| include: true
#| label: fig-longdelaybyweek
#| message: false
#| warning: false
all_2021_ttc_data_no_dupes_with_explanation |>
mutate(week = week(date)) |>
summarise(prop_delay = sum(min_delay > 10) / n(),
.by = c(week, line)) |>
ggplot(aes(week, prop_delay, color = line)) +
geom_point() +
geom_smooth() +
facet_wrap(vars(line), scales = "free_y")
```
These figures, tables, and analysis may not have a place in a final paper. Instead, they allow us to become comfortable with the data. We note aspects about each that stand out, as well as the warnings and any implications or aspects to return to.
### Relationships between variables
We are also interested in looking at the relationship between two variables. We will draw heavily on graphs for this. Appropriate types, for different circumstances, were discussed in @sec-static-communication. Scatter plots are especially useful for continuous variables, and are a good precursor to modeling. For instance, we may be interested in the relationship between the delay and the gap, which is the number of minutes between trains (@fig-delayvsgap).
```{r}
#| eval: true
#| fig-cap: "Relationship between delay and gap for the Toronto subway in 2021"
#| include: true
#| label: fig-delayvsgap
#| message: false
#| warning: false
all_2021_ttc_data_no_dupes_with_explanation |>
ggplot(aes(x = min_delay, y = min_gap, alpha = 0.1)) +
geom_point() +
scale_x_log10() +
scale_y_log10()
```
The relationship between categorical variables takes more work, but we could also, for instance, look at the top five reasons for delay by station. We may be interested in whether they differ, and how any difference could be modelled (@fig-categorical).
```{r}
#| eval: true
#| fig-cap: "Relationship between categorical variables for the Toronto subway in 2021"
#| include: true
#| label: fig-categorical
#| message: false
#| warning: false
all_2021_ttc_data_no_dupes_with_explanation |>
summarise(mean_delay = mean(min_delay),
.by = c(line, code_desc)) |>
arrange(-mean_delay) |>
slice(1:5) |>
ggplot(aes(x = code_desc, y = mean_delay)) +
geom_col() +
facet_wrap(vars(line), scales = "free_y", nrow = 4) +
coord_flip()
```
<!-- ## Case study - Historical Canadian elections -->
<!-- https://twitter.com/semrasevi/status/1122889166008745985?s=21 -->
<!-- ```{r}
#| eval: false
#| include: true
<!-- library(tidyverse) -->
<!-- elections_data <- read_csv("inputs/federal_candidates-1.csv") -->
<!-- elections_data$Province |> table() -->
<!-- # There are inconsistencies in the province names -->
<!-- elections_data$Province[elections_data$Province == "Québec"] <- "Quebec" -->
<!-- elections_data <- elections_data |> -->
<!-- filter(!is.na(Province)) -->
<!-- # Check gender -->
<!-- elections_data$Gender |> table() -->
<!-- # Check occupation -->
<!-- elections_data$occupation |> table() -->
<!-- # Get a count of how many uniques there are -->
<!-- elections_data$occupation |> unique() |> length() -->
<!-- # Check party -->
<!-- elections_data$party_short |> table() -->
<!-- elections_data$party_short |> unique() |> length() -->
<!-- # Check incumbency -->
<!-- elections_data$incumbent_candidate |> table() -->
<!-- elections_data$party_short |> unique() |> length() -->
<!-- # Add year -->
<!-- elections_data <- -->
<!-- elections_data |> -->
<!-- mutate(year = year(edate)) -->
<!-- # Add a counter for year -->
<!-- elections_data <- -->
<!-- elections_data |> -->
<!-- mutate(counter = year - 1867) -->
<!-- #### Save #### -->
<!-- write_csv(elections_data, "outputs/elections.csv") -->
<!-- ``` -->
<!-- ```{r}
#| eval: false
#| include: true
<!-- library(lubridate) -->
<!-- library(tidyverse) -->
<!-- elections_data <- read_csv("outputs/elections.csv") -->
<!-- elections_data |> -->
<!-- ggplot(aes(x = edate)) + -->
<!-- geom_histogram() -->
<!-- elections_data |> -->
<!-- ggplot(aes(x = edate, y = Result, color = Gender)) + -->
<!-- geom_point() + -->
<!-- facet_wrap(vars(Province)) -->
<!-- elections_data |> -->
<!-- ggplot(aes(x = Province, fill = Gender)) + -->
<!-- geom_bar(position = "dodge") -->
<!-- # We discovered WW1! -->
<!-- ``` -->
<!-- ```{r}
#| eval: false
#| include: true
<!-- #### Set up workspace ### -->
<!-- library(broom) -->
<!-- library(tidyverse) -->
<!-- #### Read in data #### -->
<!-- elections_data <- read_csv("outputs/elections.csv") -->
<!-- #### Model #### -->
<!-- # Just gender -->
<!-- model1 <- lm(Result ~ Gender, data = elections_data) -->
<!-- tidy(model1) -->
<!-- # Gender and incumbent -->
<!-- model2 <- lm(Result ~ Gender + incumbent_candidate, data = elections_data) -->
<!-- tidy(model2) -->
<!-- # Gender and incumbent and province -->
<!-- model3 <- lm(Result ~ Gender + incumbent_candidate + Province, data = elections_data) -->
<!-- tidy(model3) -->
<!-- # Gender and incumbent and province and year -->
<!-- model4 <- lm(Result ~ Gender + incumbent_candidate + Province + year, data = elections_data) -->
<!-- model5 <- lm(Result ~ Gender + incumbent_candidate + Province + year*Gender, data = elections_data) -->
<!-- tidy(model4) -->
<!-- tidy(model5) -->
<!-- # CHange the year into a counter that increments one for each year -->
<!-- model6 <- lm(Result ~ Gender + incumbent_candidate + Province + counter + counter*Gender, data = elections_data) -->
<!-- tidy(model6) -->
<!-- ``` -->
## Airbnb listings in London, England
In this case study we look at Airbnb\index{Airbnb} listings in London, England,\index{United Kingdom!London} as at 14 March 2023.\index{exploratory data analysis!Airbnb} The dataset is from [Inside Airbnb](http://insideairbnb.com) [@airbnbdata] and we will read it from their website, and then save a local copy. We can give `read_csv()` a link to where the dataset is and it will download it. This helps with reproducibility because the source is clear. But as that link could change at any time, longer-term reproducibility, as well as wanting to minimize the effect on the Inside Airbnb\index{Airbnb!Inside Airbnb} servers, suggests that we should also save a local copy of the data and then use that.
To get the dataset that we need, go to Inside Airbnb $\rightarrow$ "Data" $\rightarrow$ "Get the Data", then scroll down to London. We are interested in the "listings dataset", and we right click to get the URL that we need (@fig-getairbnb). Inside Airbnb update the data that they make available, and so the particular dataset that is available will change over time.
![Obtaining the Airbnb data from Inside Airbnb](figures/11-inside_airbnb.png){#fig-getairbnb width=95% fig-align="center"}
As the original dataset is not ours, we should not make it public without first getting written permission. For instance, we may want to add it to our inputs folder, but use a ".gitignore" entry, covered in @sec-reproducible-workflows, to ensure that we do not push it to GitHub. The "guess_max" option in `read_csv()` helps us avoid having to specify the column types. Usually `read_csv()` takes a best guess at the column types based on the first few rows, but sometimes those rows are misleading, and so "guess_max" forces it to look at a larger number of rows to work out what is going on. Paste the URL that we copied from Inside Airbnb into the "url" variable in the code below, and once the dataset is downloaded, save a local copy.
```{r}
#| include: true
#| message: false
#| warning: false
#| eval: false
url <-
paste0(
"http://data.insideairbnb.com/united-kingdom/england/",
"london/2023-03-14/data/listings.csv.gz"
)
airbnb_data <-
read_csv(
file = url,
guess_max = 20000
)
write_csv(airbnb_data, "airbnb_data.csv")
airbnb_data
```
We should refer to this local copy of the data when we run our scripts to explore it, rather than asking the Inside Airbnb servers for the data each time. It might even be worth commenting out this call to their servers to ensure that we do not accidentally stress their service.
Again, add this filename---"airbnb_data.csv"---to the ".gitignore" file so that it is not pushed to GitHub. The size of the dataset will create complications that we would like to avoid.
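Subsequent scripts can then read from that local copy. Here is a minimal sketch, assuming the file was saved as "airbnb_data.csv" in the working directory.
```{r}
#| eval: false
# Read the saved local copy rather than calling the Inside Airbnb servers again
airbnb_data <-
  read_csv(
    file = "airbnb_data.csv",
    guess_max = 20000
  )
```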
```{r}
#| eval: false
#| include: false
# INTERNAL
write_csv(airbnb_data, "dont_push/2023-03-14-london-airbnb_data.csv")
airbnb_data <-
read_csv(
file = "dont_push/2023-03-14-london-airbnb_data.csv",
guess_max = 20000
)
```
While we need to archive this CSV because that is the original, unedited data, at more than 100MB it is a little unwieldy. For exploratory purposes we will create a parquet file\index{parquet} with selected variables (we do this in an iterative way, using `names(airbnb_data)` to work out the variable names).
```{r}
#| eval: false
#| include: true
airbnb_data_selected <-
airbnb_data |>
select(
host_id,
host_response_time,
host_is_superhost,
host_total_listings_count,
neighbourhood_cleansed,
bathrooms,
bedrooms,
price,
number_of_reviews,
review_scores_rating,
review_scores_accuracy,
review_scores_value
)
write_parquet(
x = airbnb_data_selected,
sink =
"2023-03-14-london-airbnblistings-select_variables.parquet"
)
rm(airbnb_data)
```
```{r}
#| eval: true
#| include: false
#| warning: false
#| message: false
airbnb_data_selected <-
read_parquet(file = "inputs/data/2023-03-14-london-airbnblistings-select_variables.parquet")
airbnb_data_selected
```
### Distribution and properties of individual variables
First we might be interested in price. It is a character at the moment, and so we need to convert it to a numeric. This is a common problem, and we need to be a little careful that the values do not all just convert to NA. If we simply force the price variable to be numeric, then many values will become NA because they contain characters, such as "$", that have no clear numeric equivalent. We need to remove those characters first.
```{r}
#| eval: true
#| include: true
airbnb_data_selected$price |>
head()
airbnb_data_selected$price |>
str_split("") |>
unlist() |>
unique()
airbnb_data_selected |>
select(price) |>
filter(str_detect(price, ","))
airbnb_data_selected <-
airbnb_data_selected |>
mutate(
price = str_remove_all(price, "[\\$,]"),
price = as.integer(price)
)
```
Now we can look at the distribution of prices (@fig-airbnbpricesfirst-1). There are outliers, so again we might like to consider it on the log scale (@fig-airbnbpricesfirst-2).
```{r}
#| eval: true
#| include: true
#| warning: false
#| message: false