<!DOCTYPE html>
<html>
<head>
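<!-- Google Analytics (gtag.js) -->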
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-116924853-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-116924853-1');
</script>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Hazel Doughty</title>
<meta name="description" content="">
<link rel="stylesheet" href="css/main.css">
<link rel="stylesheet" href="https://cdn.rawgit.com/jpswalsh/academicons/master/css/academicons.min.css"/>
<link rel="shortcut icon" type="image/ico" href="favicon.ico" />
<!-- Custom fonts for this template -->
<link href="https://fonts.googleapis.com/css?family=Saira+Extra+Condensed:100,200,300,400,500,600,700,800,900" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Open+Sans:300,300i,400,400i,600,600i,700,700i,800,800i" rel="stylesheet">
<link href="vendor/font-awesome/css/font-awesome.min.css" rel="stylesheet">
<link href="vendor/devicons/css/devicons.min.css" rel="stylesheet">
<link href="vendor/simple-line-icons/css/simple-line-icons.css" rel="stylesheet">
<link rel='stylesheet' id='open-sans-css' href='//fonts.googleapis.com/css?family=Open+Sans%3A300italic%2C400italic%2C600italic%2C300%2C400%2C600&subset=latin%2Clatin-ext&ver=4.2.4' type='text/css' media='all' />
<link href='https://fonts.googleapis.com/css?family=Titillium+Web:600italic,600,400,400italic' rel='stylesheet' type='text/css'>
<!-- fontawesome and academicons -->
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.1.0/css/all.css" integrity="sha384-lKuwvrZot6UHsBSfcMvOkWwlCMgc0TaWr+30HWe3a4ltaBwTZhyTEggF5tJv8tbt" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdn.rawgit.com/jpswalsh/academicons/master/css/academicons.min.css">
</head>
<body>
<header class="site-header">
<div class="wrapper">
<a class="site-title" href="/">Hazel Doughty</a>
<nav class="site-nav">
<a href="#" class="menu-icon menu.open">
<svg viewBox="0 0 18 15">
<path fill="#4CAC9D" d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.031C17.335,0,18,0.665,18,1.484L18,1.484z"/>
<path fill="#4CAC9D" d="M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0c0-0.82,0.665-1.484,1.484-1.484 h15.031C17.335,6.031,18,6.696,18,7.516L18,7.516z"/>
<path fill="#4CAC9D" d="M18,13.516C18,14.335,17.335,15,16.516,15H1.484C0.665,15,0,14.335,0,13.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.031C17.335,12.031,18,12.696,18,13.516L18,13.516z"/>
</svg>
</a>
<div class="trigger"><h1>Main Navigation</h1>
<ul class="menu">
</ul>
</div>
</nav>
</div>
</header>
<div class="page-content">
<div class="wrapper">
<p><img src="img/profile.jpg" style="width: 180px; float: right" hspace="20" /></p>
<p>I am an Assistant Professor at Leiden University in the <a href="https://liacs.leidenuniv.nl">Leiden Institute for Advanced Computer Science (LIACS)</a>. Previously I was a postdoctoral researcher at the University of Amsterdam, working with <a href="http://www.ceessnoek.info/">Prof. Cees Snoek</a>.
I completed my PhD at the University of Bristol, advised by <a href="https://dimadamen.github.io/">Prof. Dima Damen</a> and <a href="http://www.bristol.ac.uk/engineering/people/walterio-w-mayol-cuevas/index.html">Prof. Walterio Mayol-Cuevas</a>. My area of interest is Video Understanding, with my PhD thesis (which you can find <a href="SkillDeterminationFromLongVideosThesis.pdf">here</a>) focussing on Skill Determination. I am particularly interested in fine-grained and detailed video understanding with weak, noisy or other forms of incomplete supervision.</p>
<!-- Icons from fontawesome (Make less ugly later) -->
<ul class="list-inline list-social-icons mb-0">
<li class="list-inline-item">
<a href="https://scholar.google.com/citations?user=b3koBVwAAAAJ&hl=en">
<span class="fa-stack fa-lg">
<i class="ai ai-google-scholar-square ai-2x"></i>
</span>
</a>
</li>
<li class="list-inline-item">
<a href="https://github.com/hazeld">
<span class="fa-stack fa-lg">
<i class="fa fa-square fa-stack-2x"></i>
<i class="fab fa-github fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li class="list-inline-item">
<a href="https://twitter.com/doughty_hazel">
<span class="fa-stack fa-lg">
<i class="fa fa-square fa-stack-2x"></i>
<i class="fab fa-twitter fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
<li class="list-inline-item">
<a href="HazelDoughtyCV.pdf">
<span class="fa-stack fa-lg">
<i class="fa fa-square fa-stack-2x"></i>
<i class="fa fa-id-card fa-stack-1x fa-inverse"></i>
</span>
</a>
</li>
</ul>
<p><strong>Contact</strong>: h.r.doughty *at* liacs.leidenuniv.nl</p>
<h1 id="news--activities">News & Activities</h1>
<hr />
<div class="container"> <div class="events">
<ul>
<li> February: I gave a talk at the <a href="https://dsc.uva.nl/programmes/interdisciplinary-phd-programme/hava-lab/hava-lab.html">HAVA lab</a> in the University of Amsterdam
<li> January: I'll be giving a keynote at the CVPR 2025 workshop on <a href="https://sites.google.com/view/ivise2025">Interactive Video Search and Exploration (IViSE)</a>
<li> October: Welcome to <a href="https://lucstrater.github.io/">Luc Sträter</a> who started his PhD at LIACS
<li> September: Two papers accepted to ACCV as orals. Details coming soon.
<li> September: I'll be serving as area chair for CVPR 2025 and ICCV 2025
<li> July 2024: I gave a talk at the <a href="https://www.acvss.ai/home">African Summer School on Computer Vision</a>
<li> July 2024: Our paper SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery is accepted to ECCV 2024, more details coming soon.
<li> May 2024: I have a <a href="https://www.universiteitleiden.nl/en/vacancies/2024/q2/24-30314860phd-candidate-detailed-video-understanding">PhD vacancy on 'Detailed Video Understanding'</a>
<li> April 2024: I gave a talk in the <a href="https://www.aicentre.dk/events/visipedia-workshop-2024">Visipedia workshop</a> at the University of Copenhagen
<li> April 2024: Welcome to Kaiting Liu who started her PhD at LIACS
<li> March 2024: I gave a talk at the University of Bath
<li> February 2024: Our paper <a href="https://arxiv.org/abs/2401.04716">Low-Resource Vision Challenges for Foundational Models</a> is accepted to CVPR 2024
<li> February 2024: I'm co-organizing the CVPR 2024 workshop on <a href="https://winvu.github.io/cvpr-24/">What is Next in Video Understanding?</a>
<li> January 2024: I am organizing <a href="https://sites.google.com/view/nccv2024/home">NCCV 2024</a>
<li> December 2023: Congratulations to <a href="https://fmthoker.github.io">Dr. Fida Mohammad Thoker</a> who successfully defended his thesis titled <a href="https://fmthoker.github.io/pdfs/colored_digital_thesis_final.pdf">Video-Efficient Foundation Models</a>
<li> November 2023: Happy to be a <a href="https://neurips.cc/Conferences/2023/ProgramCommittee#top-reviewers">top reviewer</a> for NeurIPS 2023.
<li> October 2023: I'm hiring for <a href="https://www.universiteitleiden.nl/en/vacancies/2023/qw4/23-683141412-phd-candidates-detailed-video-understanding">two PhD positions in 'Detailed Video Understanding'</a>
<li> September 2023: Two papers accepted to NeurIPS
<li> September 2023: I joined Leiden University as an Assistant Professor
<li> August 2023: I'm thrilled to receive a <a href="https://www.nwo.nl/en/researchprogrammes/nwo-talent-programme/projects-veni/veni-2022">Veni</a> grant for my project "From What to How: Perceiving Subtle Details in Videos"
<li> July 2023: Our paper "Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization" was accepted to ICCV, <a href="https://arxiv.org/abs/2303.11003">pre-print available here</a>
<li> July 2023: In September I'll join Leiden University as an Assistant Professor, look out for PhD openings!
<li> June 2023: I gave a talk at the CVPR 2023 workshop on <a href="https://sites.google.com/view/l3d-ivu-2023"> Learning with Limited Labelled Data for Image and Video Understanding</a>
<li> April 2023: I became an associate editor of CVIU
<li> February 2023: I gave a talk at the Rising Stars in AI Symposium 2023 in KAUST
<li> December 2022: Excited to be a Workshop Chair for BMVC 2023
<li> December 2022: I'm honored to serve as an Area Chair for ICCV 2023
<li> December 2022: I gave a guest lecture at the University of Catania
<li> October 2022: Happy to be an <a href="https://eccv2022.ecva.net/program/outstanding-reviewers/">outstanding reviewer for ECCV 2022</a>
<li> September 2022: I became an <a href="https://ellis.eu/">ELLIS</a> member
<li> September 2022: I gave a talk at the <a href="https://sites.google.com/view/videosymposium2022/homepage">2022 Video Understanding Symposium</a>
<li> July 2022: <a href="https://arxiv.org/abs/2203.14221">'How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning?'</a> is accepted to ECCV
<li> June 2022: I was a panelist at Women in Computer Vision CVPR 2022
<li> May 2022: I gave a talk at the <a href="http://computervisionbylearning.info/">Computer Vision by Learning Summer School</a>
<li> March 2022: Our papers on <a href="https://hazeldoughty.github.io/Papers/PseudoAdverbs">Pseudo Adverbs</a> and <a href="https://xiaobai1217.github.io/DomainAdaptation" >Audio-Adaptive Action Recognition</a> are accepted to CVPR.</li>
<li> September 2021: Our paper <a href="https://arxiv.org/abs/2006.13256">Rescaling Egocentric Vision</a> is accepted for publication in IJCV
<li> September 2021: I'm an <a href="http://iccv2021.thecvf.com/outstanding-reviewers">Outstanding Reviewer for ICCV 2021</a>
<li> August 2021: Our paper <a href="https://arxiv.org/abs/2108.03656">Skeleton-Contrastive 3D Action Representation Learning</a> was accepted at ACM Multimedia 2021
<li> July 2021: I'm co-organizing the <a href="http://preregister.science">NeurIPS'21 Workshop on Pre-registration in ML</a>
<li> May 2021: Happy to be an <a href="http://cvpr2021.thecvf.com/node/184">outstanding reviewer</a> for CVPR 2021
<li> April 2021: I'm co-organizing the <a href="https://sites.google.com/view/srvu-iccv21-workshop">Workshop on Structured Representations for Video Understanding</a> at ICCV.
<li> March 2021: Our paper <a href="https://arxiv.org/abs/2103.10095">On Semantic Similarity in Video Retrieval</a> got accepted at CVPR 2021.
<li> February 2021: I gave a <a href="https://www.youtube.com/watch?v=0Tz-4_c3A-E&ab_channel=AIinRoboticsSeminarSeries">talk</a> at the <a href="https://www.pair.toronto.edu/robotics-rg/">University of Toronto's AI in Robotics Seminar Series</a>.
<li>October 2020: Successfully defended my PhD thesis "Skill Determination from Long Videos". Thank you to my examiners Josef Sivic and Bill Freeman.
<li> August 2020: Proud to be an <a href="https://eccv2020.eu/outstanding-reviewers/"> Outstanding Reviewer for ECCV 2020</a>
<li> July 2020: <a href="https://epic-kitchens.github.io/2020-100">EPIC-Kitchens-100</a> released. This is an extension of the original EPIC-Kitchens, now up to 100 hours of video and 90,000 action segments.
<li> June 2020: I presented our CVPR paper Action Modifiers: Learning from Adverbs in Instructional Videos at the <a href="https://www.robots.ox.ac.uk/~vgg/challenges/video-pentathlon/">Video Pentathlon workshop</a>.
<li> April 2020: The journal paper The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines has been accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
<li> Feb 2020: <a href="https://arxiv.org/abs/1912.06617">Action Modifiers: Learning from Adverbs in Instructional Videos</a> is accepted to CVPR 2020.</li>
<li> Jan 2020: I'm co-organizing the <a href="https://sites.google.com/view/wicvworkshop-cvpr2020/">Women in Computer Vision</a> and <a href="https://eyewear-computing.org/EPIC_CVPR20/">Egocentric Perception, Interaction and Computing</a> workshops at CVPR 2020.</li>
<li> Dec 2019: Our new paper on 'Action Modifiers' is available on <a href="https://arxiv.org/abs/1912.06617">arXiv</a></li>
<li> June 2019: We're presenting our paper on rank-aware temporal attention for skill determination at CVPR 2019.
</ul>
</div></div>
<h1 id="publications">Publications</h1>
<hr />
<table class="researchtable">
<tbody>
<tr>
<td class="img"> <img src="img/hd-epic.png" /> </td>
<td valign="top">
<strong>HD-EPIC: A Highly-Detailed Egocentric Video Dataset</strong><br />
Toby Perrett, Ahmad Darkhalil, Saptarshi Sinha, Omar Emara, Sam Pollard, Kranti Parida, Kaiting Liu, Prajwal Gatti, Siddhant Bansal, Kevin Flanagan, Jacob Chalk, Zhifan Zhu, Rhodri Guerrier, Fahd Abdelazim, Bin Zhu, Davide Moltisanti, Michael Wray, <u>Hazel Doughty</u>, Dima Damen<br />
ArXiv, 2025.<br />
<strong><a href="https://hd-epic.github.io/">[Webpage]</a> <a href="https://arxiv.org/abs/2502.04144">[ArXiv]</a> <a href="https://github.com/hd-epic/hd-epic-annotations">[Dataset]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/locomotion_concept.png" /> </td>
<td valign="top">
<strong>LocoMotion: Learning Motion-Focused Video-Language Representations</strong><br />
<u>Hazel Doughty</u>, Fida Mohammad Thoker, Cees Snoek<br />
Asian Conference on Computer Vision (<strong>ACCV</strong>), 2024. (<strong>Oral</strong>)<br />
<strong><a href="Papers/LocoMotion/">[Webpage]</a> <a href="https://arxiv.org/abs/2410.12018">[ArXiv]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/beyondcoarse_concept.png" /> </td>
<td valign="top">
<strong>Beyond Coarse-Grained Matching in Video-Text Retrieval</strong><br />
Aozhu Chen, <u>Hazel Doughty</u>, Xirong Li, Cees Snoek<br />
Asian Conference on Computer Vision (<strong>ACCV</strong>), 2024. (<strong>Oral</strong>)<br />
<strong><a href="https://arxiv.org/abs/2410.12407">[ArXiv]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/selex.png" /> </td>
<td valign="top">
<strong>SelEx: Self-Expertise In Fine-Grained Generalized Category Discovery</strong><br />
Sarah Rastegar, Mohammadreza Salehi, Yuki Asano, <u>Hazel Doughty</u>, Cees Snoek<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2024. <br />
<strong><a href="https://arxiv.org/abs/2408.14371">[arXiv]</a> <a href="https://github.com/SarahRastegar/SelEx">[Code]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/concept_low_resource.png" /> </td>
<td valign="top">
<strong>Low-Resource Vision Challenges for Foundation Models</strong><br />
Yunhua Zhang, <u>Hazel Doughty</u>, Cees Snoek<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2024. <br />
<strong> <a href="https://xiaobai1217.github.io/Low-Resource-Vision">[Webpage]</a> <a href="https://arxiv.org/abs/2401.04716">[arXiv]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/unseen_modality_teaser.png" /> </td>
<td valign="top">
<strong>Learning Unseen Modality Interaction</strong><br />
Yunhua Zhang, <u>Hazel Doughty</u>, Cees Snoek<br />
Advances in Neural Information Processing Systems (<strong>NeurIPS</strong>), 2023. <br />
<strong> <a href="https://xiaobai1217.github.io/Unseen-Modality-Interaction">[Webpage]</a> <a href="https://arxiv.org/abs/2306.12795">[arXiv]</a> <a href="https://github.com/xiaobai1217/Unseen-Modality-Interaction">[Code]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/category_codes_teaser.png" /> </td>
<td valign="top">
<strong>Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery</strong><br />
Sarah Rastegar, <u>Hazel Doughty</u>, Cees Snoek<br />
Advances in Neural Information Processing Systems (<strong>NeurIPS</strong>), 2023. <br />
<strong><a href="https://arxiv.org/abs/2310.19776">[arXiv]</a> <a href="https://github.com/SarahRastegar/InfoSieve">[Code]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/tubelet_teaser.png" /> </td>
<td valign="top">
<strong>Tubelet-Contrastive Self-Supervision for Video-Efficient Generalization</strong><br />
Fida Mohammad Thoker, <u>Hazel Doughty</u>, Cees Snoek<br />
International Conference on Computer Vision (<strong>ICCV</strong>), 2023. <br />
<strong> <a href="https://fmthoker.github.io/tubelet-contrastive-learning/">[Webpage]</a> <a href="https://arxiv.org/abs/2303.11003">[arXiv]</a> <a href="https://github.com/fmthoker/tubelet-contrast">[Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/dark_teaser.png" /> </td>
<td valign="top">
<strong>Day2Dark: Pseudo-Supervised Activity Recognition beyond Silent Daylight</strong><br />
Yunhua Zhang, <u>Hazel Doughty</u>, Cees Snoek<br />
ArXiv, 2022. <br />
<strong> <a href="https://arxiv.org/abs/2212.02053">[arXiv]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/BenchmarkTeaser.png" /> </td>
<td valign="top">
<strong>How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning?</strong><br />
Fida Mohammad Thoker, <u>Hazel Doughty</u>, Piyush Bagad, Cees Snoek<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2022. <br />
<strong><a href="https://bpiyush.github.io/SEVERE-website/">[Webpage]</a> <a href="https://arxiv.org/abs/2203.14221">[arXiv]</a> <a href="https://github.com/fmthoker/SEVERE-BENCHMARK">[Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="Papers/PseudoAdverbs/PseudoAdverbsTeaser.png" /> </td>
<td valign="top">
<strong>How Do You Do It? Fine-Grained Action Understanding with Pseudo-Adverbs</strong><br />
<u>Hazel Doughty</u> and Cees Snoek<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2022. <br />
<strong><a href="Papers/PseudoAdverbs/">[Webpage]</a> <a href="https://arxiv.org/abs/2203.12344">[arXiv]</a> <a href="https://github.com/hazeld/PseudoAdverbs/">[Dataset and Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/audioadaptive.png" /> </td>
<td valign="top">
<strong>Audio-Adaptive Activity Recognition Across Video Domains</strong><br />
Yunhua Zhang, <u>Hazel Doughty</u>, Ling Shao, Cees Snoek<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2022. <br />
<strong><a href="https://xiaobai1217.github.io/DomainAdaptation/">[Webpage]</a> <a href="https://arxiv.org/abs/2203.14240">[arXiv]</a> <a href="https://github.com/xiaobai1217/DomainAdaptation">[Code]</a> </strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/skeleton_contrast.png" /> </td>
<td valign="top">
<strong>Skeleton-Contrastive 3D Action Representation Learning</strong><br />
Fida Mohammad Thoker, <u>Hazel Doughty</u>, Cees Snoek<br />
ACM International Conference on Multimedia (<strong>ACMMM</strong>), 2021. <br />
<strong> <a href="https://arxiv.org/abs/2108.03656">[arXiv]</a> <a href="https://github.com/fmthoker/skeleton-contrast">[Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/SSR.png" /> </td>
<td valign="top">
<strong>On Semantic Similarity in Video Retrieval</strong><br />
Michael Wray, <u>Hazel Doughty</u> and Dima Damen<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2021. <br />
<strong><a href="https://mwray.github.io/SSVR/">[Webpage]</a> <a href="https://arxiv.org/abs/2103.10095">[arXiv]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/epic_100_teaser.PNG" /> </td>
<td valign="top">
<strong>Rescaling Egocentric Vision: EPIC-KITCHENS-100</strong><br />
Dima Damen, <u>Hazel Doughty</u>, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray<br />
International Journal of Computer Vision (<strong>IJCV</strong>), 2021. <br />
<strong><a href="https://epic-kitchens.github.io/2020-100">[Webpage]</a> <a href="https://arxiv.org/abs/2006.13256">[arXiv]</a> <a href="https://github.com/epic-kitchens/epic-kitchens-100-annotations">[Dataset and Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/action_mods_concept.png" /> </td>
<td valign="top">
<strong>Action Modifiers: Learning from Adverbs in Instructional Videos</strong><br />
<u>Hazel Doughty</u>, Ivan Laptev, Walterio Mayol-Cuevas and Dima Damen<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2020. <br />
<strong><a href="Papers/ActionModifiers/">[Webpage]</a> <a href="https://arxiv.org/abs/1912.06617">[arXiv]</a> <a href="https://github.com/hazeld/action-modifiers">[Dataset and Code]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/epic_logo.png" /> </td>
<td valign="top">
<strong>The EPIC-KITCHENS Dataset: Collection, Challenges and Baselines</strong><br />
Dima Damen, <u>Hazel Doughty</u>, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray<br />
Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2020. <br />
<strong><a href="https://arxiv.org/abs/2005.00343">[arXiv Preprint]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/pros_teaser.png" /> </td>
<td valign="top">
<strong>The Pros and Cons: Rank-Aware Temporal Attention for Skill Determination in Long Videos</strong><br />
<u>Hazel Doughty</u>, Walterio Mayol-Cuevas and Dima Damen<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2019. <br />
<strong><a href="Papers/TheProsandCons">[Webpage]</a> <a href="https://arxiv.org/abs/1812.05538">[arXiv]</a> <a href="https://github.com/hazeld/rank-aware-attention-network">[Dataset & Code]</strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/epic_55_teaser.PNG" /> </td>
<td valign="top">
<strong>Scaling Egocentric Vision: The EPIC-Kitchens Dataset</strong><br />
Dima Damen, <u>Hazel Doughty</u>, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2018. (<strong>Oral</strong>) <br />
<strong><a href="https://arxiv.org/abs/1804.02748">[arXiv]</a> <a href="https://epic-kitchens.github.io/2018">[Webpage & Dataset]</a></strong>
</td>
</tr>
<tr>
<td class="img"> <img src="img/whos_teaser.png" /> </td>
<td valign="top">
<strong>Who's Better? Who's Best? Pairwise Deep Ranking for Skill Determination</strong><br />
<u>Hazel Doughty</u>, Dima Damen and Walterio Mayol-Cuevas<br />
Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2018. (<strong>Spotlight</strong>) <br />
<strong><a href="https://arxiv.org/abs/1703.09913">[arXiv]</a> <a href="bib/whos_better.html">[Bibtex]</a> <a href="https://github.com/hazeld/EPIC-Skills2018">[Dataset]</a></strong>
</td>
</tr>
</tbody>
</table>
<br>
<h1 id="people" style="padding-top: 10px">People</h1>
<hr />
<br>
<ul>
<li>2024-present - <a href="https://lucstrater.github.io/">Luc Sträter</a>
<li>2024-present - Kaiting Liu
<li>2024-present - <a href="https://omar-emara.github.io/">Omar Emara</a> (PhD with Dima Damen at University of Bristol)
<li>2023-2024 - <a href="https://www.researchgate.net/profile/Aozhu-Chen">Aozhu Chen</a> (Visiting PhD student from Renmin University of China)
<li>2021-2024 - <a href="https://xiaobai1217.github.io/">Yunhua Zhang</a> (PhD with Cees Snoek)</li>
<li>2021-2023 - <a href="https://fmthoker.github.io/">Fida Mohammad Thoker</a> (PhD student) now postdoc at KAUST</li>
<li> 2020-present - Sarah Rastegar (PhD with Cees Snoek)</li>
<li>2021-2022 - <a href="https://bpiyush.github.io/">Piyush Bagad</a> (MS intern) now PhD student at University of Oxford</li>
</ul>
<br>
<h1 id="misc" style="padding-top: 10px">Academic Service</h1>
<hr />
<br>
Organizer: <a href="https://winvu.github.io/cvpr-24/">What is Next in Video Understanding? CVPR 2024 Workshop</a>, Workshop Chair for BMVC 2023, <a href="https://sites.google.com/view/nccv-2022">Netherlands Conference on Computer Vision 2022 and 2024</a>, <a href="http://preregister.science">NeurIPS'21 Workshop on Pre-registration in ML</a>, <a href="https://sites.google.com/view/srvu-iccv21-workshop">ICCV'21 Workshop on Structured Representations for Video Understanding</a>, <a href="https://sites.google.com/view/wicvworkshop-cvpr2020/">WiCV@CVPR2020</a>, <a href="https://eyewear-computing.org/EPIC_CVPR20/">EPIC@CVPR2020</a>, <a href="https://eyewear-computing.org/EPIC_ECCV20/">EPIC@ECCV2020</a><br>
Area Chair: CVPR 2025, NeurIPS 2024, ECCV 2024, ACCV 2024, AAAI 2024, ICCV 2023, WACV 2023
<br>
Associate Editor: CVIU since 2023
<br>
Reviewer: CVPR since 2020, ICCV since 2019, ECCV since 2020, TPAMI 2020-2022, IJCV 2021-2022, NeurIPS 2022, ACCV 2020, WACV 2020-2021, AAAI 2020
<br>
Outstanding Reviewer: CVPR 2024, NeurIPS 2023, CVPR 2023, ECCV 2022, ICCV 2021, CVPR 2021, ECCV 2020, ACCV 2020
<br>
<h1 id="teaching" style="padding-top: 20px">Teaching</h1>
<hr />
<br>
Computer Vision, 2024 (3rd Year Bachelors), Leiden University
<br>
Seminar in Advances of Deep Learning, 2024 (Masters), Leiden University.
<br>
Leren en Beslissen (Learning and Decision Making), 2022 (BSc AI, Y2), University of Amsterdam.
</div>
</div>
<footer class="site-footer">
</footer>
</body>
</html>