<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Rose E. Wang</title>
<meta name="author" content="Rose E. Wang">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/icon.png">
</head>
<body>
<table style="width:100%;max-width:900px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<p style="text-align:center">
<name>Rose E. Wang</name> <br>
rewang at cs dot stanford dot edu
</p>
<p>I am a Stanford Computer Science PhD student, advised by <a href="https://www.dorademszky.com/">Dora Demszky</a> and <a href="https://cs.stanford.edu/~diyiy/">Diyi Yang</a>.
I closely collaborate with <a href="https://ed.stanford.edu/faculty/sloeb">Susanna Loeb</a> from Stanford's School of Education.
I was the Head TA for Stanford's first class on NLP & Education (<a href="https://web.stanford.edu/class/cs293/">CS293</a>), interned at <a href="https://allenai.org">AI2</a> working on AI and Education, and founded Stanford's interdisciplinary <a href="https://sites.google.com/cs.stanford.edu/stanford-education-rg/">Education Reading Group</a>.
<!-- I also founded and organize Stanford's interdisciplinary <a href="https://sites.google.com/cs.stanford.edu/stanford-education-rg/">Education Reading Group</a>. -->
I am supported by the NSF GRFP, Gates Foundation, and National Student Support Accelerator.
</p>
<p><b>My research focuses on NLP and AI for Education.</b>
Language is central to educational interactions. My work wrestles with the question: How can we improve student learning & build equitable systems at scale through language?
To answer this question, I develop NLP systems that measure effective learning interactions, and I conduct interventions.
<!-- My work centers on developing NLP systems and conducting interventions that can discover, measure and establish effective teaching/learning interactions at scale. -->
</p>
<p> I did my undergraduate degree at MIT, working with <a href="http://web.mit.edu/cocosci/josh.html">Josh Tenenbaum</a>, <a href="https://www.mit.edu/~jhow/">Jonathan How</a>, <a href="https://research.google/teams/brain/">Google Brain</a> and
<a href="https://research.google/teams/brain/robotics/">Google Brain Robotics</a>.
In a prior lifetime, I was a passionate language learner, with certifications in German (<a href="https://en.wikipedia.org/wiki/Abitur">Abitur</a>), Chinese (<a href="https://en.wikipedia.org/wiki/Hanyu_Shuiping_Kaoshi">HSK Level 6</a>), French (<a href="https://en.wikipedia.org/wiki/Dipl%C3%B4me_d%27%C3%A9tudes_en_langue_fran%C3%A7aise">DELF B2</a>), and Spanish (<a href="https://www.dele.org/">DELE B2</a>), and received the <a href="https://www.certilingua.net/">European plurilingual excellence award</a>.</p>
<p style="text-align:center">
[
<a href="https://github.com/rosewang2008/">Github</a>  / 
<a href="https://twitter.com/rose_e_wang">Twitter</a>  / 
<a href="https://scholar.google.com/citations?user=V-dlwF4AAAAJ&hl=en">Google Scholar</a>  / 
<a href="https://rosewang2008.github.io/blog/">Blog</a>
]
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/rose.png"><img style="width:80%;max-width:80%" alt="profile photo" src="images/rose.png" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<h2>Recent News</h2>
<ul>
<li><b>September 2024:</b>
<ul>
<li>Invited talk at Bocconi University.</li>
<li>Presenting at SREE 2024 on <a href="https://osf.io/8d6ha/">Tutor CoPilot</a>, a randomized controlled trial on real-time decision aids for improving virtual math tutoring.</li>
<li>Presenting at the Becker Friedman Institute's AI for Social Science conference on <a href="https://osf.io/8d6ha/">Tutor CoPilot</a>, a randomized controlled trial on real-time decision aids for improving virtual math tutoring.</li>
</ul>
</li>
<li><b>July 2024:</b>
<ul>
<li>🏆 Winner of the Tools Competition with coteach.ai! (<a href="https://tools-competition.org/23-24-accelerating-and-assessing-learning-winners/">link</a>)</li>
<li>Invited talk at the Learning Analytics Learning Network (LALN).</li>
<li>Ambassador talk at AIED 2024 🇧🇷 on <a href="https://aclanthology.org/2023.bea-1.53.pdf">"Is ChatGPT a Good Teacher Coach?"</a>. <a href="https://www.youtube.com/watch?v=bGmauqqLqo4&t=1313s"><span class="highlight"><b>Video Link</b></span></a>.</li>
<li>I'm organizing the <a href="https://sites.google.com/view/llmworkshopedm/home">LLM for EdTech workshop</a> at Education Data Mining (EDM) at Georgia Tech. Come to the workshop!</li>
<li>Presenting <a href="https://github.com/stanfordnlp/edu-convokit">Edu-ConvoKit</a> at EDM 2024's <a href="https://sites.google.com/view/llmworkshopedm/home">LLM for EdTech workshop</a>.</li>
<li>Presenting <a href="https://github.com/stanfordnlp/edu-convokit">Edu-ConvoKit</a> to the National Tutoring Observatory.</li>
<li>I'll be at Learning at Scale (L@S). Let's chat! Collaborators will be presenting <a href="https://dl.acm.org/doi/10.1145/3657604.3664698">ScaffGen: Scaling High-Leverage Curriculum Scaffolding in Middle-School Mathematics</a>!</li>
</ul>
</li>
<li><b>June 2024:</b>
<ul>
<li>Presenting at NAACL 2024 🇲🇽 on 2 works: <a href="https://arxiv.org/pdf/2310.10648.pdf">Bridging the Novice-Expert Gap via Models of Decision-Making</a> and <a href="https://github.com/stanfordnlp/edu-convokit">Edu-ConvoKit</a>.</li>
</ul>
</li>
<li><b>May 2024:</b>
<ul>
<li>Invited talk at CU Boulder.</li>
<li>Invited talk at the National Student Support Accelerator.</li>
<li>Invited talk at the National Council on Measurement in Education.</li>
</ul>
</li>
<li><b>April 2024:</b>
<ul>
<li>Invited talk at UC Irvine.</li>
<!-- <li>Giving the ambassador talk on <a href="https://aclanthology.org/2023.bea-1.53.pdf">"Is ChatGPT a Good Teacher Coach?"</a> at AIED 2024 in Recife, Brazil 🇧🇷!</li> -->
<li>Organizing the <a href="https://sites.google.com/view/llmworkshopedm/home">"Leveraging Large Language Models For Next Generation Education Technologies"</a> workshop at EDM 2024. Submit your work!</li>
</ul>
</li>
<li><b>March 2024:</b>
<ul>
<li>My work on <a href="https://arxiv.org/pdf/2306.03090.pdf">providing teachers LLM-generated feedback</a> was selected as BEA's 2023 Ambassador paper.</li>
<!-- <li> Two works accepted to NAACL 2024 🇲🇽: <a href="https://arxiv.org/pdf/2310.10648.pdf">"Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes"</a> and <a href="https://github.com/stanfordnlp/edu-convokit">"Edu-ConvoKit: An Open-Source Library for Education Conversation Data"</a>!</li> -->
<li>Presenting at EACL 2024 in Malta 🇲🇹 on <a href="https://arxiv.org/pdf/2403.03956.pdf">"Backtracing: Retrieving the Cause of the Query"</a>.</li>
</ul>
</li>
<!-- My work on <a href="https://arxiv.org/pdf/2306.03090.pdf">providing teachers LLM-generated feedback</a> was selected as BEA's 2023 Ambassador paper. -->
<!-- <li> <b>March 2024</b>: Two works accepted to NAACL 2024 🇲🇽: <a href="https://arxiv.org/pdf/2310.10648.pdf">"Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes"</a> and <a href="https://github.com/stanfordnlp/edu-convokit">"Edu-ConvoKit: An Open-Source Library for Education Conversation Data"</a>!</li> -->
<!-- <li><b>March 2024</b>: I'll be presenting <a href="https://arxiv.org/pdf/2403.03956.pdf">"Backtracing: Retrieving the Cause of the Query"</a> at EACL 2024 in Malta 🇲🇹 !</li> -->
<li><b>February 2024</b>: Invited talks at <a href="https://datasciencelab.ise.bgu.ac.il/">Ben-Gurion University</a> and the University of Edinburgh.</li>
<li><b>January 2024</b>:
<ul>
<li>Forbes featured our work on <a href="https://www.forbes.com/sites/ulrichboser/2024/01/18/now-that-chatgpts-been-introduced-its-time-to-fine-tune-it/?sh=702eb0241b69">NLP in Education</a>!</li>
<li>Invited talk at <a href="https://www.meetup.com/data-science-in-education/events/296537023/">Eedi's Data Science in Education.</a></li>
<li>Invited talk at Google DeepMind.</li>
<li>Invited talk at <a href="https://hai.stanford.edu/events/aieducation-summit-advancing-human-learning-ai-technologies">Stanford's AI+Education Summit</a>.</li>
</ul>
</li>
<!-- <li><b>January 2024</b>: Invited talk at <a href="https://www.meetup.com/data-science-in-education/events/296537023/">Eedi's Data Science in Education</a>, Google DeepMind, and <a href="https://hai.stanford.edu/events/aieducation-summit-advancing-human-learning-ai-technologies">Stanford's AI+Education Summit</a>.</li> -->
<!-- <li>📰 <b>October 2023</b>: Stanford HAI featured our work on <a href="https://hai.stanford.edu/news/designing-natural-language-processing-tools-teachers">NLP in Education</a>! -->
<!-- <li><b>October 2023</b>: Accepted to <a href="https://datascience.uchicago.edu/research/postdoctoral-programs/rising-stars/">Rising Stars in Data Science</a> at the University of Chicago!</li> -->
<!-- <li>📚 <b>September 2023</b>: Excited to be the Head TA for Stanford's new <a href="https://web.stanford.edu/class/cs293/">class on NLP and Education (CS293/EDUC473)</a>! Join this class if you're a Stanford student excited about NLP and Education 🙏</li> -->
<!-- <li><b>August 2023</b>: Invited talk at <a href="https://www.widsconference.org/workshops.html">Women in Data Science</a> on my NLP & education work.</li> -->
<!-- <li> <b>June 2023</b>: Invited talk at Google Deepmind on my NLP & education work.</li> -->
<!-- <li><b>June 2023</b>: Our work on <a href="https://arxiv.org/pdf/2306.03090.pdf">automated teacher coaching with ChatGPT</a> was featured on <a href="https://acceleratelearning.stanford.edu/#featured-content">Stanford's Accelerator for Learning newsletter!</a></li> -->
<!-- <li><b>May 2023</b>: Two papers accepted to the <a href="https://sig-edu.org/bea/2023">Proceedings of Innovative Use of NLP for Building Educational Applications (BEA)</a>. See you in Toronto! </li> -->
<!-- <li>🎤 <b>May 2023</b>: Talked at the <a href="https://studentsupportaccelerator.org/2023-nssa-conference">National Student Support Accelerator</a> on my NLP & education work.</li>
<li>🏆 <b>April 2023</b>: <a href="https://acceleratelearning.stanford.edu/story/generative-ai-seed-grants/">Won a $70k seed grant by the Stanford Accelerator for Learning and HAI to help teachers give more effective feedback, in partnership with Microsoft EDU. </a> </li> -->
</ul>
<!--<h2>Research 🤖</h2>-->
<h2>Research</h2>
<p>Representative papers are <span class="highlight">highlighted</span>.</p>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/tutor_copilot.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://osf.io/8d6ha">
<papertitle>Tutor CoPilot: An Intervention on Real-Time Decision Aids for Improving Virtual Math Tutoring</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
Ana Ribeiro, Carly Robinson, Dorottya (Dora) Demszky, Susanna Loeb.
<br>
<em>SREE 2024 Symposium on Artificial Intelligence and the Future of Educational Measurement and Evaluation.</em>
<br>
<em>SREE 2024 Invited Symposium on Exploring the AI Frontier: Innovations in Social Science Research.</em>
<br>
<em>University of Chicago, Becker Friedman Institute 2024 AI for Social Science Conference.</em>
<br>
<em>AEA 2024 Conference.</em>
<br>
[
<a href="https://osf.io/8d6ha">OSF Pre-Registration</a>
]
<p>
Tutor CoPilot is an expert-guided large language model (LLM) assistant that provides real-time suggestions to novice math tutors, enhancing their interactions with students.
We test an intervention with Tutor CoPilot for improving tutor instruction.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/sos.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="">
<papertitle>How Tutors Share or Split Attention Across Students in Small-Group Tutoring</papertitle>
</a>
<br>
Qingyang Zhang*,
<strong>Rose E. Wang*</strong>,
Ana Ribeiro, Susanna Loeb, Dorottya (Dora) Demszky.
<br>
* = equal contribution
<br>
<em>SREE 2024 Poster.</em>
<br>
<p>
This study measures whom the tutor directs language to, and how these measures relate to student performance in the two-on-one tutoring setting.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/scaffgen.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://dl.acm.org/doi/10.1145/3657604.3664698">
<papertitle>ScaffGen: Scaling High-Leverage Curriculum Scaffolding in Middle-School Mathematics </papertitle>
</a>
<br>
Rizwaan Malik, Dorna Abdi,
<strong>Rose E. Wang</strong>,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
<em>Winner of 2024 Tools Competition 🏆 (<a href="https://tools-competition.org/23-24-accelerating-and-assessing-learning-winners/">link</a>)</em>
<br>
<em>L@S 2024.</em>
<br>
[
<a href="https://dl.acm.org/doi/10.1145/3657604.3664698">Paper</a>
]
<p>
This paper examines whether and how Large Language Models (LLMs) can be leveraged to enhance K-12 math education by facilitating the creation of high-quality curriculum scaffolds that reflect expert teachers' strategies.
We build on the Cognitive Task Analysis methodology developed in my prior work, <a href="https://arxiv.org/pdf/2310.10648.pdf">Bridge</a>, to work with teachers and understand how they think about scaffolding curriculum materials for middle-school math students.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/bridge3.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2310.10648.pdf">
<papertitle>🌁 Bridging the Novice-Expert Gap via Models of Decision-Making </papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
Qingyang Zhang, Carly Robinson, Susanna Loeb,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
<em>NAACL 2024.</em>
<br>
<b style="color:red">Featured in <a href="https://hai.stanford.edu/news/designing-natural-language-processing-tools-teachers">Stanford HAI</a> and <a href="https://danmeyer.substack.com/p/one-way-teachers-and-ai-could-help">Dan Meyer's blog</a></b>
<br>
[
<a href="https://arxiv.org/pdf/2310.10648.pdf">Paper</a>,
<a href="https://github.com/rosewang2008/bridge">Code</a>,
<a href="https://youtu.be/bX5monUe93M?si=p-LCpkSUp8yiCEVT">Video</a>,
<a href="assets/NAACL_Poster_2024_Bridge-2.pdf">Poster</a>
]
<p>
We explore the potential for large language models (LLMs) to assist math tutors in remediating student mistakes.
We present ReMath, a benchmark co-developed with experienced math teachers that deconstructs their thought process for remediation. Our work sheds light on the potential and limitations of using current LLMs to provide high-quality learning experiences for both tutors and students at scale.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/edutoolkit.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://github.com/stanfordnlp/edu-convokit">
<papertitle>🛠️ Edu-ConvoKit: An Open-Source Library for Education Conversation Data</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
<em>NAACL 2024.</em>
<br>
[
<a href="https://arxiv.org/pdf/2402.05111.pdf">Paper</a>,
<a href="https://github.com/stanfordnlp/edu-convokit">Code</a>,
<a href="https://edu-convokit.readthedocs.io/en/latest/">Documentation</a>,
<a href="https://youtu.be/zdcI839vAko?si=MhfP0HznBh6A1FTi">Video</a>,
<a href="assets/NAACL_Poster_2024_Edu_ConvoKit-2.pdf">Poster</a>
]
<p>
Edu-ConvoKit is an open-source library designed to handle preprocessing, annotation and analysis of conversation data in education.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/backtracing.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2403.03956.pdf">
<papertitle>Backtracing: Retrieving the Cause of the Query</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://profiles.stanford.edu/pawan-wirawarn">Pawan Wirawarn</a>,
<a href="https://omarkhattab.com/">Omar Khattab</a>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
<em>EACL 2024, Long Paper Findings.</em>
<br>
<b style="color:red">Featured in <a href="https://hai.stanford.edu/news/designing-natural-language-processing-tools-teachers">Stanford HAI</a></b>
<br>
[
<a href="https://arxiv.org/pdf/2403.03956.pdf">Paper</a>,
<a href="https://github.com/rosewang2008/backtracing">Code</a>,
<a href="https://www.youtube.com/watch?v=hjkFp4q9urA&ab_channel=RoseWang">Video</a>,
<a href="assets/backtracing_poster_eacl2024.pdf">Poster</a>
]
<p>
Many online content portals allow users to ask questions to supplement their understanding (e.g., of lectures or news articles). While information retrieval (IR) systems may provide answers for such user queries, they do not directly assist content creators in identifying the segments that caused users to ask those questions, which can be useful for purposes such as improving the content. We introduce the task of backtracing, in which systems retrieve the text segment that most likely provoked a user query.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/lak_talktime.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://spaces-cdn.owlstown.com/blobs/ttf4t8cpptbt692mxzao4p3d50af">
<papertitle>Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform</papertitle>
</a>
<br>
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>,
<strong>Rose E. Wang</strong>,
Sean Geraghty, Carol Yu
<br>
In the <em>14th Learning Analytics and Knowledge Conference (LAK '24)</em>.
<br>
[
<a href="https://spaces-cdn.owlstown.com/blobs/ttf4t8cpptbt692mxzao4p3d50af">Paper</a>
]
<p>
Providing ample opportunities for students to express their thinking is pivotal to their learning of mathematical concepts. We introduce the Talk Meter, which provides in-the-moment automated feedback on student-teacher talk ratios. We conduct a randomized controlled trial on a virtual math tutoring platform (n=742 tutors) to evaluate the effectiveness of the Talk Meter at increasing student talk. In one treatment arm, we show the Talk Meter only to the tutor, while in the other arm we show it to both the student and the tutor. We find that the Talk Meter increases student talk ratios in both treatment conditions by 13-14%; this trend is driven by the tutor talking less in the tutor-facing condition, whereas in the student-facing condition it is driven by the student expressing significantly more mathematical thinking. These results demonstrate the promise of in-the-moment joint talk time feedback to both teachers and students as a low-cost, engaging, and scalable way to increase students’ mathematical reasoning.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/zeroshot.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2306.03090.pdf">
<papertitle>Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
In the <em>Proceedings of Innovative Use of NLP for Building Educational Applications (2023)</em>.
<br>
<b style="color:red">🏆 BEA 2023's Ambassador Paper</b>
<br>
<b style="color:red">AIED 2024 IAALDE Talk (<a href="https://www.youtube.com/watch?v=bGmauqqLqo4&t=1313s">Video Link</a>)</b>.
<br>
<b style="color:red">Featured in <a href="https://www.forbes.com/sites/ulrichboser/2024/01/18/now-that-chatgpts-been-introduced-its-time-to-fine-tune-it/?sh=702eb0241b69">Forbes</a> and </b>
<b style="color:red"><a href="https://hai.stanford.edu/news/designing-natural-language-processing-tools-teachers">Stanford HAI</a></b>
<br>
[
<a href="https://rosewang2008.github.io/zero-shot-teacher-feedback/">Project page</a>,
<a href="https://www.youtube.com/watch?v=M729eZ8pFOU&t=570s&ab_channel=RoseWang">Video</a>,
<a href="https://arxiv.org/pdf/2306.03090.pdf">Paper</a>,
<a href="https://github.com/rosewang2008/zero-shot-teacher-feedback">Code</a>
]
<p>We explore whether generative AI could become a cost-effective complement to expert feedback by serving as an automated teacher coach. We propose three teacher coaching tasks for generative AI: (A) scoring transcript segments based on classroom observation instruments, (B) identifying highlights and missed opportunities for good instructional strategies, and (C) providing actionable suggestions for eliciting more student reasoning.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/growth_mindset.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2310.10637.pdf">
<papertitle>“Mistakes Help Us Grow”: Facilitating and Evaluating Growth Mindset Supportive Language in Classrooms</papertitle>
</a>
<br>
Kunal Handa, Margaret Clapper, Jessica Boyle,
<strong>Rose E. Wang</strong>,
Diyi Yang, David S Yeager,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
In the <em>Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)</em>.
<br>
<b style="color:red">Featured in <a href="https://hai.stanford.edu/news/designing-natural-language-processing-tools-teachers">Stanford HAI</a></b>
<br>
[
<a href="https://arxiv.org/pdf/2310.10637.pdf">Paper</a>
]
<p>Teachers’ growth mindset supportive language (GMSL)—rhetoric emphasizing that one's skills can be improved over time—has been shown to significantly reduce disparities in academic achievement and enhance students' learning outcomes. Although teachers espouse growth mindset principles, most find it difficult to adopt GMSL in their practice due to the lack of effective coaching in this area. We explore whether large language models (LLMs) can provide automated, personalized coaching to support teachers' use of GMSL. We conduct a large-scale evaluation involving 174 teachers and 1,006 students, finding that both teachers and students perceive GMSL-trained teacher and model reframings as more effective in fostering a growth mindset and promoting challenge-seeking behavior, among other benefits. We also find that model-generated reframings outperform those from the GMSL-trained teachers. These results show promise for harnessing LLMs to provide automated GMSL feedback for teachers and, more broadly, LLMs’ potential to support students’ learning in the classroom.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()" bgcolor="#ffffd0">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/sight.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2306.09343.pdf">
<papertitle>SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>*,
<a href="https://profiles.stanford.edu/pawan-wirawarn">Pawan Wirawarn</a>*,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>,
<a href="https://www.dorademszky.com/">Dorottya (Dora) Demszky</a>
<br>
In the <em>Proceedings of Innovative Use of NLP for Building Educational Applications (2023)</em>.
<br>
[
<a href="https://rosewang2008.github.io/sight/">Project page</a>,
<a href="https://www.youtube.com/watch?v=Yt-2jLJLKjI&ab_channel=RoseWang">Video</a>,
<a href="https://arxiv.org/pdf/2306.09343.pdf">Paper</a>,
<a href="https://github.com/rosewang2008/sight">Code</a>
]
<p>We build SIGHT, a large dataset of 288 math lecture transcripts and 15,784 comments collected from the Massachusetts Institute of Technology OpenCourseWare (MIT OCW) YouTube channel. We additionally develop a rubric for categorizing student feedback types, and scale annotation to help teachers better understand the needs of their students.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/solving.png" alt="" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2304.09102">
<papertitle>Solving math word problems by combining language models with symbolic solvers</papertitle>
</a>
<br>
<a href="https://joyheyueya.github.io/">Joy He-Yueya</a>,
<a href="https://gpoesia.com/">Gabriel Poesia</a>,
<strong>Rose E. Wang</strong>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>
<br>
<em>arXiv (2023)</em>.
<br>
[
<a href="https://arxiv.org/abs/2304.09102">Paper</a>
]
<p>We propose an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external symbolic solver that can solve the equations.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/halie.png" alt="hpp" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2212.09746.pdf">
<papertitle>Evaluating Human-Language Model Interaction</papertitle>
</a>
<br>
<a href="https://minalee.info/">Mina Lee</a>,
<a href="https://web.stanford.edu/~meghas/">Megha Srivastava</a>,
<a href="https://www.linkedin.com/in/ameliahardy?original_referer=https%3A%2F%2Fwww.google.com%2F">Amelia Hardy</a>,
<a href="https://johnthickstun.com/">John Thickstun</a>,
<a href="https://esdurmus.github.io/">Esin Durmus</a>,
<a href="https://ashwinparanjape.github.io/">Ashwin Paranjape</a>,
<a href="https://uk.linkedin.com/in/ines-gerard-ursin">Ines Gerard-Ursin</a>,
<a href="https://xiangli1999.github.io/">Xiang Lisa Li</a>,
<a href="https://www.cs.columbia.edu/~faisal/">Faisal Ladhak</a>,
<a href="https://friedeggs.github.io/">Frieda Rong</a>,
<strong>Rose E. Wang</strong>,
<a href="https://stanford.edu/~mnkwon/">Minae Kwon</a>,
<a href="http://www.joonsungpark.com/">Joon Sung Park</a>,
<a href="http://hanchengcao.me/">Hancheng Cao</a>,
<a href="https://profiles.stanford.edu/tonyhlee">Tony Lee</a>,
<a href="https://rishibommasani.github.io/">Rishi Bommasani</a>,
<a href="https://profiles.stanford.edu/michael-bernstein">Michael Bernstein</a>,
<a href="https://cs.stanford.edu/~pliang/">Percy Liang</a>
<br>
<em>In submission (2023)</em>.
<br>
[
<a href="https://arxiv.org/pdf/2212.09746.pdf">Paper</a>
]
<p>We develop Human-AI Language-based Interaction Evaluation (HALIE) that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/zone.png" alt="zone" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://openreview.net/pdf?id=SgBHmHMctfd">
<papertitle>In the ZONE: Measuring difficulty and progression in curriculum generation</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://cs.stanford.edu/~muj/">Jesse Mu</a>,
<a href="https://dilipa.github.io/">Dilip Arumugam</a>,
<a href="https://natashajaques.ai/">Natasha Jaques</a>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>
<br>
<em>NeurIPS 2022 Deep Reinforcement Learning Workshop</em>.
<br>
[
<a href="https://openreview.net/pdf?id=SgBHmHMctfd">Paper</a>,
<a href="https://www.youtube.com/watch?v=6PAihNlFOzw">Invited Talk at UC Berkeley's Multi-Agent Learning Seminar</a>
]
<br>
<p>
A common strategy in curriculum generation for reinforcement learning is to train a teacher network to generate tasks that enable student learning. But what kinds of tasks enable this? One answer is tasks in a student's zone of proximal development (ZPD), a concept from developmental psychology: tasks that are neither too easy nor too hard for the student. Though intuitive, ZPD is not well understood computationally. We propose ZONE, a novel computational framework that operationalizes ZPD. It formalizes ZPD in the language of Bayesian probability theory, revealing that tasks should be selected by difficulty (the student's probability of task success) and learning progression (the degree of change in the student's model parameters).
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/elign.png" alt="elign" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2210.04365.pdf">
<papertitle>ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward</papertitle>
</a>
<br>
<a href="https://zixianma.github.io/">Zixian Ma</a>,
<strong>Rose E. Wang</strong>,
<a href="https://profiles.stanford.edu/fei-fei-li">Li Fei-Fei</a>,
<a href="https://profiles.stanford.edu/michael-bernstein">Michael Bernstein</a>,
<a href="https://ranjaykrishna.com/index.html">Ranjay Krishna</a>
<br>
<em>36th Conference on Neural Information Processing Systems (NeurIPS 2022)</em>.
<br>
[
<a href="https://arxiv.org/pdf/2210.04365.pdf">Paper</a>,
<a href="https://github.com/StanfordVL/alignment">Code</a>
]
<br>
<p>
Modern multi-agent reinforcement learning frameworks rely on centralized training and reward shaping to perform well. However, centralized training and dense rewards are not readily available in the real world. Current multi-agent algorithms struggle to learn in the alternative setup of decentralized training or sparse rewards. To address these issues, we propose ELIGN (expectation alignment), a self-supervised intrinsic reward inspired by the self-organization principle in zoology.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/confidence.png" alt="confidence" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="assets/final_curis_poster_pawan_wirawarn.pdf">
<papertitle>Speaking with Confidence: Investigating the effect of uncertainty in pragmatic language learning</papertitle>
</a>
<br>
<a href="https://profiles.stanford.edu/pawan-wirawarn">Pawan Wirawarn</a>,
<strong>Rose E. Wang</strong>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>
<br>
<em>CURIS 2022</em>.
<br>
[
<a href="assets/final_curis_poster_pawan_wirawarn.pdf">Poster</a>
]
<br>
<p>
Our work explores whether pragmatic language learning is better with a well-calibrated domain-agnostic listener.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/clap.png" alt="clap" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://openreview.net/pdf?id=OQP7leJkAu">
<papertitle>CLaP: Conditional Latent Planners for Offline Reinforcement Learning</papertitle>
</a>
<br>
<a href="https://www.linkedin.com/in/harry-shin-34743216a">Harry Donghyeop Shin</a>,
<strong>Rose E. Wang</strong>
<br>
<em> NeurIPS 2022 Workshop on Foundation Models for Decision Making</em>.
<br>
[
<a href="https://openreview.net/pdf?id=OQP7leJkAu">Paper</a>,
Code (coming soon)
]
<br>
<p>
Recent work has formulated offline reinforcement learning (RL) as a sequence
modeling problem, benefiting from the simplicity and scalability of the Transformer
architecture. However, sequence models struggle to model trajectories that are
long-horizon or involve complicated environment dynamics. We propose CLaP
(Conditional Latent Planners) to learn a simple goal-conditioned latent space
from offline agent behavior, and incrementally decode good actions from a latent
plan.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/kts.png" alt="kts" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://openreview.net/pdf?id=rpGGNrMJpW9">
<papertitle>Know Thy Student: Interactive Learning with Gaussian Processes</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://www.mikehwu.com/">Mike Wu</a>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>
<br>
<em>ICLR 2022 Workshop on From Cells to Societies: Collective Learning across Scales</em>.
<br>
[
<a href="https://openreview.net/pdf?id=rpGGNrMJpW9">Paper</a>
]
<br>
<p>
Learning often involves interaction between multiple agents.
Human teacher-student settings best illustrate how interaction enables efficient knowledge transfer: the teacher constructs a curriculum based on their students' abilities.
Prior work in machine teaching studies how a teacher should construct optimal teaching datasets, assuming the teacher knows everything about the student.
In the real world, however, the teacher has incomplete information and must probe before teaching.
We propose a simple probing algorithm that uses Gaussian processes to infer student-related information before constructing a teaching dataset.
</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/lm_via_sp.png" alt="lm_via_sp" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://openreview.net/forum?id=pMQwKL1yctf">
<papertitle>Language modeling via stochastic processes</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://esdurmus.github.io/">Esin Durmus</a>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>,
<a href="https://thashim.github.io/">Tatsunori Hashimoto</a>
<br>
<em>International Conference for Learning Representations (ICLR) 2022</em>.
<br>
<b style="color:red">Oral Presentation (1.6% oral acceptance rate)</b>
<br>
[
<a href="https://openreview.net/forum?id=pMQwKL1yctf">Paper</a>,
<a href="https://www.youtube.com/watch?v=AwnoASlxeIs&t=13s&ab_channel=RoseWang">Video</a>,
<a href="https://github.com/rosewang2008/language_modeling_via_stochastic_processes">Code</a> ]
<br>
<p>Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent stochastic process. TC does this by learning a representation which maps the dynamics of how text changes in a document to the dynamics of a stochastic process of interest. Using this representation, the language model can generate text by first implicitly generating a document plan via a stochastic process, and then generating text that is consistent with this latent plan.</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/cyl.png" alt="cyl" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2110.05422.pdf">
<papertitle>Calibrate your listeners! Robust communication-based training for pragmatic speakers</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
Julia White,
<a href="https://cs.stanford.edu/~muj/">Jesse Mu</a>,
<a href="https://cocolab.stanford.edu/ndg">Noah Goodman</a>
<br>
<em>Findings of EMNLP 2021</em>.
<br>
[
<a href="https://arxiv.org/pdf/2110.05422.pdf">Paper</a>,
<a href="https://github.com/rosewang2008/calibrate_your_listeners">Video</a>,
<a href="https://github.com/rosewang2008/calibrate_your_listeners">Code</a> ]
<p> To be good conversational partners, natural language processing (NLP) systems should be trained to produce contextually useful utterances. Prior work has investigated training NLP systems with communication-based objectives, where a neural listener stands in as a communication partner. However, these systems commonly suffer from semantic drift where the learned language diverges radically from natural language. We propose a method that uses a population of neural listeners to regularize speaker training.</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/fm.png" alt="fm" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/pdf/2108.07258.pdf">
<papertitle>On the opportunities and risks of foundation models</papertitle>
</a>
<br>
Rishi Bommasani et al. (including
<strong>Rose E. Wang</strong>)
<br>
<em>August 2021</em>.
<br>
<p>This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/adhoc.png" alt="adhoc" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2003.11778">
<papertitle>Too many cooks: Bayesian inference for coordinating multi-agent collaboration</papertitle>
</a>
<br>
<strong>Rose E. Wang*</strong>,
Sarah Wu*,
<a href="https://sociology.uchicago.edu/directory/james-evans">James A. Evans</a>,
<a href="http://web.mit.edu/cocosci/josh.html">Joshua B. Tenenbaum</a>,
<a href="https://www.eecs.harvard.edu/~parkes/">David C. Parkes</a>,
<a href="http://www.mit.edu/~maxkw/">Max Kleiman-Weiner</a>
<br>
<em>Journal of the Cognitive Science Society, April 2021</em>.
<br>
<em>NeurIPS 2020 Cooperative AI workshop</em>.
<br>
<b style="color:red">🏆 Best paper award, NeurIPS 2020 Cooperative AI Workshop</b>
<br>
[
<a href="https://arxiv.org/abs/2003.11778">Paper</a>,
<a href="https://www.youtube.com/watch?v=Fd4RcVaNthY">Video</a>,
<a href="https://github.com/rosewang2008/gym-cooking">Code</a> ]
<p>We develop Bayesian Delegation, a decentralized multi-agent learning mechanism that enables agents to rapidly infer the sub-tasks of others by inverse planning. We demonstrate that our model is a capable ad-hoc collaborator, scales with team size and makes inferences about intent similar to human observers.</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/hpp.png" alt="hpp" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2003.06906">
<papertitle>Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="https://research.google/people/JChaseKew/">J. Chase Kew</a>,
<a href="https://scholar.google.com/citations?user=vOLXDDAAAAAJ&hl=en">Dennis Lee</a>,
<a href="https://deepai.org/profile/tsang-wei-edward-lee">Tsang-Wei Edward Lee</a>,
<a href="https://research.google/people/TingnanZhang/">Tingnan Zhang</a>,
<a href="http://brianichter.com/">Brian Ichter</a>,
<a href="http://www.jie-tan.net/">Jie Tan</a>,
<a href="https://www.afaust.info/">Aleksandra Faust</a>
<br>
<em>Conference on Robot Learning (CoRL) 2020</em>.<br>
<em>Mentioned in <a href="https://ai.googleblog.com/2021/01/google-research-looking-back-at-2020.html">Google AI Year in Review, 2020</a>.</em><br>
[
<a href="https://arxiv.org/abs/2003.06906">Paper</a>,
<a href="https://www.youtube.com/watch?v=-LqgfksqNH8&feature=youtu.be">Video</a>,
<a href="https://sites.google.com/view/multiagent-hpp">Project Page</a>,
<a href="https://ai.googleblog.com/2021/04/model-based-rl-for-decentralized-multi.html">Blog post</a>
]
<br>
<p>In this work, we present hierarchical predictive planning (HPP) for decentralized multiagent navigation tasks. Our approach is trained in simulation and works in unseen settings both in simulation and in the real world (zero-shot transfer)!</p>
</td>
</tr>
<tr onmouseout="nightsight_stop()" onmouseover="nightsight_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/overcooked.png" alt="overcooked" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2003.11778">
<papertitle>Too many cooks: Coordinating multi-agent collaboration through inverse planning</papertitle>
</a>
<br>
<strong>Rose E. Wang*</strong>,
Sarah Wu*,
<a href="https://sociology.uchicago.edu/directory/james-evans">James A. Evans</a>,
<a href="http://web.mit.edu/cocosci/josh.html">Joshua B. Tenenbaum</a>,
<a href="https://www.eecs.harvard.edu/~parkes/">David C. Parkes</a>,
<a href="http://www.mit.edu/~maxkw/">Max Kleiman-Weiner</a>
<br>
<em>Human-Like Machine Intelligence (book published with Oxford University Press)</em><br>
<em>Annual Meeting of the Cognitive Science Society (CogSci) 2020</em><br>
<em>International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS) 2020</em><br>
<em>Invited paper to OptLearnMAS Workshop at AAMAS 2020</em><br>
<b style="color:red">🏆 Best paper award, CogSci 2020</b>
<br>
[
<a href="https://arxiv.org/abs/2003.11778">Paper</a>,
<a href="https://www.youtube.com/watch?v=Fd4RcVaNthY">Video</a>,
<a href="https://github.com/rosewang2008/gym-cooking">Code</a>
]
<p>We develop Bayesian Delegation, a decentralized multi-agent learning mechanism that enables agents to rapidly infer the sub-tasks of others by inverse planning.</p>
</td>
</tr>
<tr onmouseout="font_stop()" onmouseover="font_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/rmaddpg.png" alt="rmaddpg" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/2002.06684">
<papertitle>R-MADDPG for Partially Observable Environments and Limited Communication</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>,
<a href="http://mfe.scripts.mit.edu/portfolio/">Michael Everett</a>,
<a href="http://www.mit.edu/people/jhow/">Jonathan P. How</a>
<br>
<em>International Conference on Machine Learning (ICML) 2019, Reinforcement Learning for Real Life Workshop</em>
<br>
[
<a href="https://arxiv.org/abs/2002.06684">Paper</a>,
<a href="https://github.com/rosewang2008/rmaddpg">Code</a>,
<a href="https://sites.google.com/view/rmaddpg/home?authuser=0">Project Page</a>
]
<br>
<p>This paper introduces a deep recurrent multiagent actor-critic framework (R-MADDPG) for handling multiagent coordination under partially observable settings and limited communication.</p>
</td>
</tr>
<tr onmouseout="font_stop()" onmouseover="font_start()">
<td style="padding:20px;width:25%;vertical-align:middle">
<img src="images/rc66.jpg" alt="rc66" style="border-style: none" width="200">
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://www.youtube.com/watch?v=dkhRkWSc8Xw&feature=youtu.be">
<papertitle>DRIV3N: Race to Autonomy</papertitle>
</a>
<br>
<strong>Rose E. Wang</strong>, Austin Floyd, Marwa Abdulhai, Luxas Novak, David Klee, Sean Patrick Kelley
<br>
<em>Robotics: Science and Systems I</em>, 2017.
<br>
[
<a href="https://www.youtube.com/watch?v=dkhRkWSc8Xw&feature=youtu.be">Video</a>,
<a href="https://rosewang2008.github.io/rss-team3/">Project Page</a>
]
<br>
<p>A whirlwind of an experience where my team and I developed a <b>fast</b>, <i>autonomous</i>, ~maze-solving~ racecar equipped with no machine learning technology and a decorative safety controller.</p>
</td>
</tr>
</tbody></table>
</td>
</tr>
</table>
Template from <a href="https://jonbarron.info/">this website</a>.
</body>
</html>