Commit: adding projects
liampaull committed Sep 24, 2024
1 parent af7a273 commit abdf578
Showing 12 changed files with 160 additions and 6 deletions.
53 changes: 53 additions & 0 deletions _data/people.yml
Original file line number Diff line number Diff line change
@@ -665,6 +665,59 @@ james:
webpage: https://www.mcgill.ca/mecheng/james-forbes


yxiu:
display_name: Yuliang Xiu
role: collab
webpage: https://xiuyuliang.cn/


scholkopf:
display_name: Bernhard Schölkopf
role: collab
webpage: https://is.mpg.de/~bs

corban:
display_name: Corban Rivera
role: collab
webpage: https://www.jhuapl.edu/work/our-organization/research-and-exploratory-development/red-staff-directory/corban-rivera

william_paul:
display_name: William Paul
role: collab
webpage: https://scholar.google.com/citations?user=92bmh84AAAAJ


rama_chellappa:
display_name: Rama Chellappa
role: collab
webpage: https://engineering.jhu.edu/faculty/rama-chellappa/


chuang_gan:
display_name: Chuang Gan
role: collab
webpage: https://people.csail.mit.edu/ganchuang/

roger_girgis:
display_name: Roger Girgis
role: collab
webpage: https://mila.quebec/en/person/roger-girgis/

anthony_gosselin:
display_name: Anthony Gosselin
role: collab
webpage: https://www.linkedin.com/in/anthony-gosselin-098b7a1a1/?originalSubdomain=ca


bruno_carrez:
display_name: Bruno Carrez
role: collab
webpage: https://mila.quebec/en/person/bruno-carrez/

felix_heide:
display_name: Felix Heide
role: collab
webpage: https://www.cs.princeton.edu/~fheide/



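Each top-level key added to `_data/people.yml` above is the identifier that project front matter uses to reference a person, so spellings presumably must match exactly between the two files. A minimal sketch of that linkage, using keys taken from this commit:

```yaml
# _data/people.yml: the top-level key (e.g. felix_heide) is the identifier
felix_heide:
  display_name: Felix Heide
  role: collab
  webpage: https://www.cs.princeton.edu/~fheide/

# _projects/ctrl-sim.md front matter: entries under collaborators must
# match people.yml keys exactly (e.g. anthony_gosselin, not anothony_gosselin)
collaborators:
  - roger_girgis
  - felix_heide
```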
2 changes: 1 addition & 1 deletion _projects/01-gradslam.md
@@ -1,5 +1,5 @@
---
title: gradslam
title: "∇SLAM: Dense SLAM meets Automatic Differentiation"

notitle: false

39 changes: 39 additions & 0 deletions _projects/conceptgraphs.md
@@ -0,0 +1,39 @@
---
title: "ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning"

# status: active

notitle: false

description: |
ConceptGraphs builds an open-vocabulary scene graph from a sequence of posed RGB-D images. Compared to our previous approach, ConceptFusion, this representation is sparser and has a better understanding of the relationships between entities and objects in the graph.
people:
- ali-k
- sacha
- bipasha
- aditya
- kirsty
- liam

collaborators:
- qiao
- krishna
- corban
- william_paul
- rama_chellappa
- chuang_gan
- celso
- tenenbaum
- torralba
- shkurti

layout: project
image: /img/papers/concept-graphs.png
link: https://concept-graphs.github.io/
last-updated: 2024-09-23
---

## ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning

For robots to perform a wide variety of tasks, they require a 3D representation of the world that is semantically rich, yet compact and efficient for task-driven perception and planning. Recent approaches have attempted to leverage features from large vision-language models to encode semantics in 3D representations. However, these approaches tend to produce maps with per-point feature vectors, which do not scale well in larger environments, nor do they contain semantic spatial relationships between entities in the environment, which are useful for downstream planning. In this work, we propose ConceptGraphs, an open-vocabulary graph-structured representation for 3D scenes. ConceptGraphs is built by leveraging 2D foundation models and fusing their output to 3D by multi-view association. The resulting representations generalize to novel semantic classes, without the need to collect large 3D datasets or finetune models. We demonstrate the utility of this representation through a number of downstream planning tasks that are specified through abstract (language) prompts and require complex reasoning over spatial and semantic concepts.
2 changes: 1 addition & 1 deletion _projects/ctcnet.md
@@ -1,5 +1,5 @@
---
title: Self-supervised visual odometry estimation
title: Geometric Consistency for Self-Supervised End-to-End Visual Odometry

description: |
A self-supervised deep network for visual odometry estimation from monocular imagery.
31 changes: 31 additions & 0 deletions _projects/ctrl-sim.md
@@ -0,0 +1,31 @@
---
title: "CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning"
# status: active

notitle: false

description: |
CtRL-Sim is a framework that leverages return-conditioned offline reinforcement learning (RL) to enable reactive, closed-loop, and controllable behaviour simulation within a physics-enhanced Nocturne environment.
people:
- luke
- liam

collaborators:
- roger_girgis
- anthony_gosselin
- bruno_carrez
- florian
- felix_heide
- chris


layout: project
image: /img/papers/ctrl-sim.png
link: https://montrealrobotics.ca/ctrlsim/
last-updated: 2024-09-25
---

## CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning

Evaluating autonomous vehicle stacks (AVs) in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and are hard to control intuitively. Existing approaches address these challenges with methods that rely on heuristics or on generative models of real-world data, but these approaches either lack realism or necessitate costly iterative sampling procedures to control the generated behaviours. In this work, we take an alternative approach and propose CtRL-Sim, a method that leverages return-conditioned offline reinforcement learning to efficiently generate reactive and controllable traffic agents. Specifically, we process real-world driving data through a physics-enhanced Nocturne simulator to generate a diverse offline reinforcement learning dataset, annotated with various reward terms. With this dataset, we train a return-conditioned multi-agent behaviour model that allows for fine-grained manipulation of agent behaviours by modifying the desired returns for the various reward components. This capability enables the generation of a wide range of driving behaviours beyond the scope of the initial dataset, including adversarial behaviours. We demonstrate that CtRL-Sim can generate diverse and realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
2 changes: 1 addition & 1 deletion _projects/gradsim.md
@@ -1,5 +1,5 @@
---
title: gradsim
title: "∇Sim: Differentiable Simulation for System Identification and Visuomotor Control"

notitle: false

30 changes: 30 additions & 0 deletions _projects/gshell3d.md
@@ -0,0 +1,30 @@
---
title: "Ghost on the Shell: An Expressive Representation of General 3D Shapes"

# status: active

notitle: false

description: |
G-Shell models both watertight and non-watertight meshes of different shape topology in a differentiable way. Mesh extraction with G-Shell is stable: it requires no MLP gradient computation, only sign checks on grid vertices.
people:
- zhen
- liam

collaborators:
- yfeng
- yxiu
- wyliu
- mjb
- scholkopf

layout: project
image: /img/papers/gshell.png
link: https://gshell3d.github.io/
last-updated: 2024-09-24
---

## Ghost on the Shell: An Expressive Representation of General 3D Shapes

The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they 1) enable fast physics-based rendering with realistic material and lighting, 2) support physical simulation, and 3) are memory-efficient for modern graphics pipelines. Recent work on reconstructing and statistically modeling 3D shape, however, has critiqued meshes as being topologically inflexible. To capture a wide range of object shapes, any 3D representation must be able to model solid, watertight shapes as well as thin, open surfaces. Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling. Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parameterize open surfaces by defining a manifold signed distance field on watertight templates. With this parameterization, we further develop a grid-based and differentiable representation that parameterizes both watertight and non-watertight meshes of arbitrary topology. Our new representation, called Ghost-on-the-Shell (G-Shell), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes. We empirically demonstrate that G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes.
7 changes: 4 additions & 3 deletions _projects/lamaml.md
@@ -1,5 +1,5 @@
---
title: La-MAML
title: "La-MAML: Look-ahead Meta Learning for Continual Learning"

notitle: false

@@ -16,10 +16,11 @@ collaborators:


layout: project
image: "https://mila.quebec/wp-content/uploads/2020/11/lamaml_jpg.gif"
image: /img/papers/lamaml.png
link: https://mila.quebec/en/article/la-maml-look-ahead-meta-learning-for-continual-learning/
last-updated: 2020-11-19
---

## La-MAML
## La-MAML: Look-ahead Meta Learning for Continual Learning

The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory. Our proposed modulation of per-parameter learning rates in our meta-learning update allows us to draw connections to prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks.
Binary file added img/papers/concept-graphs.png
Binary file added img/papers/ctrl-sim.png
Binary file added img/papers/gshell.png
Binary file modified img/papers/lamaml.png
