- 9DTact: A Compact Vision-Based Tactile Sensor for Accurate 3D Shape Reconstruction and Generalizable 6D Force Estimation, RA-L 2023. [Paper] [Website] [Code] [Hardware] [Tutorial] [Changyi Lin, CMU]
- 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing, CoRL 2024. [Paper] [Website] [Code coming soon] [Hardware Tutorial] [Hardware Code] [Binghao Huang, Columbia University]
- GelSlim 4.0: Focusing on Touch and Reproducibility, arXiv 2024. [Paper] [Website] [Hardware] [Code: gelslim_depth] [Code: gelslim_shear]
- Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots, RSS 2024. [Paper] [Website] [Code] [Hardware] [real-stanford]
- Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface, arXiv 2024. [Paper] [Website] [Data Code] [Yan Ding, Shanghai AI Laboratory]
- ALOHA: Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware, RSS 2023. [Paper] [Website] [ACT Code] [ALOHA Code] [Chelsea Finn, Stanford University]
- ALOHA 2: An Enhanced Low-Cost Hardware for Bimanual Teleoperation, arXiv 2024. [Paper] [Website] [Code] [Google DeepMind]
- DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation, RSS 2024. [Paper] [Website] [Code] [Hardware] [Li Fei-Fei, Stanford University]
- [awesome-robotics-datasets], 2020.
- FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation, RSS 2023. [Paper] [Website] [Code] [Docs]
- FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning, IJRR 2024. [Paper] [Website] [Code]
- DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset, arXiv 2024. [Paper] [Website] [Code] [Hardware]
- Open X-Embodiment: Robotic Learning Datasets and RT-X Models, arXiv 2023. [Paper] [Website] [Code] [Dataset] [Google]