no code implementations • 4 Feb 2024 • Lirui Wang, Jialiang Zhao, Yilun Du, Edward H. Adelson, Russ Tedrake
Training general robotic policies from heterogeneous data for different tasks is a significant challenge.
no code implementations • 19 Sep 2023 • Jialiang Zhao, Edward H. Adelson
Moreover, existing methods to estimate proprioceptive information such as total forces and torques applied on the finger from camera-based tactile sensors are not effective when the contact geometry is complex.
no code implementations • 14 Mar 2023 • Jialiang Zhao, Maria Bauza, Edward H. Adelson
FingerSLAM is constructed with two constituent pose estimators: a multi-pass refined tactile-based pose estimator that captures movements from detailed local textures, and a single-pass vision-based pose estimator that predicts from a global view of the object.
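The abstract describes fusing a local tactile pose estimate with a global vision-based one. As a rough illustration only (the function name and confidence-weighted averaging are my own simplification, not the paper's actual fusion method, which would operate on full SE(3) poses), two noisy planar estimates can be combined by weighting each with its confidence:

```python
import numpy as np

def fuse_pose_estimates(tactile_pose, tactile_conf, vision_pose, vision_conf):
    """Fuse two planar pose estimates (x, y) by confidence weighting.

    Hypothetical sketch: a confidence-weighted average of the tactile
    and vision estimates. A real system like the one described would
    use a proper pose representation and a principled estimator; this
    only conveys the idea of combining two complementary sensors.
    """
    w_tactile = tactile_conf / (tactile_conf + vision_conf)
    w_vision = 1.0 - w_tactile
    return w_tactile * np.asarray(tactile_pose) + w_vision * np.asarray(vision_pose)
```

With equal confidences the fused estimate is the midpoint of the two inputs; as one sensor's confidence grows, the result moves toward its estimate.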
no code implementations • 9 Aug 2018 • Shaoxiong Wang, Jiajun Wu, Xingyuan Sun, Wenzhen Yuan, William T. Freeman, Joshua B. Tenenbaum, Edward H. Adelson
Perceiving accurate 3D object shape is important for robots to interact with the physical world.
no code implementations • 28 May 2018 • Roberto Calandra, Andrew Owens, Dinesh Jayaraman, Justin Lin, Wenzhen Yuan, Jitendra Malik, Edward H. Adelson, Sergey Levine
This model -- a deep, multimodal convolutional network -- predicts the outcome of a candidate grasp adjustment, and then executes a grasp by iteratively selecting the most promising actions.
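The selection loop described here — score candidate grasp adjustments with a learned outcome predictor, keep the most promising, repeat — can be sketched generically. This is a hypothetical greedy sampler, not the paper's implementation; `score_fn` stands in for the deep multimodal network's predicted grasp-success score:

```python
import random

def choose_grasp(score_fn, current_grasp, n_candidates=16, n_iters=3, noise=0.05):
    """Greedy iterative grasp refinement (illustrative sketch only).

    Each iteration samples small perturbations of the current grasp
    parameters, scores every candidate with the outcome predictor, and
    keeps the best-scoring one. Retaining the current grasp among the
    candidates guarantees the score never decreases.
    """
    best = current_grasp
    for _ in range(n_iters):
        candidates = [tuple(g + random.uniform(-noise, noise) for g in best)
                      for _ in range(n_candidates)]
        candidates.append(best)  # fallback: never move to a worse grasp
        best = max(candidates, key=score_fn)
    return best
```

The same skeleton accommodates other action-selection strategies (e.g. cross-entropy-method sampling) by changing how candidates are drawn.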
1 code implementation • 16 Oct 2017 • Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, Sergey Levine
In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.
no code implementations • CVPR 2016 • Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward H. Adelson, William T. Freeman
Objects make distinctive sounds when they are hit or scratched.
2 code implementations • 21 Nov 2015 • Phillip Isola, Daniel Zoran, Dilip Krishnan, Edward H. Adelson
We propose a self-supervised framework that learns to group visual entities based on their rate of co-occurrence in space and time.
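The grouping principle — entities that co-occur often belong together — can be illustrated with a counting analogue. The paper's method learns an embedding; this toy sketch (function name and thresholding rule are my own) instead counts co-occurrences directly and groups entities via connected components of a thresholded affinity graph:

```python
import itertools

def cooccurrence_groups(observations, n_entities, threshold=0.5):
    """Group integer entity ids by their rate of co-occurrence.

    Illustrative counting analogue of the co-occurrence idea: entities
    whose co-occurrence count, relative to how often the rarer of the
    two appears, exceeds `threshold` are linked, and linked components
    form groups (union-find).
    """
    counts = [[0] * n_entities for _ in range(n_entities)]
    appears = [0] * n_entities
    for obs in observations:
        for e in set(obs):
            appears[e] += 1
        for a, b in itertools.combinations(sorted(set(obs)), 2):
            counts[a][b] += 1
            counts[b][a] += 1

    parent = list(range(n_entities))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a in range(n_entities):
        for b in range(a + 1, n_entities):
            denom = max(min(appears[a], appears[b]), 1)
            if counts[a][b] / denom >= threshold:
                parent[find(a)] = find(b)  # merge the two components

    groups = {}
    for e in range(n_entities):
        groups.setdefault(find(e), []).append(e)
    return sorted(groups.values())
```

Entities 0 and 1 appearing together in most observations end up in one group even if each occasionally co-occurs with something else, which mirrors the rate-based (rather than absolute-count) grouping the abstract describes.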
no code implementations • CVPR 2015 • Phillip Isola, Joseph J. Lim, Edward H. Adelson
Our system works by generalizing across object classes: states and transformations learned on one set of objects are used to interpret the image collection for an entirely new object class.
no code implementations • CVPR 2015 • Ioannis Gkioulekas, Bruce Walter, Edward H. Adelson, Kavita Bala, Todd Zickler
We also discuss the existence of shape and material metamers, or combinations of distinct shape or material parameters that generate the same edge profile.
no code implementations • 26 Dec 2014 • Zhengdong Zhang, Phillip Isola, Edward H. Adelson
In this paper, we study the problem of reproducing the world lighting from a single image of an object covered with random specular microfacets on the surface.
no code implementations • CVPR 2013 • Rui Li, Edward H. Adelson
Sensing surface textures by touch is a valuable capability for robots.