I am interested in multimodal representations for adaptable embodied AI.
My long term vision is to develop robots that can perform multiple tasks around the home and learn new skills from their users.
My research focuses on the intersection of language, vision, and action to enhance real-time perception, motion control, and dialogue for robots.
Our work on Audio Noise Awareness using Visuals of Indoor environments for NAVIgation (ANAVI) was accepted at the Conference on Robot Learning (CoRL) 2024 in Munich, Germany.
🧵1/8 So annoying when my 🤖 vacuum cleaner buzzes loudly during my Zoom meeting! Can we teach robots to be aware of their noise levels at home? Introducing ANAVI—a framework that uses indoor visuals to predict sound propagation! 🎶🏠
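ANAVI itself learns to predict sound levels from indoor visuals; as a point of reference only (not code from the paper), the free-field baseline any such predictor must improve on is plain distance attenuation, where loudness drops about 6 dB per doubling of distance:

```python
import math

def free_field_db(source_db: float, distance_m: float, ref_m: float = 1.0) -> float:
    """Inverse-square (free-field) attenuation: level drops 20*log10(d/d_ref),
    i.e. roughly -6 dB per doubling of distance. Real rooms deviate from this
    (walls reflect and absorb sound), which is the gap a learned visual model
    targets; this function is just the physics baseline."""
    return source_db - 20.0 * math.log10(distance_m / ref_m)

# A 70 dB source measured at 1 m heard from 2 m away:
print(round(free_field_db(70.0, 2.0), 1))  # → 64.0
```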
Two papers, led by collaborators Quan and Montse respectively at Google DeepMind, were accepted at ICRA 2024 in Yokohama, Japan.
(1) RT-X introduces a massive multi-institution collaboration exploring robotic datasets and policies covering many robot embodiments. The open-sourced datasets enable generalist policies to control many robots across many academic labs. https://t.co/2MZ0pTvM8j
(2) PromptBook extends our prior investigations into leveraging LLMs for generating robot code, with many important details about scaling up Code as Policies for robotics.
Check out @montseglz's talk on Tuesday! Presentation: TuBT30-NT.8, 13:30-15:00; Poster: 30.08, 16:30-18:00.
Preprint: Survey on General-Purpose Robots via Foundation Models
We shared the first preprint of our survey on foundation models in robotics.
🦾🤖📚 we’ve been exploring the landscape of foundation models in robotics—unveiling insights on current trends and open challenges. A must-read for those interested in the path towards general-purpose robotics. #Robotics #FoundationModels #SurveyPaper https://t.co/VziYf3VScn
One (sad?) takeaway for me: when planning-based and learning-based methods are compared on an even footing in terms of time invested, we basically never see learning-based methods working better.
HomeRobot is 100% a test of generalization, as object *classes* + envs are totally unseen https://t.co/3nBBudNgns
Two papers and two workshop papers presented at CoRL 2023
I did not attend CoRL this year but check out some of our recent work presented by colleagues at the main conference:
1. HomeRobot: Open-Vocabulary Mobile Manipulation
The future of robot butlers starts with mobile manipulation. We’re announcing the NeurIPS 2023 Open-Vocabulary Mobile Manipulation Challenge! - Full robot stack ✅ - Parallel sim and real evaluation ✅ - No robot required ✅👀 https://t.co/mggAbRhrLP pic.twitter.com/Wartsmkyyl
Also, check out some of the work at LangRob and Robot Learning Workshops.
3. PromptBook leverages LLMs for generating robot code! Beyond the examples used in Code as Policies, we explore instructions, chain-of-thought prompting, and state estimation. Led by Montserrat Gonzalez and Andy Zeng at Google DeepMind Robotics. Here is the paper on OpenReview.
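As a rough illustration of the prompting idea (this is not the actual PromptBook or Code as Policies prompt format — the robot API names `move_to`, `detect`, `grasp`, `release` and the prompt layout below are all hypothetical), a code-generation prompt pairs few-shot code examples with chain-of-thought comments and injects a description of the current scene as state estimation:

```python
# Hypothetical sketch of a Code-as-Policies-style prompt builder.
# Few-shot examples show the LLM how instructions map to robot API calls;
# a state description is prepended so generated code can reference the scene.

EXAMPLES = [
    (
        "put the apple in the bowl",
        "# Think: locate the apple, pick it up, move over the bowl, release.\n"
        "move_to(detect('apple')); grasp()\n"
        "move_to(detect('bowl')); release()",
    ),
]

def build_prompt(instruction: str, state: dict) -> str:
    """Assemble a few-shot prompt: API preamble, examples, scene state, task."""
    parts = ["# You control a robot arm via move_to/detect/grasp/release."]
    for task, code in EXAMPLES:
        parts.append(f"# Task: {task}\n{code}")
    # State estimation injected as a comment describing current perception.
    state_desc = ", ".join(f"{k}={v}" for k, v in state.items())
    parts.append(f"# Current scene: {state_desc}")
    parts.append(f"# Task: {instruction}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "stack the red block on the blue block",
    {"objects": ["red block", "blue block"], "gripper": "open"},
)
print(prompt)
```

The LLM's completion of such a prompt would itself be robot code, which is then executed against the (here imaginary) robot API.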
4. Open X-Embodiment is a huge robotics data collection effort to enable training of robotic foundation models across multiple embodiments, different tasks, and different lab setups.
RT-X: generalist AI models lead to 50% improvement over RT-1 and 3x improvement over RT-2, our previous best models. 🔥🥳🧵
I am excited to start as a student researcher at Google DeepMind, Mountain View. I will be working with Debidatta Dwibedi on end-to-end video conditioned policy learning for robotics.
Check out HomeRobot, a large-scale sim-to-real mobile manipulation challenge at @NeurIPSConf 2023! More details about the challenge here. You can submit to EvalAI here. Our paper (accepted at CoRL 2023) presents RL and heuristic policies for sim-to-real transfer and identifies the challenges in the domain.
(1/5) Every home is different, and every person likes things done in their particular way. Therefore, home robots of the future need to both reason about the sequential nature of day-to-day tasks and generalize to user's preferences.
Selected among 224 young researchers to meet laureates in mathematics and computer science (postponed to September 2021); participated in the Virtual HLF 2020.
I was a Mitacs Globalink Research Intern at Simon Fraser University, Burnaby, Canada. I worked with Prof. Oliver Schulte on Bayesian optimization algorithms for machine learning. Find our code here.
March 2017,
Citi Women Leader Award (CWLA) Scholarship
Awarded a one-year study scholarship (top 3 among 1,200 candidates selected nationwide).
Vidhi Jain, Maria Attarian, Nikhil J Joshi, Ayzaan Wahid, Danny Driess, Quan Vuong, Pannag R Sanketi, Pierre Sermanet, Stefan Welker, Christine Chan, Igor Gilitschenski, Yonatan Bisk, Debidatta Dwibedi.
20th Robotics: Science and Systems (RSS) Conference, 2024.
@INPROCEEDINGS{Jain-RSS-24, AUTHOR = {Vidhi Jain AND Maria Attarian AND Nikhil J Joshi AND Ayzaan Wahid AND Danny Driess AND Quan Vuong AND Pannag R Sanketi AND Pierre Sermanet AND Stefan Welker AND Christine Chan AND Igor Gilitschenski AND Yonatan Bisk AND Debidatta Dwibedi}, TITLE = {Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers}, BOOKTITLE = {Proceedings of Robotics: Science and Systems}, YEAR = {2024}, ADDRESS = {Delft, Netherlands}, MONTH = {July}, DOI = {10.15607/RSS.2024.XX.052} }
@article{hu2023Toward, author = {Yafei Hu and Quanting Xie and Vidhi Jain and Jonathan Francis and Jay Patrikar and Nikhil Keetha and Seungchan Kim and Yaqi Xie and Tianyi Zhang and Shibo Zhao and Yu-Quan Chong and Chen Wang and Katia Sycara and Matthew Johnson-Roberson and Dhruv Batra and Xiaolong Wang and Sebastian Scherer and Zsolt Kira and Fei Xia and Yonatan Bisk}, title = {Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis}, journal = {arXiv preprint arXiv:2312.08782}, year = {2023}, }
@inproceedings{dwibedi2024flexcap, title={FlexCap: Generating Rich, Localized, and Flexible Captions in Images}, author={Debidatta Dwibedi and Vidhi Jain and Jonathan Tompson and Andrew Zisserman and Yusuf Aytar}, year={2024}, booktitle={Conference on Neural Information Processing Systems (NeurIPS)}, month={December}, }
Montserrat Gonzalez Arenas, Ted Xiao, Sumeet Singh, Vidhi Jain, Allen Z. Ren, Quan Vuong, Jacob Varley, Alexander Herzog, Isabel Leal, Sean Kirmani, Mario Prats, Dorsa Sadigh, Vikas Sindhwani, Kanishka Rao, Jacky Liang, Andy Zeng.
IEEE International Conference on Robotics and Automation (ICRA) 2024.
@inproceedings{arenas2023how, title={How to Prompt Your Robot: A PromptBook for Manipulation Skills with Code as Policies}, author={Montserrat Gonzalez Arenas and Ted Xiao and Sumeet Singh and Vidhi Jain and Allen Z. Ren and Quan Vuong and Jake Varley and Alexander Herzog and Isabel Leal and Sean Kirmani and Dorsa Sadigh and Vikas Sindhwani and Kanishka Rao and Jacky Liang and Andy Zeng}, booktitle={2nd Workshop on Language and Robot Learning: Language as Grounding}, year={2023}, url={https://openreview.net/forum?id=T8AiZj1QdN} }
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jaehyung Kim, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. 
Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Keyvan Majd, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui.
IEEE International Conference on Robotics and Automation (ICRA) 2024.
@inproceedings{ArXiv:Collaboration2023, author = {Open X-Embodiment Collaboration and Abhishek Padalkar and Acorn Pooley and Ajinkya Jain and Alex Bewley and Alex Herzog and Alex Irpan and Alexander Khazatsky and Anant Rai and Anikait Singh and Anthony Brohan and Antonin Raffin and Ayzaan Wahid and Ben Burgess-Limerick and Beomjoon Kim and Bernhard Schölkopf and Brian Ichter and Cewu Lu and Charles Xu and Chelsea Finn and Chenfeng Xu and Cheng Chi and Chenguang Huang and Christine Chan and Chuer Pan and Chuyuan Fu and Coline Devin and Danny Driess and Deepak Pathak and Dhruv Shah and Dieter Büchler and Dmitry Kalashnikov and Dorsa Sadigh and Edward Johns and Federico Ceola and Fei Xia and Freek Stulp and Gaoyue Zhou and Gaurav S. Sukhatme and Gautam Salhotra and Ge Yan and Giulio Schiavi and Hao Su and Hao-Shu Fang and Haochen Shi and Heni Ben Amor and Henrik I Christensen and Hiroki Furuta and Homer Walke and Hongjie Fang and Igor Mordatch and Ilija Radosavovic and Isabel Leal and Jacky Liang and Jaehyung Kim and Jan Schneider and Jasmine Hsu and Jeannette Bohg and Jeffrey Bingham and Jiajun Wu and Jialin Wu and Jianlan Luo and Jiayuan Gu and Jie Tan and Jihoon Oh and Jitendra Malik and Jonathan Tompson and Jonathan Yang and Joseph J. 
Lim and João Silvério and Junhyek Han and Kanishka Rao and Karl Pertsch and Karol Hausman and Keegan Go and Keerthana Gopalakrishnan and Ken Goldberg and Kendra Byrne and Kenneth Oslund and Kento Kawaharazuka and Kevin Zhang and Keyvan Majd and Krishan Rana and Krishnan Srinivasan and Lawrence Yunliang Chen and Lerrel Pinto and Liam Tan and Lionel Ott and Lisa Lee and Masayoshi Tomizuka and Maximilian Du and Michael Ahn and Mingtong Zhang and Mingyu Ding and Mohan Kumar Srirama and Mohit Sharma and Moo Jin Kim and Naoaki Kanazawa and Nicklas Hansen and Nicolas Heess and Nikhil J Joshi and Niko Suenderhauf and Norman Di Palo and Nur Muhammad Mahi Shafiullah and Oier Mees and Oliver Kroemer and Pannag R Sanketi and Paul Wohlhart and Peng Xu and Pierre Sermanet and Priya Sundaresan and Quan Vuong and Rafael Rafailov and Ran Tian and Ria Doshi and Roberto Martín-Martín and Russell Mendonca and Rutav Shah and Ryan Hoque and Ryan Julian and Samuel Bustamante and Sean Kirmani and Sergey Levine and Sherry Moore and Shikhar Bahl and Shivin Dass and Shuran Song and Sichun Xu and Siddhant Haldar and Simeon Adebola and Simon Guist and Soroush Nasiriany and Stefan Schaal and Stefan Welker and Stephen Tian and Sudeep Dasari and Suneel Belkhale and Takayuki Osa and Tatsuya Harada and Tatsuya Matsushima and Ted Xiao and Tianhe Yu and Tianli Ding and Todor Davchev and Tony Z. 
Zhao and Travis Armstrong and Trevor Darrell and Vidhi Jain and Vincent Vanhoucke and Wei Zhan and Wenxuan Zhou and Wolfram Burgard and Xi Chen and Xiaolong Wang and Xinghao Zhu and Xuanlin Li and Yao Lu and Yevgen Chebotar and Yifan Zhou and Yifeng Zhu and Ying Xu and Yixuan Wang and Yonatan Bisk and Yoonyoung Cho and Youngwoon Lee and Yuchen Cui and Yueh-hua Wu and Yujin Tang and Yuke Zhu and Yunzhu Li and Yusuke Iwasawa and Yutaka Matsuo and Zhuo Xu and Zichen Jeff Cui}, title = {Open X-Embodiment: Robotic Learning Datasets and RT-X Models}, booktitle = {International Conference on Robotics and Automation (ICRA)}, year = {2024}, url = {https://robotics-transformer-x.github.io}, }
@inproceedings{parashar2023slap, title={SLAP: Spatial-Language Attention Policies}, author={Priyam Parashar and Vidhi Jain and Xiaohan Zhang and Jay Vakil and Sam Powers and Yonatan Bisk and Chris Paxton}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=7Pkzm2FgUmq} }
@inproceedings{yenamandra2023homerobot, title={HomeRobot: Open-Vocabulary Mobile Manipulation}, author={Sriram Yenamandra and Arun Ramachandran and Karmesh Yadav and Austin S Wang and Mukul Khanna and Theophile Gervet and Tsung-Yen Yang and Vidhi Jain and Alexander Clegg and John M Turner and Zsolt Kira and Manolis Savva and Angel X Chang and Devendra Singh Chaplot and Dhruv Batra and Roozbeh Mottaghi and Yonatan Bisk and Chris Paxton}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=b-cto-fetlz} }
@inproceedings{jain2022transformers, title={Transformers Are Adaptable Task Planners}, author={Vidhi Jain and Yixin Lin and Eric Undersander and Yonatan Bisk and Akshara Rai}, booktitle={6th Annual Conference on Robot Learning}, year={2022}, url={https://openreview.net/forum?id=Eal_lL08v_l} }
@article{Jain2021LearningET, title={Learning Embeddings that Capture Spatial Semantics for Indoor Navigation}, author={Vidhi Jain and Prakhar Agarwal and Shishir G. Patil and Katia P. Sycara}, journal={ArXiv}, year={2021}, volume={abs/2108.00159} }
@inproceedings{10.1145/3027063.3048417, author = {Jain, Vidhi and Agarwal, Prakhar}, title = {Symptomatic Diagnosis and Prognosis of Psychiatric Disorders through Personal Gadgets}, year = {2017}, isbn = {9781450346566}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3027063.3048417}, doi = {10.1145/3027063.3048417}, abstract = {Mental disorder has been shrouded as a stigma and disregarded as a secondary issue to physical health. It has become a major contributor to morbidity, disability and at times, fatality. Through our research, we show that the data generated through our daily interaction with technology has consistent patterns to identify symptoms in prodromal phase of degrading mental health. We propose a methodological data driven system that will help to raise an early alarm on the onset of symptoms of potential psychiatric disorders. The system collects the user's data from different human-computer interfaces to create a fine-grain electronic health portfolio, which can assist doctors in differential diagnosis as well as prognosis.}, booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems}, pages = {118–123}, numpages = {6}, keywords = {mental health symptoms, design technique, data collection and processing}, location = {Denver, Colorado, USA}, series = {CHI EA '17} }
@inproceedings{gholami2017model, author = {Sajjad Gholami and Oliver Schulte and Vidhi Jain and Qiang Zhao}, title = {Model Selection Scores for Multi-Relational Bayesian Networks}, booktitle = {Extended Abstract for DeLBP Workshop at IJCAI 2017}, year = {2017}, }
Presented the paper Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers at Robotics: Science and Systems (RSS) 2024 in Delft, Netherlands.