Mithun George Jacob
I am currently a Software Engineer at X. Previously, I worked at Bosch on mapping and localization for automated driving.
I obtained my Ph.D. from the Intelligent Systems and Technologies (ISAT) Lab at the School of Industrial Engineering, Purdue University, in July 2014. Before that, I graduated from the MSE Robotics program at the GRASP Lab, School of Engineering and Applied Sciences, University of Pennsylvania, in August 2009. I also spent a year (2009-2010) at Technicolor Corporate Research in Princeton, NJ. My research interests include optimization and statistical pattern recognition applied to problems in computer vision and robotics.
Optimal Modality Selection for Cooperative Human-Robot Task Completion
When robots are expected to work side by side with humans, they should be capable of communicating in ways similar to those used in interpersonal communication. To this end, several natural communication channels for human-robot interaction (HRI) have been researched over the years, such as gestures, speech, gaze, emotion recognition, and brain-based control. It has also been shown that inter-human communication is intrinsically multimodal, which has led to several multimodal systems designed for HRI. These interfaces allow humans to communicate with robots through lexicons, i.e., vocabularies of instances of modalities assigned to specific control commands.
The process of determining the instances of modalities to be used for certain tasks and assigning them to specific commands (i.e., building the lexicon) is an open research question. Even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This gap is addressed by the proposed RIMAG framework, which generates multimodal lexicons that maximize multiple performance metrics over a wide range of communication modalities.
Because the number of possible assignments of instances of modalities to control commands (i.e., lexicons) is extremely large, new tools and theoretical methods are required to design lexicons and assess their quality. In this work, a novel methodology utilizing Random Instance Modeling and Generation (RIMAG) is presented to tackle this problem. RIMAG is designed to generate optimal multimodal lexicons for cooperative human-machine task completion: the task is modeled, and lexicons which maximize multiple performance metrics are determined. One of the contributions of this work is its ability to model and simulate both the human and machine aspects of interaction. The generated lexicons were validated in experiments with the Gestonurse robot for mock abdominal incision and opening. Experimental results indicate that the predicted optimal lexicons significantly outperform the predicted suboptimal lexicons, validating the methodology.
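The core combinatorial problem can be illustrated with a minimal sketch: enumerate every assignment of modality instances to control commands and keep the highest-scoring one. The commands, instances, and scoring metric below are invented for illustration; RIMAG itself relies on richer task models and multiple performance metrics rather than this toy exhaustive search.

```python
import itertools

def best_lexicon(commands, instances, score):
    """Return the highest-scoring injective assignment of instances to commands."""
    best, best_score = None, float("-inf")
    # Each permutation of len(commands) instances is one candidate lexicon.
    for perm in itertools.permutations(instances, len(commands)):
        lexicon = dict(zip(commands, perm))
        s = score(lexicon)
        if s > best_score:
            best, best_score = lexicon, s
    return best, best_score

# Toy metric: recognition accuracy of each instance, weighted by how well
# it fits a command (unlisted pairs get a low default fit of 0.3).
recognition_accuracy = {"point": 0.95, "grasp-word": 0.90, "nod": 0.70, "blink": 0.60}
task_fit = {("pass scalpel", "point"): 1.0, ("pass scalpel", "grasp-word"): 0.6,
            ("confirm", "nod"): 1.0, ("confirm", "blink"): 0.8}

def score(lexicon):
    return sum(recognition_accuracy[inst] * task_fit.get((cmd, inst), 0.3)
               for cmd, inst in lexicon.items())

lex, s = best_lexicon(["pass scalpel", "confirm"],
                      ["point", "grasp-word", "nod", "blink"], score)
# The search assigns "point" to "pass scalpel" and "nod" to "confirm".
```

Exhaustive search like this becomes intractable as the number of commands and instances grows, which is exactly why a principled generation-and-evaluation framework is needed.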
Gestix II: Context-based Hand Gesture Recognition in the Operating Room
Due to advances in computer-assisted surgery, human-computer interaction (HCI) in the operating room (OR) is gradually becoming commonplace. Several surgical procedures, such as tumor resections, mandate the use of computers intra-operatively and during pre-operative planning. Since HCI devices are possible sources of contamination due to the difficulty of sterilizing them, clinical protocols have been devised to delegate control of the terminal to a sterile human assistant. However, this mode of communication has been shown to be cumbersome and prone to errors, and therefore to increase the overall duration of the procedure. As a secondary effect, such indirect interaction could increase the surgeon's cognitive load, which highlights the need for a sterile method of HCI in the operating room.
The computer systems used to navigate MRI images before and during surgery (PACS) conventionally require a keyboard, mouse, or touchscreen for MRI browsing. This project proposes a sterile method for the surgeon to naturally and efficiently manipulate MRI images through touchless, freehand gestures.
The need for sterile image manipulation has led to the development of touchless HCI based on the use of facial expressions, hand and body gestures, and gaze. It should be noted that none of this research has incorporated surgical contextual cues to reject false gestures and improve gesture recognition performance.
An alternate modality is proposed to replace HCI devices such as the keyboard, mouse, and touchscreen traditionally used to navigate and manipulate a sequential set of MRI images. The proposed system extends the authors' previous work with dynamic two-handed gestures and contextual knowledge. Additionally, a novel analytical method to optimize the gesture recognition system using a priori data was developed.
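One way contextual knowledge can aid gesture recognition is a simple Bayesian fusion: a context-dependent prior over gestures reweights the classifier's likelihoods, suppressing gestures that are implausible in the current surgical step. The sketch below illustrates this idea under assumed probabilities; the gesture names and numbers are invented and do not describe the actual Gestix II model.

```python
def fuse(likelihoods, context_prior):
    """Posterior over gestures: normalize likelihood * prior (Bayes' rule)."""
    joint = {g: likelihoods[g] * context_prior.get(g, 0.0) for g in likelihoods}
    z = sum(joint.values()) or 1.0  # guard against an all-zero prior
    return {g: p / z for g, p in joint.items()}

# The classifier alone slightly favors "zoom", but the current context
# (browsing a stack of MRI slices) makes "next image" far more plausible.
likelihoods = {"next image": 0.45, "zoom": 0.55}
prior_browsing = {"next image": 0.8, "zoom": 0.2}

posterior = fuse(likelihoods, prior_browsing)
best = max(posterior, key=posterior.get)  # context flips the decision to "next image"
```

In this toy case the contextual prior overturns the raw classifier decision, which is the mechanism by which false gestures can be disambiguated.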
Selected papers: ''Context-based hand gesture recognition for the operating room'', ''Intention, Context and Gesture Recognition for Sterile MRI Navigation in the Operating Room'', and ''Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images''.
*Check out the timeline for a description of research prior to my time at Purdue.
Lorenz Wellhausen, Mithun George Jacob, ''Map-optimized Probabilistic Traffic Rules'' in the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-14, Daejeon, Korea.
Mithun George Jacob, Juan P. Wachs, ''Optimal Modality Selection for Cooperative Human-Robot Task Completion'' in the IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 3388-3400, Dec. 2016 (published online in Dec. 2015).
Mithun George Jacob, Juan P. Wachs, ''Optimal Modality Selection for Multimodal Human-Machine Systems using RIMAG'' in the Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 5-8, 2014, San Diego, CA.
Mithun George Jacob, Juan P. Wachs, ''Context-based hand gesture recognition for the operating room'' in Pattern Recognition Letters (June 2013).
Mithun George Jacob, Yu-Ting Li, Juan P. Wachs, A. George Akingba, ''Collaboration with a Robotic Scrub Nurse'' in Commun. ACM 56(5): 68-75 (2013).
Mithun George Jacob, Yu-Ting Li, Juan P. Wachs, ''Surgical Instrument Handling and Retrieval in the Operating Room with a Multimodal Robotic Assistant'' in the Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), May 6-10 2013, Karlsruhe, Germany.
Mithun George Jacob, Juan P. Wachs, Rebecca A. Packer, ''Hand Gesture-based Sterile Interface for the Operating Room Using Contextual Cues for the Navigation of Radiological Images'', in the Journal of the American Medical Informatics Association (JAMIA), December 2012.
Yu-Ting Li, Mithun George Jacob, Juan P. Wachs, ''A Cyber-Physical Management System for Delivering and Monitoring Surgical Instruments in the OR'' in Surgical Innovation, October 2012.
Mithun George Jacob, Christopher Cange, Juan Wachs, Rebecca Packer, ''Intention, Context and Gesture Recognition for Sterile MRI Navigation in the Operating Room'' in Proceedings of the 16th Iberoamerican Congress conference on Progress in Pattern Recognition (CIARP), Image Analysis, Computer Vision, and Applications, Buenos Aires, Argentina, pp. 220-227.
Mithun George Jacob, Yu-Ting Li, Juan P. Wachs, ''Gestonurse: a multimodal robotic scrub nurse'' in Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Boston, MA, 2012, pp. 153-154.
Juan P. Wachs, Mithun George Jacob, Yu-Ting Li, A. George Akingba, ''Does a robotic scrub nurse improve economy of movements?'' in Medical Imaging 2012: Image-Guided Procedures, Robotic Interventions, and Modeling. Proceedings of the SPIE, Volume 8316, pp. 83160E-83160E-7 (2012).
Mithun George Jacob, Yu-Ting Li, Juan P. Wachs, ''Gestonurse: a robotic surgical nurse for handling surgical instruments in the operating room'' in Journal of Robotic Surgery (27 Nov 2011), pp. 1-11.
Mithun George Jacob, Yu-Ting Li, Juan P. Wachs, ''A Gesture Driven Robotic Scrub Nurse'' In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Anchorage, Alaska, pp. 2039-2044.
V.G. Sridhar, Mithun George Jacob, ''Automated vision inspection system for a water bottling industry'' in Proceedings of the 9th International Symposium on Measurement and Quality Control (ISMQC) 2007, India.
Mithun George Jacob, V.G. Sridhar, ''Multiple Object Recognition on the Assembly Line'' in Proceedings of the 22nd International Conference on CAD/CAM, Robotics & Factories of the Future (CARS&FOF) July 2006, India, p. 1066.
Mithun George Jacob, Sitaram Bhagavathy, ''Motion compensating transformation for video coding'', Patent PCT/US2012/020888 pending, January 2012.
Dong-Qing Zhang, Sitaram Bhagavathy, Mithun George Jacob, ''Methods and Apparatus for Encoding Video Signals using Motion compensated Example-based Super-resolution for Video Compression'', Patent PCT/US2011/050913 pending, September 2011.
Dong-Qing Zhang, Sitaram Bhagavathy, Mithun George Jacob, ''Methods and Apparatus for Decoding Video Signals using Motion compensated Example-based Super-resolution for Video Compression'', Patent PCT/US2011/050915 pending, September 2011.
Jesus Barcons-Palau, Sitaram Bhagavathy, Joan Llach, Mithun George Jacob, ''Segmenting grass regions and playfield in sports videos'', Patent PCT/US2010/000004 pending, April 2010.
Mithun George Jacob, Sitaram Bhagavathy, Jesus Barcons-Palau, Joan Llach, ''Detection of field lines in sport videos'', Patent PCT/US2010/000032 pending, July 2010.
2016 Innovation Award - Bosch Research and Technology Center, July 2016, Palo Alto
2014 Outstanding Graduate Student Research Award - College of Engineering, April 2014, Purdue University
Best Poster Award - Industrial Engineering Research Symposium, April 2013, Purdue University
Best Poster Award - Industrial Engineering Research Symposium, April 2012, Purdue University
Student Travel Award - IE Graduate Student Organization, September 2011, Purdue University
Best Student Paper Finalist and awarded Student Travel Grant - 2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC), September 2011, Anchorage, Alaska
Second in GETC (Programming) in Phreak, a National Level Programming Contest in 2005, India
First in Panacea (Debugging and Programming) at ISTE Confluence 2004, India
''Gesture recognition system could reduce surgical delays'' - TheEngineer.
''Robots and people can all get along'' - NPR Marketplace.
''Robotic nurse in development at Purdue'' - Fox News.
''Future surgeons may use robotic nurse, 'gesture recognition''' - Forbes.com.
Treasurer, Omega Rho, Honor Society of INFORMS, Purdue University Chapter (2013 - 2014).
Treasurer, INFORMS Purdue University Chapter (2013 - 2014).
Member, Technical Committee on Human-Computer Interaction, IEEE SMC Society (2011 - 2013).
Member, IEEE RAS (Robotics and Automation Society) (2010 - 2013).
Member, IEEE ComSoc (Communications Society) (2012 - 2013).
Member, IEEE Computer Society (2012 - 2013).
Member, IEEE SMC (Systems, Man & Cybernetics) Society (2010 - 2013).
Reviewer, IROS, ICRA, IV, ITSC, and SMC.
Reviewer, TSMC: Systems, JRTIP, ESA, and MVA.