Human Robot Interaction

Development of a Robotic Manipulation Algorithm for Grasping and Assembling with Machine Learning Using Visual and Tactile Sensing Information

Abstract

A human's ability to grasp and recognize unknown objects is a constant source of inspiration for robotics research. To perform grasping and manipulation tasks in the unstructured environments of the real world, a robot must be able to compute grasping points from various sensor data. Moreover, it needs to act in dynamic environments, whether the difficulty comes from changes in the robot's workspace, occlusion, noise and errors in perception, or inaccuracies in the robot's control. This presentation will share our robotic manipulation algorithm for grasping using vision and tactile sensing information. In more detail, this talk will introduce how to extract meaningful information from multiple tactile sensors and how to build a learning-based vision-robot calibration system. I will also describe the overall system integration process, including the sensors, camera, robot, and gripper manipulation.
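To make the idea of combining the two sensing modalities concrete, the sketch below shows one possible way a small network could fuse visual and tactile features to regress a grasp point and a quality score. This is only an illustrative toy example under assumed inputs (an RGB-D image crop and flattened readings from several tactile sensor pads); the class name, layer sizes, and sensor dimensions are hypothetical and are not the speaker's actual architecture.

```python
import torch
import torch.nn as nn


class VisuoTactileGraspNet(nn.Module):
    """Toy fusion network: encodes an RGB-D crop and a set of tactile
    readings, then regresses a grasp point (x, y, z) and a quality score."""

    def __init__(self, num_tactile_sensors=4, taxels_per_sensor=16):
        super().__init__()
        # Small CNN encoder for a 4-channel RGB-D image patch.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # MLP encoder for the flattened readings of all tactile sensor pads.
        self.tactile_encoder = nn.Sequential(
            nn.Linear(num_tactile_sensors * taxels_per_sensor, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Fused head: grasp point (x, y, z) plus a grasp-quality logit.
        self.head = nn.Linear(32 + 32, 4)

    def forward(self, rgbd, tactile):
        v = self.vision_encoder(rgbd)                 # (B, 32)
        t = self.tactile_encoder(tactile.flatten(1))  # (B, 32)
        out = self.head(torch.cat([v, t], dim=1))     # (B, 4)
        grasp_point, quality = out[:, :3], out[:, 3]
        return grasp_point, quality


if __name__ == "__main__":
    net = VisuoTactileGraspNet()
    rgbd = torch.randn(2, 4, 64, 64)    # batch of RGB-D crops (hypothetical size)
    tactile = torch.randn(2, 4, 16)     # 4 sensor pads x 16 taxels each
    point, quality = net(rgbd, tactile)
    print(point.shape, quality.shape)   # torch.Size([2, 3]) torch.Size([2])
```

In a similar spirit, the learning-based vision-robot calibration mentioned in the talk could be framed as regressing the camera-to-robot transform from paired observations, though the actual method will be presented in the talk itself.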

Biography

Jung-Jun Kim is currently an Assistant Research Scientist at the Smart Mobility Research Center, Korea Institute of Robotics and Technology Convergence, South Korea. His research mainly focuses on computer vision, in particular material recognition and 3D reconstruction. He received his B.E. and M.E. degrees in Computer Engineering from Kyunghee University, South Korea, in 2015 and 2017, respectively. He worked as an intern at the Korea Institute of Science and Technology in 2017.


Contact

Jung-Jun Kim
Korea Institute of Robotics and Technology Convergence
39, Jigok-ro, Pohang, Gyeongsangbuk-do, 37666, South Korea
jjkim@kiro.re.kr