Date: August 8, 2019 15:00-17:00

Venue: IRCN Seminar Room@13F, Faculty of Medicine Experimental Research Building, School of Medicine, Hongo Campus

Speakers: Emre Ugur 
                     Assistant Professor, Department of Computer Engineering, Bogazici University
                     Erhan Oztop
                     Professor, Computer Science Department, Ozyegin University

For more information: IRCN Administrative Office international.ircn@gs.mail.u-tokyo.ac.jp
*Please contact us if you are interested in this seminar.


Abstract:
"Learning to IMAGINE the Action Consequences in Robotic Manupulation"
Predicting the consequences of one’s own actions is an important requirement for safe human-robot collaboration and for the use of robots in personal robotics. Neurophysiological and behavioral data suggest that the human brain benefits from internal forward models that continuously predict the outcomes of the generated motor commands for trajectory planning, movement control, and multi-step planning. First, I will present our recent extension of propagation networks that enables the robot to predict the effects of its actions in scenes containing multiple articulated, multi-part objects. Belief Regulated Dual Propagation Networks (BRDPN) consist of two complementary components, a physics predictor and a belief regulator. While the former predicts the future states of the object(s) manipulated by the robot, the latter constantly corrects the robot’s knowledge about the objects and their relations.

Next, I will talk about our recent learning-from-demonstration framework, which is based on Conditional Neural Processes. CNMPs extract prior knowledge directly from the training data by sampling observations from it, and use this knowledge to predict a conditional distribution over any other target points. CNMPs learn complex temporal, multi-modal sensorimotor relations in connection with external parameters and goals; produce movement trajectories in joint or task space; and execute these trajectories through a high-level feedback control loop. Conditioned on an external goal encoded in the robot’s sensorimotor space, the CNMP generates the sensorimotor trajectory that is expected to be observed during a successful execution of the task, and the corresponding motor commands are executed.
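As a rough illustration of the conditioning idea described above (a minimal sketch, not the speakers' implementation), the snippet below shows a conditional-neural-process-style pass in plain NumPy: a few observed (time, value) pairs are encoded, aggregated into a single latent, and decoded at query times into a predicted mean and standard deviation. The network sizes, the untrained random weights, and the toy conditioning points are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random (untrained) linear maps: the sketch only shows the data flow,
# i.e. encode observations -> aggregate -> decode queries, in the style of a
# conditional neural process.
D_LATENT = 16
W_enc = rng.normal(scale=0.5, size=(2, D_LATENT))        # (t, y) -> latent
W_dec = rng.normal(scale=0.5, size=(D_LATENT + 1, 2))    # (latent, t) -> (mean, log_std)

def encode(context):
    """Encode each observed (t, y) pair and mean-aggregate into one latent."""
    h = np.tanh(context @ W_enc)          # (n_context, D_LATENT)
    return h.mean(axis=0)                 # permutation-invariant aggregation

def decode(latent, t_query):
    """Predict a mean and std of the sensorimotor value at each query time."""
    n = len(t_query)
    inp = np.hstack([np.tile(latent, (n, 1)), t_query[:, None]])
    out = inp @ W_dec
    mean, log_std = out[:, 0], out[:, 1]
    return mean, np.exp(log_std)

# Condition on a few observed points of a hypothetical demonstrated trajectory,
# e.g. the start state and a goal-like point encoded in the sensor space.
context = np.array([[0.0, 0.1],     # (time, observed value)
                    [1.0, 0.8]])    # goal-like conditioning point
t_query = np.linspace(0.0, 1.0, 5)

mean, std = decode(encode(context), t_query)
print("predicted mean:", np.round(mean, 3))
print("predicted std :", np.round(std, 3))
```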

"Human Sensorimotor Learning in Shared Control Systems"
In human-human collaboration, both parties learn and adapt their control policies based on each other’s behavior. The human-robot version is no different: the human will still learn, and if we wish, we can program the robot to learn and change its behavior over time. If managed properly, this co-adaptation mechanism can lead to higher task performance. However, there is no established general rule to ensure this. Furthermore, the human effort required for a high level of task performance must be considered: a high-performing collaborative system may eventually be obtained, but the learning/adaptation time needed by the human operator can be prohibitively long. Another dimension to consider is how much the agents are allowed to communicate. In some tasks, the human can be in charge and control when and how the robot’s collaboration is invoked; in other cases, the robot can indicate its plan or current state using a sensory modality not in use for the task at hand. In the extreme case, neither agent may be given any knowledge about the other.
One natural way of inducing effective human-robot collaboration is to adopt a human-in-the-loop setup in which the control signals of the agents are combined to generate the net motor output driving the plant. In such shared control systems, the goal is usually to combine the strengths of each partner so as to achieve a task performance higher than either partner could reach alone. Although shared control is a promising direction for effective human-robot collaboration, the robot and its control policy create a novel environment for the human operator, one that often requires significant human sensorimotor learning. Therefore, the human side of the shared control framework needs to be studied in detail if shared control is to become a widely adopted technology. In this talk, I will present our work in this direction, which investigates human sensorimotor learning in shared versus direct control of a robot arm, with no explicit communication, for balancing a sphere on a tray attached to the arm.
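To make the shared-control setup concrete, here is a minimal toy sketch (my own illustration, not the experimental setup from the talk) of blending a human command with a robot controller’s command to drive a single plant, here a 1-D ball-on-tray model. The blending weight, the PD gains, and the noisy stand-in for the human operator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def robot_policy(pos, vel, kp=4.0, kd=2.5):
    """PD controller trying to keep the ball at the tray center (pos = 0)."""
    return -kp * pos - kd * vel

def human_policy(pos, vel):
    """Stand-in for the human operator: a weaker, noisier corrective command."""
    return -2.0 * pos + rng.normal(scale=0.3)

def shared_command(u_human, u_robot, alpha=0.5):
    """Blend the two control signals into the net motor output for the plant."""
    return alpha * u_human + (1.0 - alpha) * u_robot

# Toy 1-D "ball on tray" dynamics: the blended command tilts the tray, which
# accelerates the ball; the task is to keep the ball near the center.
pos, vel, dt = 0.5, 0.0, 0.02
for step in range(500):
    u = shared_command(human_policy(pos, vel), robot_policy(pos, vel))
    acc = u - 0.1 * vel              # tilt-induced acceleration plus damping
    vel += acc * dt
    pos += vel * dt

print(f"final ball position: {pos:+.3f} (0 is the tray center)")
```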
