RESEARCH

Our lab aims to enable fast and flexible robot operations through perception–action systems. With visual servoing as our core theme, we study and design visual recognition and motion control in an integrated manner, pursuing research from both fundamental/theoretical and system-development/application perspectives, ranging from developing high-speed vision systems to understanding the principles of biological locomotion and interaction.

Visual servoing

Visual servoing is a control framework that couples visual information processing with a system's dynamics. Within this framework, we propose feature-extraction methods suited to feedback control and robot–camera system designs that are robust to calibration errors. More recently, we have also been exploring the use of generative AI to synthesize target images, eliminating the preparation of target images that has been a key challenge in image-based visual servoing (IBVS).
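
In IBVS, the controller drives image-feature errors to zero: a standard formulation commands a camera velocity v = -λ L⁺ (s - s*), where L is the interaction matrix of the observed features. The sketch below illustrates this textbook law for point features; the specific features, depths, and gain are illustrative assumptions, not the lab's published method.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized point feature (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,          -(1.0 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y,    -x * y,         -x],
    ])

def ibvs_velocity(features, targets, depths, lam=0.5):
    """Camera velocity command v = -lam * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()  # feature error s - s*
    return -lam * np.linalg.pinv(L) @ e                       # 6-DoF twist (v, omega)

# Example: two point features slightly off their targets, both at 1 m depth
v = ibvs_velocity(features=[(0.10, 0.00), (-0.10, 0.00)],
                  targets=[(0.12, 0.02), (-0.08, 0.02)],
                  depths=[1.0, 1.0])
```

With zero feature error the commanded velocity is zero, which is the fixed point the feedback loop converges to.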


3D measurement

Humans perceive the real world in three dimensions by integrating information from the left and right eyes. By reproducing this mechanism using projectors and cameras and acquiring 3D information from real scenes, we can enable three-dimensional visual servoing. In recent years, demand for high-precision 3D measurement has also been growing in manufacturing, for tasks such as quality inspection and shape evaluation.
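
For a rectified camera pair (or a calibrated projector–camera pair treated the same way), the two-view geometry described above reduces to triangulation by disparity, Z = f·b/d. A minimal sketch, where the focal length, baseline, and disparity values are illustrative assumptions:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * b / d for a rectified pair.

    f_px: focal length in pixels; baseline_m: camera separation in meters;
    disparity_px: horizontal offset of the matched point between views.
    """
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 12 cm baseline, 48 px disparity
Z = depth_from_disparity(f_px=800.0, baseline_m=0.12, disparity_px=48.0)  # -> 2.0 m
```

Larger disparity means a closer point, which is why measurement precision degrades with distance.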

In our lab, we study scan-based 3D measurement using a line scanner. A line scanner mounted at the tip of an articulated robot arm sweeps across an object, and the results of multiple scans are merged into a single 3D point cloud. We also aim to identify and reduce point-cloud distortions caused by deviations between the planned and executed trajectories by exploiting consistency within the acquired data, ultimately enabling accurate 3D measurement and its application to 3D visual servoing and quality inspection.
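
The merging step above can be sketched as transforming each scan profile by the arm's scanner pose (from forward kinematics) and stacking the results. The function name and the simple rigid-transform model are illustrative assumptions; a real pipeline also handles time synchronization, hand–eye calibration, and the trajectory-deviation correction discussed above.

```python
import numpy as np

def integrate_scans(profiles, poses):
    """Merge line-scan profiles into one point cloud in the robot base frame.

    profiles: list of (N_i, 3) arrays of points in the scanner frame
    poses:    list of (4, 4) homogeneous transforms base <- scanner,
              one per profile
    """
    cloud = []
    for P, T in zip(profiles, poses):
        Ph = np.hstack([P, np.ones((len(P), 1))])  # homogeneous coordinates
        cloud.append((T @ Ph.T).T[:, :3])          # express points in the base frame
    return np.vstack(cloud)

# Two one-point "profiles"; the second pose is translated 0.1 m along x
I = np.eye(4)
T = np.eye(4)
T[0, 3] = 0.1
cloud = integrate_scans([np.array([[0.0, 0.0, 0.5]]),
                         np.array([[0.0, 0.0, 0.5]])], [I, T])
```

Errors in the poses map directly into the cloud, which is exactly why deviations between planned and executed trajectories appear as point-cloud distortion.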

Robotic manipulation

Driven by labor shortages and the need to reduce manual work, there is strong demand to replace repetitive tasks with robots. However, variations in objects and uncertainty in physical contact can make even seemingly simple tasks difficult for robots.

In our lab, we couple a robot arm equipped with an anthropomorphic (human-like) hand to human hand motions, and use imitation learning to acquire basic skills from the resulting demonstrations. We then improve performance through reinforcement learning in simulation, and iteratively refine and retrain the policy through real-world deployment. Through this cycle, we aim for continuous and efficient performance improvement, automating tasks that have been difficult with conventional approaches while reducing setup and maintenance costs.
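
As an illustrative sketch of the imitation-learning step only: behavior cloning fits a policy to (state, action) pairs collected from demonstrations. The linear policy and synthetic data below are assumptions chosen for brevity; the actual pipeline combines learned policies with simulation RL and real-world retraining as described above.

```python
import numpy as np

def behavior_cloning(states, actions):
    """Fit a linear policy a = s @ W to demonstration pairs by least squares."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

# Synthetic demonstrations generated by a hidden linear "expert"
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 4))        # demonstrated states
W_true = rng.normal(size=(4, 2))     # the expert's (unknown) mapping
A = S @ W_true                       # demonstrated actions
W = behavior_cloning(S, A)           # cloned policy parameters
policy = lambda s: s @ W             # deploy: map new states to actions
```

With clean, fully observed demonstrations the least-squares fit recovers the expert exactly; in practice the cloned policy only seeds the subsequent reinforcement-learning refinement.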

Systems Science of Bio-Navigation (Bio-movement Informatics)

Some biological perception–action systems operate based on mechanisms that are fundamentally different from those of artificial systems. From a systems biology perspective that analyzes living organisms as systems, our lab aims to elucidate the mechanisms of cognition, behavior, memory, and interaction in cells and animals, as well as the navigation strategies of various organisms built upon these mechanisms.