Multi-modal monitoring subsystem (classification algorithm for security and safety state based on machine learning methods)

Summary

There are several reasons why it is essential for the robot to know where the human is while they are working together. First, if a cobot and a human are jointly assembling, for example, a gearbox, the robot has to know when it is its turn to add the next part. Second, the robot needs to observe and understand the human's motion during the task-learning process in order to abstract these demonstrations. And last, the robot should know where the human is for reasons of safety.

A continuous multi-modal monitoring system will be used to track the worker's motion and predict their intentions. Multi-modal sensor fusion based on stochastic filters such as the Kalman filter will be applied to fuse the available information from multiple viewpoints and sensors (a sketch of this fusion step is given below). Classifiers and machine learning methods will be integrated to foresee the worker's intention and most probable next motion, and methods such as deep learning will be applied to continuously learn typical, repeating movements. The developed stochastic filters can easily be extended with additional or different sensors, which makes the subsystem adaptable to new scenarios and applications.
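The summary does not specify the project's actual filter design, so the following is a minimal sketch of the fusion step: a linear Kalman filter with a constant-velocity motion model that fuses noisy 3D position measurements of the worker from two sensors (e.g. two camera viewpoints). All matrices, noise values, and the synthetic measurements are illustrative assumptions, not project parameters.

```python
import numpy as np

DT = 0.05  # assumed sensor sampling interval in seconds

# State x = [px, py, pz, vx, vy, vz]: worker position and velocity.
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)                    # position integrates velocity
Q = 0.01 * np.eye(6)                          # process noise (assumed)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # both sensors observe position only
R1 = 0.02 * np.eye(3)                         # measurement noise, sensor 1 (assumed)
R2 = 0.05 * np.eye(3)                         # measurement noise, sensor 2 (assumed)

def predict(x, P):
    """Propagate state and covariance one time step ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the prediction with one sensor measurement z."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ (z - H @ x), (np.eye(6) - K @ H) @ P

# Fusion loop on synthetic data: predict once per step, then update
# sequentially with each sensor's measurement of the same worker position.
rng = np.random.default_rng(0)
true_pos = np.array([0.5, 1.0, 1.2])
x, P = np.zeros(6), np.eye(6)
for _ in range(100):
    x, P = predict(x, P)
    x, P = update(x, P, true_pos + rng.normal(0.0, 0.14, 3), R1)
    x, P = update(x, P, true_pos + rng.normal(0.0, 0.22, 3), R2)
print("fused position estimate:", x[:3])
```

Because the update step is applied once per sensor, adding a further sensor only requires one more measurement model and one more update call, which is the extensibility property the summary refers to.

For the intention-prediction part, the summary names classifiers without fixing a concrete method, so the sketch below stands in with an off-the-shelf random forest over flattened windows of tracked hand positions; the feature layout, class labels, and training data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
N, WINDOW = 200, 10  # samples per class, positions per motion window

# Synthetic stand-in data: "reach" windows drift toward the part tray,
# "idle" windows stay in place (each window is 10 x 3 coordinates, flattened).
trend = np.linspace(0.0, 1.0, WINDOW * 3)
X = np.vstack([rng.normal(0.0, 0.1, (N, WINDOW * 3)) + trend,   # reach_for_part
               rng.normal(0.0, 0.1, (N, WINDOW * 3))])          # idle
y = np.array(["reach_for_part"] * N + ["idle"] * N)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a fresh motion window to foresee the most probable next motion.
new_window = rng.normal(0.0, 0.1, WINDOW * 3) + trend
print(clf.predict(new_window.reshape(1, -1)))  # expected: ['reach_for_part']
```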
