Fall control for humanoid robots through postural reconfiguration and adaptive compliance

Humanoid robots have entered a new era of development, driven by a strong desire to have robots collaborate closely with humans. Robots must navigate and move around human environments, interact with them, understand human needs, and communicate with people. The surge in the development of humanoid robotics can be attributed to a simple intuition: what better than a human-like system to meet these new human-centered challenges? Recently, humanoid robots have been considered (i) for rescue and intervention in disaster situations Kakiuchi et al. [2017]; (ii) as home service companions to assist frail and aging people; and (iii) as collaborative workers (i.e. as cobots termed "comanoids") in large-manufacturing assembly plants where wheeled and rail-ported robots cannot be used (e.g. aircraft and shipyards), among other applications. These three example applications have different social and economic impacts and different business models, but they also have different requirements in terms of hardware, perception capabilities, and dexterity. Humanoids are highly complex systems, and stabilizing them remains a major challenge. A rising number of works therefore study the balance and stabilization of humanoids, but few consider what a robot should do when its stabilizing control is undermined.

Humanoid robots are an imitation of humans. Even though human motor control is vastly more complex and advanced than anything the state of the art in humanoid robotics can offer to this day, humans themselves still fall on occasion Lortie et Rizzo [1999]. Moreover, the DARPA Robotics Challenge (DRC), which took place between 2012 and 2015, showed that falls occur at a very high rate once robots are no longer attached to safety cords Atkeson et al. [2015], Atkeson et al. [2016], even under close human supervision and in partial-autonomy conditions. The challenge also shed light on the fact that no fall controls were used when the robots were falling. The motivation for developing such algorithms is therefore evident in this context.

When considering falls, an important question is: how did the fall occur, and under which conditions? Indeed, existing control algorithms assume that the robot is pushed while in a vertical upright stationary configuration, and that the push is applied at an upper point of the torso in the sagittal or the coronal plane. For instance, Ogata et al. [2007] considered this kind of push on a walking robot to validate their control law. In general, however, the robustness of a fall control law should be tested and demonstrated with different kinds of pushes, applied at different points on the robot. The robot may also slip on a surface or trip over an object. These two situations have barely been considered in the literature, partly because experimentation in this domain is risky, costly, and dangerous. Another major difficulty in these general cases is obtaining information about the contact points that are lost. Closed-loop control with contact estimation is a strict necessity in these situations.

To satisfy Asimov's laws, we first need to define what a fall is. Humans can 'sense' when they are just about to fall; then, depending on their reaction speed, they move their body either to avoid someone and/or something during the fall or, if nothing is around, to break the fall. Thus, to activate the falling control algorithm, the robot needs to detect its fall. Once the robot has detected its own fall, it has roughly 0.7-1 s to actuate its body to avoid hurting someone and/or breaking something. Goswami et al. [2014] presented change-of-direction algorithms during falls, which are discussed in Section 1.4. Breaking falls has been studied by Fujiwara et al. [2007] using offline optimization tools, and by Ogata et al. [2008], who simplified the problem to obtain an online control algorithm. Note that in this chapter, because the cited authors do not share a common convention, equations are reproduced as published rather than transformed to match our nomenclature.

Humanoid robots are human-sized and human-shaped robots. In this thesis, we consider a robot with two distinct legs and two distinct arms, attached to the waist and the torso respectively. We also recall the three main anatomical planes of humans. The platform used for the experiments is HRP-4 from Kawada. It is a relatively 'fragile' robot that was not primarily designed to withstand impacts and falls in general. At the low level, a position-based controller commands the motors.

Because a fall is a matter of only a few milliseconds, it is generally accepted that the sooner the fall is detected, the more effective the control law will be. Fall detection must also be triggered only when the robot is in a situation in which it cannot avoid the fall. In other words, if the robot has been pushed but can still maintain its stability by performing a fall-avoidance algorithm (e.g. Ogata et al. [2007] and Sugihara [2008]), then a fall should not be detected and the fall-avoidance algorithm should be executed first. Therefore, the fall detection algorithm should be not only fast but also robust: it should not trigger false alarms by detecting falls that are not occurring.

As a first and very simple way to detect a fall, Lee et Goswami [2013] propose to consider the angle α between the lean line and the normal of the ground. The lean line is the line going through both the Center of Pressure (CoP) and the CoM.

The reported time delay before detection of the fall is about 30 ms for a push of 160 N. This kind of fall detection has the advantages of simplicity and quasi-instantaneous computation, but it also has a major limitation: the robot could be stable even with α > 15°. For example, the robot could be lying on a couch or leaning on one of its arms. Another possibility is a non-planar ground, which makes the computed normal vector wrong. In the latter case, one could instead take the angle between the gravity direction and the lean line, but likewise, if the robot is standing on an inclined ground, α would exceed 15°. Finally, increasing the limit angle would make the robot react more slowly, which is not advisable.
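The lean-line criterion above reduces to a single angle test. The following is a minimal sketch of that test; the function names, the example CoP/CoM values, and the 15° limit used here are illustrative choices for this sketch, not the authors' implementation.

```python
import numpy as np

def lean_angle_deg(cop: np.ndarray, com: np.ndarray, normal: np.ndarray) -> float:
    """Angle (degrees) between the CoP->CoM lean line and the ground normal."""
    lean = com - cop
    cos_a = np.dot(lean, normal) / (np.linalg.norm(lean) * np.linalg.norm(normal))
    # Clip guards against round-off pushing cos_a slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def is_falling(cop, com, normal, limit_deg: float = 15.0) -> bool:
    """Flag a fall when the lean angle exceeds the limit angle."""
    return lean_angle_deg(np.asarray(cop, float), np.asarray(com, float),
                          np.asarray(normal, float)) > limit_deg

# Upright stance: CoM almost directly above the CoP -> no fall detected.
print(is_falling([0, 0, 0], [0.02, 0, 0.8], [0, 0, 1]))  # False
# Strong forward lean: CoM far ahead of the CoP -> fall detected.
print(is_falling([0, 0, 0], [0.4, 0, 0.8], [0, 0, 1]))   # True
```

The second example makes the limitation discussed above concrete: the same α ≈ 26° would be measured for a robot deliberately leaning on its arms, so the angle alone cannot distinguish that stable posture from an actual fall.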

Machine learning has been proposed as an approach to fall detection, as in Ogata et al. [2007], Kalyanakrishnan et Goswami [2011] and, more recently, André et al. [2016] and Hofmann et al. [2016]. The idea proposed by Ogata et al. [2007] is to discriminate between two states after a disturbance: one in which the robot can recover its stability (using fall-avoidance control) and one in which the robot falls (the actual fall detection). An experiment was conducted with learning data obtained by applying pushes on the rear side of the robot, which means the robot cannot discriminate the two states under other kinds of disturbances. Still, the results of this experiment were positive, and the robot was able to perform both fall-avoidance control and fall control (here an UKEMI motion) depending on the magnitude of the disturbance. Kalyanakrishnan et Goswami [2011] considered two metrics. The first is the False Positive Rate (FPR) and the second is the lead time τlead. The former is the fraction of trajectories in which falling is predicted for a balanced state; the latter is the average value of tfallen − tpredict over trajectories that terminate in the fallen state, where tfallen is the time at which the fallen state is reached and tpredict the time at which falling is predicted. The training is done over several trajectories to obtain a low FPR together with a high lead time. André et al. [2016] used four data-mining algorithms over sensor data to find classifiers that identify failures. Their approach is based on the so-called Associative Skill Memories, which associate data together on the premise that stereotypical movements have characteristic sensor footprints. Hofmann et al. [2016] use a neural network with NAO's sensor outputs (especially gyroscopes) as training data.

Table of contents

Introduction
1 State of the art and motivations
1.1 Introduction
1.2 Humanoid robot presentation
1.3 Fall detection
1.3.1 Limit angle
1.3.2 Projection of the CoM
1.3.3 Abnormality Detection Method
1.3.4 Experiential Learning
1.3.5 Predicted ZMP
1.3.6 Conclusion
1.4 Avoiding human/high-value objects
1.4.1 Direction of the fall
1.4.2 Foot placement strategy
1.4.3 Inertia shaping
1.4.4 Partial inertia shaping
1.5 Minimizing damages
1.5.1 UKEMI technique
1.5.2 Online methods
1.5.3 Shock-reducing motion
1.5.4 Tripod fall
1.6 Compliant strategy in front of a wall
1.7 Discussion
2 Fall singularities
2.1 Introduction
2.2 Taxonomy of fall singularities
2.3 Singularity avoidance controller
2.3.1 Fall Direction
2.3.2 Front fall
2.3.3 Back fall
2.3.4 Side fall
2.4 Compliance
2.5 Simulation and experimentations
2.6 Conclusion and discussion
3 Cluttered environment and adaptive-QP
3.1 Introduction
3.2 Pre-impact phase
3.2.1 Search of landing points
3.2.2 Reshaping tasks
3.3 Post-impact phase
3.3.1 1-dof analysis
3.3.2 Multi-dof on-line solution
3.4 Simulations
3.5 Discussion
3.5.1 Impact detection
3.5.2 Gains stability
3.5.3 Torque-based controlled robots
3.5.4 Actuator dynamics
3.6 Conclusion
4 Polytope-based model predictive control for compliance
4.1 Reduced Dynamic Model
4.1.1 Actuation constraints
4.2 Distribution of gravity and linear momentum
4.3 CoM trajectory solution
4.4 Simulations
4.5 Conclusion and discussion
Conclusion
