Bianca Lento et al.

Current myoelectric control of multiple-joint arm prostheses is not satisfactory, and alternatives based on natural arm coordination are increasingly being explored. We recently showed that adding movement goals to shoulder information enabled Artificial Neural Networks (ANNs), trained on natural arm movements, to predict distal joints so accurately that transhumeral amputees could reach in Virtual Reality (VR) as well as with their intact arm. Yet this control relies on the position and orientation of the target object expressed in a reference frame attached to the shoulder, whereas gaze-guided computer vision might only provide them in a head-centered reference frame. Here, we designed two methods to perform the required head-to-shoulder transformation from incomplete, orientation-only data about the head and shoulder, such as could be obtained in real life with Inertial Measurement Units (IMUs). The first relied on an ANN trained offline to perform this transformation, while the second was based on a bioinspired space map with online adaptation. Experimental results on twelve intact-limb participants controlling a prosthesis avatar in VR showed persistent errors with the first method, which the second method quickly absorbed. The effectiveness of this second, bioinspired and adaptive method was also tested in VR with six participants with transhumeral limb loss, and a physical proof of concept was implemented on a teleoperated robotic platform with computer vision. These advances represent necessary steps toward the deployment of movement-based prosthesis control in real-life scenarios.
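To make the reference-frame problem concrete, here is a minimal sketch (not the paper's implementation) of the head-to-shoulder transformation. It assumes the target position comes from gaze-guided computer vision in the head frame and that IMUs provide head and shoulder orientations as world-frame quaternions; the shoulder-to-head translation `t_head_shoulder` is an assumed input precisely because orientation-only IMUs cannot measure it, which is the gap the two methods above must fill.

```python
# Minimal sketch of the head-to-shoulder frame transformation, assuming
# world-frame IMU quaternions; all names here are illustrative.
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_to_shoulder(p_head, q_head, q_shoulder, t_head_shoulder):
    """Re-express a target position from the head frame in the shoulder frame.

    p_head          : (3,) target position in the head-centered frame,
                      e.g. from gaze-guided computer vision.
    q_head          : (4,) head orientation in the world frame (x, y, z, w),
                      as an IMU would provide.
    q_shoulder      : (4,) shoulder orientation in the world frame.
    t_head_shoulder : (3,) world-frame translation from the shoulder origin
                      to the head origin; orientation-only IMUs do NOT
                      measure this, hence the need to estimate it.
    """
    R_head = R.from_quat(q_head)
    R_shoulder = R.from_quat(q_shoulder)
    p_world = R_head.apply(p_head) + t_head_shoulder  # head frame -> world, shoulder-anchored
    return R_shoulder.inv().apply(p_world)            # world -> shoulder frame

# Example: identity orientations, head assumed 30 cm above the shoulder.
p_s = head_to_shoulder(np.array([0.4, 0.0, -0.2]),
                       np.array([0.0, 0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 0.0, 1.0]),
                       np.array([0.0, 0.0, 0.3]))
```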
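The abstract does not detail the bioinspired space map, so the following is only a schematic stand-in for the idea of online adaptation it conveys: after each reach, the estimate of the missing offset is nudged in proportion to the observed endpoint error. The delta-rule update and the learning rate are assumptions made for illustration, not the paper's method.

```python
# Schematic stand-in for online adaptation of the unknown offset (NOT the
# paper's bioinspired space map): a simple error-driven delta rule.
import numpy as np

class OffsetAdapter:
    def __init__(self, learning_rate=0.2):
        self.t_estimate = np.zeros(3)       # initial guess of shoulder->head offset
        self.learning_rate = learning_rate  # illustrative value, not from the paper

    def update(self, reach_error_world):
        """reach_error_world: target minus achieved hand position (world frame)."""
        # A consistent endpoint error suggests a biased offset estimate;
        # absorb a fraction of it on every trial.
        self.t_estimate += self.learning_rate * reach_error_world
        return self.t_estimate
```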