
Increasing the independence of individuals with tetraplegia is a challenging task. One potential solution is the use of assistive robotics, specifically an assistive robotic manipulator (ARM), for solving varied tasks in personal and remote space. Enabling remote task performance has a positive influence on the user's independence, allowing the user to be self-sufficient even when lying in bed. However, control interfaces suitable for severely disabled individuals are lacking. The aim of this paper is twofold: first, to enable remote tongue-based control of an ARM, and second, to compare the effect of semi-automation on the control of the ARM. Ten able-bodied individuals participated in a two-day experiment in which they were asked to drive a wheelchair-mounted ARM away from themselves and out of sight. Thereafter, they had to pick up either a strawberry or a bottle from a table. All participants successfully finished three trials for each of three control methods: 1) manual control (MA), 2) adaptive-level semi-automation (SA), and 3) fixed-level semi-automation (FA). When grasping the strawberry, FA yielded a significant decrease in gripping time and number of issued commands compared with MA. When grasping the bottle, SA showed a significant reduction in gripping time and number of issued commands compared with MA. This paper is a step in the direction of providing severely paralyzed individuals with a way to increase their independence and overall quality of life.
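The abstract does not describe how the three control methods were implemented. Purely as an illustration, a minimal Python sketch of how a dispatcher might route user commands under manual, adaptive, and fixed semi-automation modes could look as follows; every name, the `plan_grasp` helper, and the confidence threshold are hypothetical, not taken from the paper:

```python
from enum import Enum, auto

class ControlMode(Enum):
    MANUAL = auto()         # MA: the user issues every motion command
    ADAPTIVE_SEMI = auto()  # SA: the level of automation adapts to context
    FIXED_SEMI = auto()     # FA: a constant, fixed level of automation

def plan_grasp(target_pose):
    """Hypothetical planner returning automated motion steps toward the target."""
    return ["approach", "align_gripper", "close_gripper"]

def handle_command(mode, user_cmd, target_pose, confidence=0.0):
    """Route one tongue-issued command according to the active control mode.

    Entirely illustrative: the paper does not specify this logic, and the
    confidence threshold below is an invented placeholder.
    """
    if mode is ControlMode.MANUAL:
        return [user_cmd]                                # no automated steps
    if mode is ControlMode.FIXED_SEMI:
        return [user_cmd] + plan_grasp(target_pose)[:1]  # fixed assist step
    # ADAPTIVE_SEMI: automate more steps when the system is confident
    n_assist = 3 if confidence > 0.8 else 1
    return [user_cmd] + plan_grasp(target_pose)[:n_assist]
```

Under these assumptions, `handle_command(ControlMode.ADAPTIVE_SEMI, "move_forward", pose, confidence=0.9)` would return the user's command followed by three automated grasp steps, while the same call in MA mode would return the user's command alone.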
Objective. Individuals with Amyotrophic lateral sclerosis (ALS) progressively lose muscle functionality and therefore experience both an increased need for assistive robot technologies and a reduced ability to control such robots. While these individuals may use high-performing control systems, such as tongue control, at the beginning of their disease progression, they will eventually be restricted to a lower-performing control system, such as brain control. However, an adaptive multimodal control interface framework combining tongue control and noninvasive brain control can exploit the residual tongue functionality to optimize control performance throughout the disease progression. Approach. To investigate this concept, a new adaptive tongue-brain multimodal control framework for manual and continuous control of a 7-degree-of-freedom robot arm is developed, building on a prior validation study. The new framework focuses on improved visual feedback, which individuals with ALS specifically requested in that study, and consists of four subsystems: the first uses full tongue control; the second and third use hybrid tongue and noninvasive brain control, with a decreasing need for tongue functionality; and the fourth uses noninvasive brain control only. The framework was evaluated with three participants with ALS. Main results. All participants were successful with all subsystems. One participant could no longer efficiently use the full tongue control interface but achieved good results with the third and fourth subsystems. The second participant achieved significantly better results with the subsystems that included tongue control, demonstrating the advantage of exploiting residual tongue functionality. The third participant performed well with all subsystems, showing the ideal performance progression across subsystems. Moreover, all participants, including the two with good tongue control, chose a multimodal control interface as their favorite. Significance. The results indicate that individuals with ALS prefer interfaces that combine multiple control modalities.
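The four subsystems are only named in the abstract, not specified. As a purely illustrative sketch, a selector that assigns a subsystem based on how much control the tongue interface can still carry might be structured as below; the numeric weights and the `tongue_reliability` score are invented assumptions, not quantities reported in the paper:

```python
from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    tongue_weight: float  # share of control carried by the tongue interface
    brain_weight: float   # share carried by the noninvasive brain interface

# Four subsystems with a decreasing need for tongue functionality,
# mirroring the progression described in the abstract (weights hypothetical).
SUBSYSTEMS = [
    Subsystem("full_tongue", 1.0, 0.0),
    Subsystem("hybrid_high", 0.7, 0.3),
    Subsystem("hybrid_low",  0.3, 0.7),
    Subsystem("full_brain",  0.0, 1.0),
]

def select_subsystem(tongue_reliability: float) -> Subsystem:
    """Pick the subsystem whose tongue demand the user can still meet.

    `tongue_reliability` (0..1) is a hypothetical measure of residual
    tongue functionality; the paper does not define such a score.
    """
    for sub in SUBSYSTEMS:
        if tongue_reliability >= sub.tongue_weight:
            return sub
    return SUBSYSTEMS[-1]  # fall back to brain-only control
```

The point of the sketch is the adaptation path: as the hypothetical reliability score declines with disease progression, the selector walks from full tongue control through the two hybrid subsystems to brain-only control, matching the progression the abstract describes.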