There are over 4.3 million users of powered wheelchairs in the US alone. It has been reported that 10% of powered wheelchair users experience serious difficulties with the standard operation of their wheelchair, in particular with steering and maneuvering tasks. Furthermore, many other individuals require mobility assistance yet also have conditions, such as visual or cognitive impairments, that hamper their ability to safely operate a powered wheelchair. The development of an intelligent powered wheelchair (IPW) offers a promising technology for increasing the independence of all these individuals.
Various prototypes of IPWs have been developed over the years, featuring a variety of robotic technologies. In this section, we review some of the most recent results, and refer the reader to an excellent overview for a more detailed survey. One of the primary challenges in building an IPW is acquiring sufficient information about the surrounding environment. In terms of onboard navigation sensors, most systems rely on standard distance sensors, such as sonar, IR, laser range-finding, or binocular vision, for mapping, localization and obstacle avoidance [3–6]. Laser range-finders, which offer the best accuracy in terms of range measurements, were relatively rare until recently due to their high cost and large form factor. However, the technology has been improving in this area, making them a more viable option.
Many IPW systems aim to offer autonomous hands-free navigation services. To achieve this, a variety of navigation modes have been considered, including reactive control, autonomous maneuver execution, and autonomous point-to-point navigation. In the reactive navigation mode, the user is responsible for motion planning and execution with the help of a collision avoidance system [7–9]. This mode does not require knowledge of the environment prior to navigation, and is suitable for users who are able to plan their routes and manipulate the input devices. In the autonomous maneuver execution mode, a set of navigation maneuvers is designed for specific navigation tasks [10–12]: doorway traversal [13–15], corridor traversal [13, 15, 16], wall following [16, 17], automatic docking [4, 18] and person following. In the autonomous point-to-point navigation mode, the user selects a destination pose in the map and supervises the navigation process. Given the destination pose, the navigation system is responsible for platform localization, path planning and plan execution with local obstacle avoidance [20–24]. Safe navigation has also been achieved through artificial potential fields or obstacle density histograms. In general, the full literature on robot navigation could be leveraged for this component, though it is necessary to respect the constraints imposed by the domain. For example, classical methods based on pose error tracking often do not lead to smooth motion; Mazo, as well as Gulati and Kuipers, have proposed methods that tend to produce graceful motions. In the work presented below, we focus on the first two levels of capability (reactive control and autonomous maneuver execution), which are sufficient for deployment in the Wheelchair Skills Test environment. The third level is currently implemented, but was not validated in the experiments described below, and is therefore not described here.
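To make the artificial-potential-field idea mentioned above concrete, the following minimal sketch computes the combined attractive and repulsive force acting on the platform at one time step. It is illustrative only: the function name, gains, and influence radius are our own assumptions, not taken from any of the cited systems.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Force on the platform under a standard artificial potential field.

    pos, goal: (x, y) tuples; obstacles: list of (x, y) tuples.
    k_att, k_rep: attractive/repulsive gains; d0: obstacle influence radius.
    Returns the resulting force vector (fx, fy).
    """
    # Attractive component: quadratic well centered on the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive component: each obstacle within radius d0 pushes the
    # platform directly away, with magnitude growing as distance shrinks.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

The platform then moves a small step along this force vector; the well-known limitation of such methods, relevant in the narrow spaces discussed below, is that opposing attractive and repulsive terms can cancel and trap the platform in a local minimum.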
A variety of input methods have been incorporated onboard IPWs, from the traditional joystick or single-switch interface, to speech recognition, and most recently brain-computer interfaces. In his survey, Simpson argued that the onboard computer system (presumably equipped with AI components) could provide a form of safety net for input methods with low bandwidth or poor accuracy. Efforts have been divided into two main directions. The first focuses on using standard joystick input and enriching this information with embedded intelligent systems to improve the safety and efficiency of navigation [12, 29]. The second leverages non-traditional control interfaces, such as voice activation and brain-computer interfaces, to obtain high-level commands that are then translated into fine motor control by the onboard navigation system [30, 31]. Our work falls primarily in this second category.
While many intelligent wheelchair systems have been developed, very few have been the subject of substantial validation with the target population. The situation has improved in the last few years, with a number of systems undergoing formal testing. However, the choice of task domains and evaluation metrics still primarily comes from the robotics field (e.g. quantifying navigation performance), rather than from the rehabilitation domain (e.g. quantifying skills and improved functional outcomes). A review of commonly used metrics is presented by Urdiales et al., whereas a detailed evaluation procedure for an intelligent wheelchair is presented by Montesano et al.
The primary contribution of this paper is to present a fully integrated IPW which has been demonstrated to achieve flexible and robust performance with the target population in a clinically relevant environment. Unlike many of its predecessors, the robotic system presented here is designed to fit on any of a number of commercial powered wheelchair platforms. It provides rich sensing and communication interfaces to ensure robust operation and control in a variety of environmental conditions, in particular in constrained spaces that are especially challenging for standard steering. Our robotic system is also designed to be used by individuals with varying impairments.
The main algorithmic components developed for the autonomous control of the wheelchair include an autonomous navigation system and a voice-activated communication system. Both of these components feature state-of-the-art robotic techniques, deployed in a challenging indoor experimental context. The navigation system is particularly proficient at handling autonomous navigation in narrow spaces, such as passing through doorways, aligning to a wall, and parking in a corner; these are the types of maneuvers that are particularly challenging for many wheelchair (WC) users. The interaction system is substantially more flexible than previous such interfaces, allowing robust speech-based commands using full-vocabulary natural language. We leverage several machine learning techniques to achieve robust speech understanding, including grammatical parsing to reduce the observation space, Bayesian inference to track the most likely commands, and planning in Markov decision processes to select appropriate responses, or clarification queries when necessary. The combination of these methods is shown to produce a reliable, flexible, and congenial user interface.
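As an illustration of the Bayesian command tracking and clarification logic described above, the sketch below maintains a distribution over candidate commands and falls back to a clarification query when the posterior is too uncertain. The names, the flat likelihood model, and the threshold value are hypothetical simplifications, not the actual interface implementation.

```python
def update_command_belief(belief, likelihoods):
    """One Bayesian update over candidate user commands.

    belief: dict mapping command -> prior probability.
    likelihoods: dict mapping command -> P(observed utterance | command).
    Returns the normalized posterior dict.
    """
    posterior = {c: belief[c] * likelihoods.get(c, 1e-9) for c in belief}
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

def select_action(belief, exec_threshold=0.8):
    """Execute the most likely command, or ask for clarification when unsure.

    A stand-in for the MDP policy: the real system plans responses and
    clarification queries; here a fixed confidence threshold decides.
    """
    cmd, p = max(belief.items(), key=lambda kv: kv[1])
    return ("execute", cmd) if p >= exec_threshold else ("clarify", cmd)
```

For example, starting from a uniform prior over two commands and observing an utterance that strongly favors one of them drives the posterior above the threshold, so the command is executed rather than queried.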
Finally, one of the major contributions of the work reported in this paper is the method and results of a validated evaluation of individuals' performance with a wheelchair. We adopt an experimental procedure based on the Wheelchair Skills Test, which requires completion of a varied collection of skills relevant to the everyday operation of a powered wheelchair. Our experiments involved eight able-bodied and nine disabled subjects, 32 completed Robotics Wheelchair Skills Test (RWST) sessions, 25 total hours of testing, and 9 kilometers of total running distance.