1 Objectives

A cybernetic hand is by definition connected by a neural interface to a human and thus makes it possible to exploit sensorimotor mechanisms for controlling hand actions. While the ultimate goal of the cybernetic prosthetic hand presented in this paper (CyberHand) is to allow human amputees dexterous sensorimotor control (Fig. 1)—via a neural interface that provides efferent commands to control the hand and sensory feedback from artificial sensors—this paper focuses on the bio-inspired design of this hand.

Fig. 1

Scheme of a cybernetic hand system. The standalone, modular biomechatronic hand consists of the mechanisms, the actuators, and the sensors. It is controlled by means of a low-level control loop primarily responsible for grasp stability and a high-level control system loop responsible for selecting the grasp configuration and force level requested by the user. The neural interface is connected to the hand through a telemetric link and is responsible for exchanging signals to encode sensor information retrieved from the hand and to decode efferent commands to control the hand action

Within the foreseeable future, neural interfaces will allow only a limited number of channels for exchanging efferent and afferent signals with the central nervous system (CNS) of a human. The cybernetic hand presented in this paper overcomes this limitation by its mechanical design that allows hand preshaping and specific grasping forces on the basis of only a few efferent control signals. Moreover, the integrated design makes it possible to provide task-specific feedback by utilizing a few sensory channels.

The mechanical design of the hand is based on (a) biomechanical modeling of the natural hand, (b) optimization of hand kinematics to enable thumb opposition and humanlike grasps with appropriate force distributions among fingers (Kapandji 1982), and (c) underactuated mechanisms that allow the hand to passively adapt to various object shapes without any active control required by the user. Underactuated mechanisms require few control signals but can still endow the hand with many degrees of freedom (DoFs).

The action of the cybernetic hand described in this paper is controlled by a control architecture composed of two main parts: a low-level and a high-level control. The low-level control is responsible for grasp stability, whereas the high-level control was designed to interpret the subject’s intention and launch appropriate action patterns. Both control levels are crucially dependent on a bio-inspired sensory system comprising two main parts: a proprioceptive and an exteroceptive sensory subsystem. The first provides useful information about hand kinematics and internal forces produced in the hand transmission, and the second monitors and measures the interaction between the grasped object and the hand, and between the object and the environment. Hand operation was thus designed to be controlled as a finite-state machine where the transitions between the different states are identified and detected as crucial events by the sensory system.

The cybernetic hand functionalities include reaching, grasping, exploring, some manipulation, and gesture expression. Each of these functionalities can be segmented according to the kinematics of the hand mechanisms and the dynamical properties needed to provide smooth humanlike movements and grasp stability against disturbances. A set of hand action primitives was defined according to the desired tasks and performance. The design allows a subject to voluntarily select a specific hand primitive on the basis of, for instance, visual information or the expected object properties. Once the subject has sent the appropriate command to the neural interface, the high-level control informs the local control about the desired hand primitive and the level of interaction force to be applied to the object. The low-level control in turn exploits information from the sensory system to dynamically adjust the hand configuration and force distribution among fingers, in order to provide grasp stability without requiring intervention by the subject.
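To make this division of labor concrete, the sketch below illustrates, in schematic Python, how a few decoded efferent signals could be mapped onto a grasp primitive and a force level for the low-level control to execute; the names, the mapping, and the normalization are our illustrative assumptions, not the CyberHand implementation.

    # Illustrative sketch (not the CyberHand firmware) of the high-level
    # command flow: a small set of decoded user intentions selects a grasp
    # primitive and a force level. GraspPrimitive and HandCommand are
    # hypothetical names.
    from dataclasses import dataclass
    from enum import Enum, auto

    class GraspPrimitive(Enum):
        LATERAL = auto()
        CYLINDRICAL = auto()
        SPHERICAL = auto()
        TRIPOD = auto()

    @dataclass
    class HandCommand:
        primitive: GraspPrimitive   # decoded from the user's efferent signal
        force_level: float          # desired global grasp force, normalized 0..1

    def high_level_decode(user_signal: int, user_force: float) -> HandCommand:
        """Map a few decoded user intentions onto grasp primitives."""
        primitive = list(GraspPrimitive)[user_signal % len(GraspPrimitive)]
        return HandCommand(primitive, max(0.0, min(1.0, user_force)))

    # The low-level controller would then preshape the hand and regulate
    # forces autonomously, without further intervention by the subject.
    command = high_level_decode(user_signal=2, user_force=0.6)
    print(command)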

The work described in this paper is an interesting example of concurrent design of the diverse modules necessary to build a cybernetic hand: the mechanism, the sensory system, and the control systems.

While the ultimate goal is to connect the CyberHand described in this paper via a neural interface implanted in peripheral nerves (see Micera et al. 2006), the modular design of the hand makes it possible to exploit and validate not only different neural interfaces but also noninvasive indirect interfaces by simply changing the high-level controller.

2 State of the art

The problem of functional replacement of an upper limb is an ancient one: historically, humans have replaced a hand lost in war or accidents with a prosthesis for cosmetic, vocational, or personal autonomy reasons. The interest of the user community is primarily task-oriented, that is, patients express their need to replace their missing limbs so as to be able to autonomously perform their own activities of daily living (ADLs) (Atkins et al. 1996).

For both practical and technological reasons, the engineering design of biomechatronic hands requires a restricted requirement list. Compare, for instance, the task of grasping an avocado with that of determining whether it is ripe. It is thus necessary to define priorities among the different requirements and to address separately those that are considered the most important. Yet creating a universal priority list from which to generalize design rules is challenging, because individual subjects may have very personal preferences and expectations for cosmetic appearance, functionality, or reliability depending on their psychological, cultural, and geographical background (Carrozza et al. 2004).

Remarkably, surveys on the use of such artificial hands reveal that 30 to 50% of upper extremity amputees do not use their prosthetic hand regularly (Atkins et al. 1996; Silcox et al. 1993). The main reasons for this are low functionality, poor cosmetic appearance, and low controllability (Carrozza et al. 2002). In short, in addition to cosmetic shortcomings, many subjects find it impossible to perform many grasping tasks, and the control system is so unnatural that the hand remains an external device rather than a part of the subject’s body.

In recent decades, much research effort has been focused on the development of more functional artificial hands. Robotic knowledge has been applied to improve some of the basic components of prosthetic hands, such as the overall dexterity, the electromyographic (EMG) recording and classification systems, and the sensing ability of the device. The development of an artificial hand that can be used as a prosthesis has been reported by Hokkaido University (Ishikawa et al. 2000). This prosthesis is endowed with a novel mechanism—an adjustable power transmission mechanism—by which the course of the force transmission wires changes depending on the size of the load. This allows the finger to move faster under light loads and slower but with more torque under heavy loads. Research at the Forschungszentrum Karlsruhe (Pylatiuk et al. 2004b) has concentrated on the mechatronic development of a prosthetic hand that combines a high number of grasping patterns with low weight, good compliance, and good cosmetic appearance. Thanks to eight small-sized flexible fluidic actuators, including one for thumb opposition, the Karlsruhe hand is able to achieve different prehension patterns such as lateral and cylindrical grasps.

Researchers at the University of Southampton (Light et al. 2002) have developed a new ultralight limb that mimics movements in real hands with six sets of motors and gears so that each of the five digits can move independently and the thumb can change its opposition plane. This hand weighs no more than 400 g. Their four-fingered Southampton-Remedi Hand (Light and Chappell 2000) has motors attached to a gearbox that in turn is attached to the aluminum fingers, which can deliver up to 12 N of force.

Integrated sensorization of commercial prosthetic hands is still absent or very poor. Such hands are typically endowed with simple force sensors on the thumb (or rather on the transmission system), and these sensors are used for regulating the grasping force and for protecting the device against potentially unsafe conditions. For instance, piezoelectric sensors can be integrated in the fingertips, and thick film force and slip sensors have been investigated for the integration of multifunctional tactile sensors (slippage, force, and temperature) at the prosthetic fingertips (Dario et al. 1996; Cranny et al. 2005), but it is evident that the touch sensing abilities of commercial hands remain primitive. Likewise, except for visual information and subtle clues such as the sound of the motor and transmission, sensory feedback to the user is poor and needs to be improved (Lundborg and Rosen 2001).

The most advanced technology in clinical practice for controlling prosthetic hands is based on myoelectric control. It exploits surface EMG signals generated by voluntary contractions of residual muscles in the patient’s arm. Using such signals is a simple and effective approach to obtaining commands for controlling active prosthetic hands (Nader 1990; for a review, see Zecca et al. 2002), and it is used, for instance, in Otto Bock hands. Surface electrodes are simple to manage, noninvasive, and unobtrusive. However, it is important to point out that with current myoelectric hands it is very difficult to control more than one or at most two DoFs.

Yet, in parallel with the advances in research on innovative and more functional anthropomorphic artificial hands, powerful signal processing algorithms for EMG classification have been investigated to provide additional prosthetic actuation commands. Although interesting results have been achieved by several groups extracting motor information (Micera et al. 1999; Englehart and Hudgins 2003; Reischl et al. 2004; Ajiboye and Weir 2005; Chan and Englehart 2005; Huang et al. 2005), extracting users’ voluntary intentions is still difficult and hampered by several limitations. Moreover, the decoding system that transmits commands to the prosthetic hand requires user training because the muscles that generate the signals are typically not homologous to those used during natural hand movements. This puts an undesired cognitive burden on the subject (Kyberd et al. 1995).

In the recent past, several strategies to use invasive and noninvasive interfaces with the CNS and peripheral nervous system (PNS) have been implemented.

Central invasive neural interfaces have been used by many groups (Carmena et al. 2003; Serruya et al. 2002; Musallam et al. 2004; Taylor et al. 2002; Hochberg et al. 2006; Schwartz 2004) to extract motor information related to reaching movements. Even though promising results have been achieved, extracting reliable information on the trajectories of many different joints remains very challenging, especially for such complex tasks as dexterous manipulation. For this reason, the possibility of extracting high-level information (“grasping primitives,” Micera et al. 2005) from the F5 premotor cortex (where this kind of information seems to be coded) is very interesting for the control of multidigit hand prostheses. Using such interfaces, a user might eventually select a “grasping task” (e.g., a palmar or lateral grasp) and send this information to a low-level controller that in turn carries out the selected task by moving the appropriate joints. Such algorithms may allow the modulation of the “shared control” between the neural interface (i.e., the user’s intention) and the low-level control for different robotic devices and different subjects.

Central noninvasive neural interfaces have also been used to extract voluntary information (Nielsen et al. 2006; Jackson et al. 2006; Pfurtscheller et al. 2006; Wolpaw et al. 2002) and to control hand prostheses. Their usability seems, however, to be constrained by the limited amount of information that can be extracted.

Finally, PNS invasive interfaces can be used to discriminate different neural signals (Navarro et al. 2005; Dhillon et al. 2004; Citi et al. 2006). Extracting “global” information related to grasping tasks seems more feasible than extracting information related to the detailed kinematics and dynamics of the task. In particular, the combination of multisite intraneural peripheral interfaces and advanced processing techniques seems able to increase the amount of information that can be extracted (Citi et al. 2006). PNS neural interfaces may be a good short-term solution for achieving an implant with good ethical acceptability that allows extraction of more useful information than EMG signals and can deliver sensory feedback to the user (Dhillon et al. 2005).

Technology and research have also moved toward the development of implantable telemetry systems for the recording of electroneurographic signals (ENG; Donaldson et al. 2003) and stimulation of peripheral nerves (Sacristán et al. 2006), as well as toward the development of novel signal processing algorithms (Micera et al. 2001; Cavallaro et al. 2003; Tesfayesus and Durand 2006).

While large efforts are currently directed at establishing functional neural connections, efforts are also being focused on the application of noninvasive systems. The main reason for this is the “bottleneck” of the cybernetic hand system: the difficulty of realizing a neuroelectronic interface capable of creating intimate contact with the nerves and of restoring the high number of functional connections between the external system and the peripheral nervous system of the patient.

Attempts to develop noninvasive afferent stimulation include vibrotactile and electrotactile stimulation methods (Riso et al. 1991; Lundborg and Rosen 2001; Sasaki et al. 2002; Pylatiuk et al. 2004a). Vibrotactile stimulation is defined as tactile sensation evoked by mechanical vibration of the skin, typically at frequencies of 10 to 500 Hz, whereas with electrotactile stimulation a local electric current is passed through the skin (Kaczmarek et al. 1991). As a result of this research, guidelines and engineering parameters for optimum stimulation were investigated, and some preliminary “integrated hand systems” have been demonstrated. Indeed, Forschungszentrum Karlsruhe has recently equipped their hand with a vibrotactile sensory system (Pylatiuk et al. 2004a) together with contact sensors at the fingertips, while the Hokkaido hand can be used with an electrotactile sensory system (Arieta et al. 2005).

Despite extensive research efforts during the last decade in the various prosthetic fields of interest—mechanics, electronics, control, sensors, user-intention extraction, user stimulation, telemetry systems, etc.—there is currently no “real” prototype of an advanced prosthetic hand that integrates advanced user interfaces (whether EMG or ENG based) with advanced hand mechanisms and sensors. Despite all efforts, the Otto Bock design is still the best and most reliable design commercially available off the shelf. The lack of suitably advanced integrated technological solutions, combined with many unresolved clinical and practical issues associated with the use of prosthetic devices, represents a constant motivation for researchers to focus on the design and development of advanced cybernetic prostheses, with the ultimate goal of obtaining a functional substitution of the hand that can be evaluated in clinical trials.

From the analysis of the state of the art it is clear that there are a number of unresolved fundamental issues:

  • Improving dexterity, that is, achieving high operation frequencies and manipulation accuracy;

  • Providing patients with exteroceptive and proprioceptive information;

  • Finding ways to embed sensors in an artificial skin that is cosmetically acceptable and mechanically compliant;

  • Developing shared control algorithms that make it possible to identify and act according to voluntary motor commands.

3 Design of the hand

3.1 Rationale for the design

Developing the ideal hand can be considered a pure engineering problem where the objective is well defined, viz., to build a machine that imitates the human hand. When considering the design methodology to pursue this objective, it is clear, however, that available engineering approaches are impractical and require extreme simplifications of the system. First, the available components required to replace natural hand modules (actuators, mechanosensors, batteries, artificial skin, mechanisms, etc.) are largely unable to match the performance and properties of their natural counterparts (muscles, mechanoreceptors, anatomic functionality of natural joints, skin, etc.). Second, the natural hand cannot be described as a machine per se but must be understood and modeled as an integral part of the body and the motor control systems. Perception is, for instance, fundamental for exploratory tasks, and many actions are controlled by the nervous system with limited attention and cognitive involvement of the subject.

The first problem—replacing components of the natural hand—motivates us to apply the best available biomimetic technologies to improve the overall quality and properties of the artificial hand. Providing enough functionality to imitate the human hand requires a system approach, that is, actuators, sensors, and mechanisms need to be defined as parts of an integrated system. The mechanisms, the actuators, and the sensorization all, however, require complex design, fabrication, and dedicated technology. Thus, the design criteria must rely on biomechanical information about the natural hand and its control to allow technology tradeoffs to achieve optimal behavior given requirements of overall size, weight, and cosmetic appearance. Specifically, the design of the sensory system should take into account the desired functionality of the artificial hand, that is, it should provide useful information for hand operation.

The second problem—that the human hand is an integral part of a body—requires a coordinated approach of neuroscience and robotics knowledge to describe and model the natural hand connected to the brain and then imitate these connections in the artificial system. The key component of the system is the interface between the mechatronic system of the hand and the nervous system of the subject. The main design goal of this interface is to enable perception and action in a “natural” way. The interface should receive efferent commands from the subject to control hand movements, obviating the discomfort of current EMG-based prosthesis control, and should encode the signals from the artificial sensors and provide afferent stimulation to the subject. The mechatronic design of our hand system was influenced by the fact that currently available neural interfaces can provide only a few channels for exchanging signals along efferent or afferent pathways. Therefore, the CyberHand was designed to connect to the human brain through a relatively complex interface that records and interprets signals in a few channels. Specifically, it was designed to be controllable by efferent signals expressing the subject’s intention and to provide afferent feedback to the subject by transmitting appropriately encoded signals from artificial sensors.

Natural hand operations can be grouped in a number of possible tasks or functionalities that, combined or performed in sequence, are fundamental for performing ADLs: reaching, preshaping, grasping, manipulation, and exploration. Accordingly, the ability of a subject to perform desired ADLs can be used for benchmarking the hand design.

In general, to be successful, hand prostheses must fulfill the following ideal requirements:

  1. Functionality: The prosthetic device should perform stable grasps and manipulation for performing vocational operations and ADLs.

  2. Dexterity: The hand should be dexterous (dexterity increases with the number of degrees of freedom, with operation frequency, and with the accuracy of movement control; see Cutkosky 1985 and Akella et al. 1991).

  3. Control: The prosthesis should restore the motor and motor-related sensory capabilities of the human hand. Accordingly, the user’s intention needs to be interpreted in real time. Proprioception and exteroception abilities must be provided to the user by means of an appropriate artificial sensorimotor system connected to the brain.

  4. Cosmetics: The prosthesis should have the static and dynamic appearance of the human hand.

The ultimate goal of a biomechatronic design of a hand is to replicate the “machine” of the natural hand: first its specifications and then its functionalities and performance. Accordingly, the design methodology proceeded from an analysis of the performance of the natural hand. The specifications of the natural hand correspond to its biological and physiological characteristics (Table 1).

Table 1 Natural hand performance (Eberhart et al. 1954; Keller et al. 1947) and the actual performance of the current version of the CyberHand

From a biomechatronic point of view, the most important parameters for hand design are the number of DoFs (22), the force range (from proportional control of a few newtons in fine manipulation to a power grasp of 500 N), and the number and variety of mechanoreceptors together with their distribution and site-dependent density.

Even a casual look at this list of specifications makes it clear that robotic and biomechatronic science and technology are still far from fulfilling these challenging requirements, in particular, the large range of controlled forces and the high density of mechanoreceptors embedded in the skin. Nevertheless, the list of specifications is important when analyzing technology tradeoffs among different biomechatronic components and represents the desired performance of the “ultimate” cybernetic hand.

3.2 Mechanism design

The CyberHand was designed as a prototype for testing and evaluating neural interfaces, control algorithms, and sensory feedback protocols. It has 16 DoFs and 6 motors (that is, 6 degrees of mobility, DoMs): each finger of the CyberHand has 3 DoFs and 1 DoM (flexion/extension), and the thumb has, in addition, 1 DoM for positioning (Fig. 2a,b). The CyberHand is comparable in size to a human hand (Fig. 3b) and can generate many different grasps (Fig. 3a), but as described below, its control is currently limited to a subset of functional grasps: lateral pinch, cylindrical and spherical grasps, and the tripod grasp (Fig. 3c; cf. Table 1).

Fig. 2

Actuation of the CyberHand. Each red arrow corresponds to the proximal joint acted on by an individual actuator. Blue arrows represent rotations carried out by actuators also acting on more proximal joints. a Existing CyberHand implementation. b Cables, pulleys, and extensor spring of the CyberHand index finger. Note that the metacarpophalangeal (MCP), proximal (PIP), and distal interphalangeal (DIP) joints share a common flexor tendon. c Improved actuation adding individual actuation of the MCP joints, abduction/adduction of the index, ring, and little fingers, and palm hollowing

Fig. 3

CyberHand postures. a Examples of various postures that can be achieved by the low-level controller of the current version of the CyberHand. b Comparison of CyberHand and a human hand. c Four basic grasps selected for the first implementation of the high-level controller of the CyberHand: lateral pinch and cylindrical, spherical, and tripod grasp

The five motors for finger flexion are housed in a socket and occupy a total volume of ∼250 cc, whereas the motor devoted to thumb positioning is in the palm. The palm is composed of an outside shell, made of carbon fiber and divided into dorsal and volar parts, and an inside frame, which holds the fingers and contains the thumb mechanism (Fig. 3a). A soft padding made of silicone rubber can be mounted on the palm in order to increase the compliance of the grasp. The total weight of the hand is about 320 g, excluding the motors in the forearm and the cosmetic covering of the palm. The design of the CyberHand took into account a number of features of the human hand that simplify its replication. Most of the muscles are, for instance, located in the forearm. This reduces limb inertia, allows more room for the muscles, and permits fine manipulation of small objects (Kapandji 1982). Moreover, in the natural hand, the transmission system consists of tendons that allow the muscles in the forearm to actuate the digits of the hand. Cable transmissions obviously make it possible to relocate bulky actuation and avoid problems due to rigid transmissions in an articulated mechanism.

3.2.1 Underactuation

With a hand closed in a fist, extension at the individual metacarpophalangeal (MCP) joints of the human middle and ring finger is impossible because these fingers are actuated by the common finger extensor muscle and lack private extensor muscles (in contrast to the index finger and, in many humans, also the little finger). Similarly, there are finger postures that are impossible for most people to attain (e.g., isolated flexion of the distal interphalangeal joints of the fingers). In short, the biomechanics of the human hand represents an “underactuated” system. In such systems, the number of DoFs is greater than the number of effective actuators; in other words, the human hand represents a system whose input vector is of smaller dimension than its output state vector (Mason and Salisbury 1985).

There are several advantages to an underactuated system. First, it obviously requires fewer actuators than DoFs and this simplifies the design. Second, and not less important, in conjunction with a differential mechanism, it allows torque distribution among joints (Hirose 1985) and enables adaptive grasps (Hirose and Umetani 1978). Differential mechanisms are the crucial components of underactuated mechanisms because they can automatically control multiple DoFs by distributing forces according to design constraints (Laliberté and Gosselin 1998). As a consequence, the geometrical configuration of the finger will be defined by the external constraints related to the geometric characteristics of an object in contact with the hand, which obviates the need for active coordination of the phalanges. Thus, an underactuated grasping device can perform an automatic finger wrapping around objects without any amputee intervention.

The design approach based on underactuated mechanisms allows for reproduction of most grasping behaviors of the human hand without increasing the complexity of the mechanisms or the control. An appropriate choice of elastic elements and appropriate placement of the mechanical stops allow natural wrapping movements of the finger around an object. Thus, the mechanical intelligence embedded in the design of the CyberHand allows shape adaptation of the fingers. The idea of conforming to the spatial complement of the shape of an object to ensure a distributed grasp is, however, rather common in biologically inspired robotics, e.g., snake robots (Hirose 1993).
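As a rough static illustration of this principle, the sketch below balances the flexion torque produced at each joint by a shared tendon (tension T acting on pulley radius r_i) against the restoring torque of the joint's torsion spring (stiffness k_i), saturating at the mechanical stop. All parameter values are invented for illustration; this is our simplification, not the authors' model.

    # Minimal static sketch of an elastically underactuated finger: each
    # joint i sees flexion torque T*r_i from the shared tendon and a
    # restoring torque k_i*theta_i from its torsion spring. Values invented.
    def joint_angles(tension, radii, stiffness, stops):
        """Equilibrium joint angles (rad) for a given tendon tension (N)."""
        return [min(tension * r / k, stop)
                for r, k, stop in zip(radii, stiffness, stops)]

    radii = [0.008, 0.006, 0.005]    # pulley radii (m), MCP/PIP/DIP
    stiffness = [0.05, 0.08, 0.12]   # spring stiffness (N*m/rad): distal joints
                                     # stiffer, so the finger flexes base to tip
    stops = [1.57, 1.75, 1.40]       # mechanical stops (rad)

    for T in (1.0, 5.0, 20.0):
        print(T, [round(a, 2) for a in joint_angles(T, radii, stiffness, stops)])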

To summarize, the important features of a gripper with underactuated fingers include the following:

  (a) They can grasp unevenly shaped objects (because each joint is driven by torque control thanks to the effect of the differential mechanism).

  (b) Objects can be gripped by the entire surface of the gripper with a force distribution determined by design parameters.

  (c) “Multidegree-of-freedom motion” is achieved and controlled with few control signals.

  (d) Underactuated mechanisms allow the grasping of an object in a way that is closer to human grasping than can be achieved by independent actuation (Montambault and Gosselin 2001).

While each finger of the CyberHand can be actuated independently by its own motor, underactuation of the fingers precludes manipulation. That is, it is not possible to actively control the movements of the proximal (PIP) and distal interphalangeal (DIP) joints of individual fingers without actuating the MCP joint, because three DoFs are implemented with only one DoM for each finger.

Iberall and Arbib (1990) introduced the concepts of virtual fingers and opposition spaces. The physical entities (one or more fingers, the palm of the hand, etc.) that are used in applying force correspond to virtual fingers, and the regions in contact with the object correspond to virtual fingertips. Two “opposition axes” can be defined for such grasps: the opposition axis in the hand, joining the virtual finger regions to be opposed to each other, and the opposition axis in the object, joining the regions where the virtual fingers contact the object. The task of motor control is to preshape the hand to form an opposition axis appropriate to the chosen task (i.e., a way to grasp the object) and to transport the hand to bring the hand and object axes into alignment. It should be noted, however, that in multidigit grasping in humans, the concept of opposition spaces and virtual fingers seems questionable. Specifically, when humans perform three-digit grasping, no opposition axis can be defined, that is, the forces exerted by the digits are in different directions and do not directly oppose one another (Flanagan et al. 1999).

3.2.2 Actuation of fingers and thumb

The fingers of the CyberHand comprise three phalanges connected by hinge joints, with idle pulleys assembled on the hinge axes (three DoFs). A cable is wrapped around each pulley from the base to the tip. The cable is anchored at the fingertip and runs around the idle pulleys in the joints. When the cable is pulled, the phalanges flex from the base to the tip (one DoM). When the motor releases the cable, torsion springs in the joints extend the finger. The CyberHand fingers thus exploit a differential mechanism based on elastic elements and mechanical stops, as described previously (Carrozza et al. 2004) but with some substantial innovations: each finger has been designed with separate actuation and transmission, a novel mechanism has been incorporated in the palm to provide thumb opposition, and the finger structure and geometry have been modified to house sensors, mechanisms, and electronic units.

The actuation unit of the finger pulls the “tendon” (a nylon-coated steel cable) that flexes the finger itself, thus acting like the human deep finger flexor muscle. The tendon from the actuation unit to the metacarpus runs in round wire spirals (provided by Asahi Intecc, Japan), just as the human tendon runs in synovial sheaths. When the finger moves idly (that is, without contacting any object), the kinematics of such an underactuated finger depends on the lengths of the links/phalanges, on the radii of the pulleys, and on the stiffness of the joint torsion springs. These parameters have been chosen to obtain an anthropomorphic appearance (also while moving) and a stable tip-to-tip pinch. In case of object contact, the finger wraps automatically around the object, exerting a uniform force: when a phalanx touches the object, the idle pulleys allow the cable to be pulled further, flexing the more distal phalanges. In short, the placement of the actuators (in the forearm), the tendon transmission system, and the finger kinematics thus mimic the structure of the musculoskeletal system in humans.

The actuation unit for flexion consists of a DC motor (Minimotor, Switzerland, model 1727 006C), a planetary gear head (Minimotor, model 16/16, ratio 14:1), and an incremental magnetic encoder (Minimotor, model IE2-128, 128 pulses per turn). The motion is transmitted from the motor to a screw/lead screw pair by means of spur gears (Fig. 2a). The lead screw acts as a slider: it moves along the screw and pulls (or releases) the cable tendon. Since the screw/lead screw is a non-back-drivable pair, the power can be switched off once a desired position has been reached, thereby saving energy. Two digital Hall effect proximity sensors act as limit switches; they are assembled along the screw in order to limit the stroke of the pulling slider (avoiding collision with the mechanical stops).
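For illustration, the short calculation below works out the cable-travel resolution this drivetrain implies; the encoder resolution and gear ratio are taken from the text, whereas the lead-screw travel per revolution is an assumed value, not one reported in the paper.

    # Worked example (lead-screw pitch assumed, not from the paper): the
    # encoder gives 128 pulses per motor turn and the gearhead reduces 14:1,
    # so one output-shaft revolution corresponds to 128 * 14 pulses.
    PULSES_PER_MOTOR_TURN = 128
    GEAR_RATIO = 14
    LEAD_MM_PER_REV = 1.0   # assumed lead-screw travel per revolution (mm)

    pulses_per_output_rev = PULSES_PER_MOTOR_TURN * GEAR_RATIO  # 1792
    mm_per_pulse = LEAD_MM_PER_REV / pulses_per_output_rev

    print(f"{pulses_per_output_rev} pulses/rev -> "
          f"{mm_per_pulse * 1000:.2f} um of cable travel per pulse")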

Two distinctive DoFs of the human hand are (1) thumb opposition and (2) hollowing of the palm. The opposition of the thumb makes the human hand an extraordinarily versatile tool, allowing several grasp types including, specifically, the power grasp and the lateral grasp. Indeed, the lack of a functional thumb is the main reason for the lack of dexterity in current hand prostheses.

For the CyberHand, a novel thumb mechanism was developed consisting of a DC motor (Minimotor, model 1016 006G), a planetary gear head (Minimotor, model 10/16, ratio 64:1), and a magnetic incremental encoder (Minimotor, model 30B, ten pulses per turn). The motion is transmitted from the motor to a worm gear by spur gears and to the MCP joint by means of the worm/worm wheel pair. Since the worm/worm wheel is a non-back-drivable pair, the power can be switched off when the desired position is reached, again saving battery energy. Two digital Hall effect proximity sensors act as limit switches before the mechanical stops for the thumb positioning.

3.2.3 Future improvements

A further improvement can be obtained by introducing the optimal number of DoFs for a cybernetic hand intended for manipulation, namely 20 (using at least 9 actuators): 15 for flexion of the phalanges (with 3 MCP joints directly controlled), 1 for thumb opposition, 3 for adduction/abduction (of the index, ring, and little fingers), and 1 for hollowing of the palm (flexing the little and ring fingers toward the thumb) (Stellin et al. 2006; Fig. 2c).

3.3 The artificial sensory system

Sensory feedback—as described in more detail below—is crucial both for the amputee using the CyberHand and its biomechatronic control. Notably, because the hand is underactuated and therefore automatically adapts to the shape of objects, complex sensors for grasp optimization are not required with the CyberHand.

The artificial sensory system was organized in two parts: the proprioceptive and exteroceptive subsystems. The proprioceptive sensors provide information about hand position and movement and about internal forces, and the exteroceptive subsystem produces information about the interaction between the object and the hand and between the object and the environment. It is important to point out that this is a functional classification and a specific sensor can provide exteroceptive or proprioceptive information according to the particular implementation of the control system.

In order to develop an effective and reliable hand, it is important to identify the minimum set of sensors necessary to control a specific task related to the desired hand functionality. The minimum set of sensors is directly related to the events that must be detected and encoded during hand operation. For example, it is important to identify the set of sensors required for driving the hand during reaching and preshaping of an object. The hand configuration during these phases affects the feasibility and the performance of the ensuing object interaction. Once the object has been contacted, position control should be replaced by force control and the hand wrapped around the object. Since the mechanical design of the CyberHand does not allow control of individual finger joint configurations or voluntary adaptation to various object shapes, appropriate preshaping is crucial and must be effective. Similarly, object contact must be detected to reliably switch from position to force control, and force sensors are required to allow monitoring of the force applied to objects. The apparent minimal set of sensors required to control the sequential execution of the prototypical grasp-and-lift task includes force and contact sensors (e.g., Edin et al. 2006).
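A minimal sketch of such event detection is given below, assuming an on-off contact array and a cable-tension signal as inputs; the threshold and the decision rule are our illustrative assumptions, not the CyberHand implementation.

    # Hedged sketch of contact-event detection: on-off skin sensors or a
    # rise in cable tension above its free-motion level marks object
    # contact, triggering the switch from position to force control.
    def detect_contact(contact_array, tension_N, tension_free_N=0.5):
        """Contact event: any skin contact point fires, or cable tension
        rises clearly above its free-motion level (threshold assumed)."""
        return any(contact_array) or tension_N > tension_free_N

    mode = "position"
    samples = [([0] * 8, 0.2), ([0] * 8, 0.3),
               ([0, 1, 0, 0, 0, 0, 0, 0], 0.9)]
    for contacts, tension in samples:
        if mode == "position" and detect_contact(contacts, tension):
            mode = "force"   # hand starts wrapping; force control takes over
        print(mode)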

According to the modular approach to the design of the CyberHand, the sensory system is redundant and designed to be flexible to allow multiple implementations and comparisons between different sensor encoding and control strategies.

3.3.1 Proprioceptive subsystem

The proprioceptive sensory subsystem comprises a number of sensors embedded in the hand mechanism:

  • Position control of the hand joints can be obtained by means of joint angle sensors, based on Hall effect sensors, embedded in the mechanisms.

  • Incremental magnetic encoders are integrated with each motor to provide finger position control.

  • Tension sensors are integrated in the cable transmission to monitor the force applied by actuators in order to emulate the function of biological Golgi tendon organs.

Sensors to measure the joint angle were based on Hall effect sensors that had an operational range of 0–90° with a resolution of less than 5°. The incremental encoders were used for position control of the motors.

Cable-tension sensors are a peculiar feature of this design, because they are integrated in the mechanical stop of the tendon in order to detect the force applied on the transmission cables of each finger. The measurement is performed with strain gauges located on the mechanical structure of the tendon stop. Strain gauges provide an output voltage proportional to the tension force (up to 120 N with a resolution of ca. 20 mN) with high linearity and negligible hysteresis. Tension sensors are fundamental for the low-level control of the grasping force (Cipriani et al. 2006).
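For concreteness, the sketch below shows how such signals might be linearized into joint angles and tendon tensions; the gains and offsets are illustrative assumptions, not calibration data from the paper.

    # Illustrative linear calibrations for the proprioceptive signals
    # described above; all gains and offsets are invented, not datasheet
    # or calibration values.
    def hall_to_angle_deg(v, v_at_0=0.5, v_at_90=4.5):
        """Hall-effect joint sensor: map voltage to the 0-90 deg range."""
        return 90.0 * (v - v_at_0) / (v_at_90 - v_at_0)

    def gauge_to_tension_N(v, n_per_volt=24.0, offset_v=0.0):
        """Strain-gauge tension sensor: assumed linear over 0-120 N."""
        return n_per_volt * (v - offset_v)

    print(hall_to_angle_deg(2.5), gauge_to_tension_N(2.5))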

3.3.2 Exteroceptive subsystem

The exteroceptive sensory system consists of different sensors intended to functionally emulate touch sensors in the human skin. Biological touch sensors represent physical properties of the environment in contact with the skin and are found as several distinct structures within the skin. Taking into consideration the function of biological sensors (Edin et al. 2006) and technologies and materials that can be integrated in a mechatronic design (Dario 1991; Lee and Nicholls 1999; Tegin and Wikander 2005; Dargahi and Najarian 2005), the following sensors were selected:

  • A flexible layer with contact sensors to cover the hand (Fig. 4a),

  • Triaxial force sensors integrated in the fingertips (Fig. 4b),

  • A compliant skin with embedded 3D force microsensors to measure force distribution at the fingertips (Fig. 4c).

Fig. 4

CyberHand exteroceptors. a Flexible contact sensors with a total of 8×3 contact points on the distal phalanx alone and a threshold of <0.15 N/mm2 across the surface. b Triaxial force sensor embedded in the fingertip. c The SCTM sensor (“soft and compliant triaxial microsensor”) has a thickness of 2 mm (left) and contains an integrated triaxial 1.4-mm3 silicon force sensor (image on right reprinted from Beccai et al. 2005, with permission from Elsevier), packaged to occupy ∼16 mm3 (arrow) (a and b adapted from Edin et al. 2006)

Arrays of flexible on-off sensors were assembled as an external “skin” to provide contact information. Their design and technology allowed them to emulate the sensitivity of the mechanoreceptors of the human hand, with pressure thresholds of <15 mN/mm2, that is, a sensitivity comparable to that of human SAI and FAI afferents (Edin et al. 2006).

The triaxial force sensor integrated in the fingertips was based on an aluminum alloy 3D flexible structure (Fig. 4b). Three semiconductor strain gauges were located at the root of each tether along the three axes of the sensor itself; three additional strain gauges were used for temperature compensation. The sensor structure was developed to achieve mechatronic integration at the hand fingertip (Roccella et al. 2004; Zecca et al. 2004). The triaxial force sensor was designed with a bandwidth sufficient to emulate the dynamic behavior of all human tactile afferents (DC–400 Hz; Edin et al. 2006). As such, the force sensor can, for instance, be used to detect slippage at digit-object interfaces. Moreover, it is able to detect object contact as well as object liftoff and replacement, events known to be crucial for the sequential coordination of the grasp-and-lift task in humans (Johansson and Edin 1993; Edin et al. 2006).

The soft and compliant tactile microsensor (SCTM) system (Fig. 4c; Beccai et al. 2006) shows a high sensitivity and robustness and can detect the onset of slippage with an average latency of about 7 ms. Such results encourage further investigation and system optimization since these latencies are an order of magnitude lower than the latencies reported in humans before they begin corrective adjustments following slip events (Johansson and Westling 1987).
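A minimal sketch of slip-onset detection with a triaxial force measurement is given below (the actual SCTM processing is described in Beccai et al. 2006): incipient slip can be flagged when the tangential-to-normal force ratio approaches an assumed friction coefficient.

    # Sketch of a friction-cone slip check (ours, not the SCTM algorithm):
    # slip is imminent when tangential force approaches mu * normal force.
    # The friction coefficient and margin are assumed values.
    import math

    def slip_risk(fx, fy, fz, mu=0.6, margin=0.9):
        """True if the tangential force is within `margin` of the cone."""
        tangential = math.hypot(fx, fy)
        normal = max(fz, 1e-6)          # avoid division by zero at contact loss
        return tangential / normal > margin * mu

    print(slip_risk(0.3, 0.2, 1.0), slip_risk(0.5, 0.4, 1.0))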

In short, the current exteroceptive sensory system of the CyberHand mimics specific features of the biological sensory system that from neurophysiological and behavioral studies seem to be crucial for human grasp-and-lift. Undoubtedly, we can expect implementations in the future of “smart skin” with mechanical properties more similar to those of the natural skin and with arrays of specialized sensors (e.g., in the form of the SCTM hybrid technology of Beccai et al. 2006). The architecture of the CyberHand makes it comparatively easy to exchange the existing exteroceptors with novel designs.

3.4 Control system

3.4.1 Rationale for the control system

What is a “useful grasp” depends, of course, on the task in question, but common to many tasks is the requirement of grasp stability. Grasp stability is the result of selecting appropriate grasp sites on an object, defining hand preshaping, transporting the hand to enable the digits to contact the object, and, once in contact with the object, avoiding slippage by applying sufficient surface-normal forces in relation to any destabilizing surface-tangential forces at the individual digit-object interfaces. The control of the reaching task and the selection of the appropriate grasp sites are, of course, as simple and direct with the CyberHand as they are with the natural hand. But the appropriate preshaping of the CyberHand and the control of interaction forces are critical and must be controlled by transmitting suitable commands to the CyberHand through an interface.

In humans, hand posture during grasping is characterized by a high degree of correlation among the movements of the different digits (Santello et al. 1998), a consequence of the fact that the human hand is underactuated, as discussed above. Indeed, principal component analyses have revealed that combinations of only a small number of statistically identified kinematic coordination patterns account for a large part of the variability observed during grasping of various objects (Santello et al. 2002). According to ADL analyses (Sollerman 1980), a small set of grasps accounts for >80% of the grasps used daily. In Cutkosky’s grasp taxonomy (Cutkosky 1989), these grasps include the cylindrical, spherical, tridigital (tripod), tip (precision), and lateral grasps (cf. Fig. 3c). Accordingly, although the flexibility of the human hand allows it to form a multitude of grasps, a small subset of basic grasps suffices to enable the majority of grasps required for ADLs.
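The synthetic example below sketches the principal-component argument: joint-angle postures generated from two underlying coordination patterns are almost fully explained by the first two components. The data are fabricated for illustration, and the analysis is a plain SVD-based PCA rather than the exact method of the cited studies.

    # Sketch of the kinematic-synergy argument with synthetic data: postures
    # built from 2 coordination patterns plus noise are almost entirely
    # captured by the first 2 principal components.
    import numpy as np

    rng = np.random.default_rng(0)
    n_grasps, n_joints, n_synergies = 200, 15, 2
    synergies = rng.normal(size=(n_synergies, n_joints))   # coordination patterns
    weights = rng.normal(size=(n_grasps, n_synergies))
    postures = weights @ synergies + 0.05 * rng.normal(size=(n_grasps, n_joints))

    # PCA via SVD of the centered posture matrix
    X = postures - postures.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    explained = (s ** 2) / (s ** 2).sum()
    print("variance explained by first two PCs:", explained[:2].sum())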

Even the simplest of hand actions—such as power grasps—engage large parts of the human brain (Ehrsson et al. 2000), and humans require almost a decade of training before they perform the apparently simple task of grasping, lifting, and holding an object with an adult coordination pattern (Forssberg et al. 1991, 1992). One trivial reason for the complexity of human manipulation is that it requires coordinated activity of many muscles acting on the forearm, wrist, and digits. A nontrivial reason is that manipulative tasks are characterized by complex parallel and sequential coordination, i.e., these tasks are organized in sequential phases, each characterized by specific sensorimotor behaviors (Johansson and Edin 1992; Johansson 1996). This organization of manipulation tasks offers several advantageous control features. Adapting to objects with different frictional properties is simplified because the required adjustment can be implemented by changing a single parameter, viz., the ratio of the grip and load forces applied at the individual digit-object interfaces (Johansson and Westling 1984; Edin et al. 1992; Flanagan et al. 1999). By utilizing discrete mechanical events reflected in signals from tuned sensory organs in the hand, objects with unknown weights can, for instance, be lifted because the load phase, characterized by a parallel increase in grip and load forces, may continue until object liftoff has been detected. Humans typically parameterize manipulative tasks anticipatorily (weight, friction, torsional loads, etc.) using visual information but can quickly adjust these parameters given somatosensory information accrued during manipulation (Gordon et al. 1993; Jenmalm and Johansson 1997).
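The friction adjustment mentioned above reduces to a one-parameter rule; the toy calculation below (with invented values) shows the minimum grip force scaling inversely with the friction coefficient, plus a safety margin.

    # Worked example of the grip-to-load rule: to prevent slip, grip
    # (normal) force must exceed load force divided by the friction
    # coefficient; a safety margin is kept above this minimum. Values
    # are illustrative, not measured data.
    def required_grip_N(load_N, mu, safety_margin=1.3):
        return safety_margin * load_N / mu

    for mu in (0.4, 0.8, 1.2):   # slippery to grippy surfaces
        print(f"mu={mu}: grip {required_grip_N(5.0, mu):.1f} N for a 5 N load")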

In studies of how forces are partitioned among fingers in two-digit and multidigit grasping, it has been concluded that the control seems to be organized at several levels, e.g., a low-level control that ensures grasp stability at individual fingertips and a higher-level control that takes into account the overall force requirements (Edin et al. 1992; Flanagan et al. 1999). In the same vein, the control of the CyberHand was conceived to consist of more than one level (cf. Fig. 1): a high-level controller with which the subject directly interacts and specifies grasp type and force requirements and a low-level controller that executes the required kinematic patterns and ensures grasp stability.

3.5 CyberHand control system

The CyberHand control system (Fig. 5) is composed of two parts: one dedicated to the low-level controller and one dedicated to the high-level controller. The CyberHand is currently controlled by two external computers (PC1 and PC2 in Fig. 5) connected to the hand mechanism, but in the future standalone cybernetic hand, these two functions will be provided by controllers embedded in the socket.

Fig. 5

CyberHand hardware architecture. AO, analog output board; DAQ, data acquisition board; PWM, pulse width modulation

The low-level control unit includes PC1 (cf. Fig. 5; AMD Athlon XP 2.8 GHz, 512 MB RAM), equipped with two National Instruments input/output boards: a 12-bit high-speed analog output board (model PCI-6713E) and a high-performance data acquisition board (model DAQ PCI-6071E). PC1 is also connected to six standalone motion controllers (one for each motor) by means of serial communication. The core of these controllers is a Microchip microcontroller (PIC18F2431) that reads the motor encoders and limit switches and drives output power circuitry using a pulse width modulation (PWM) technique. The six standalone motion controllers implement position control using proportional-integral-derivative (PID) algorithms; such algorithms are also used to drive power to the motor in proportion to an external voltage (driver modality). The input/output boards are used both to drive the motors in the driver modality and to acquire the sensor signals after they have been properly conditioned. These devices are fundamental for prosthesis control because they provide PC1 with 6 voltage output channels (one for each motion controller, used in the driver modality), 5 voltage input channels (one for each cable-tension signal), and 16 digital input channels (for limit switches and contact sensor acquisition). The output signals of the five cable-tension sensors are proportional to the grip force applied by each finger during grasping tasks.
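As a sketch of the position control that the motion controllers implement, a minimal discrete PID with PWM output clamping is given below; the gains, the sampling time, and the output scaling are our assumptions, not the PIC18F2431 firmware.

    # Minimal discrete PID position controller with clamped PWM output
    # (our sketch; all gains and scales invented).
    class PID:
        def __init__(self, kp, ki, kd, dt, pwm_limit=1.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.pwm_limit = pwm_limit
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target_counts, encoder_counts):
            error = target_counts - encoder_counts
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            u = self.kp * error + self.ki * self.integral + self.kd * derivative
            # Clamp to the admissible PWM duty-cycle range
            return max(-self.pwm_limit, min(self.pwm_limit, u))

    pid = PID(kp=0.02, ki=0.001, kd=0.002, dt=0.001)
    print(pid.step(target_counts=1792, encoder_counts=1500))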

The high-level control is performed by PC2 (Fig. 5), where signals generated by the subject are interpreted and the desired grasp and relative force commands are generated and sent to the low-level control in PC1.

The CyberHand control architecture is modeled on natural grasping. Grasps are triggered by a higher-level unit that is able to recognize the user’s intentions and invoke appropriate grasping primitives. The grasping task is composed of two successive phases: the preshaping phase and the grasping (closure) phase (Fig. 6). After the preshaping phase (performed by the motion controllers using PID algorithms), the desired finger tendon force is selected according to the grasping primitive. In the second phase, the prosthetic hand closes the involved fingers using force control algorithms until the desired global grasp force is reached. In this phase, the motion controllers are in their driver modality, where the control signals are generated by the analog output board based on signals from the cable-tension sensors.

Fig. 6

CyberHand control algorithm. Simplified schematic of control algorithm indicating three main states of control: identification of required grasp and force, selection of grasp by preshaping hand under position control, and grasping phase under force control
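The toy loop below sketches this two-phase scheme (our simplification, with invented numbers): preshaping is assumed to have completed under position control, after which each finger's cable tension is ramped under force control until the global grasp force reaches its target.

    # Toy sketch of the two-phase grasp: preshape (position control), then
    # closure (force control on cable tension) until the desired global
    # grasp force is reached. Tensions here are a trivial simulated plant.
    def execute_grasp(desired_finger_forces, step_N=0.2, tol_N=0.1):
        tensions = [0.0] * len(desired_finger_forces)   # toy plant state
        # Phase 1: preshaping would run here via the PID motion controllers.
        # Phase 2: closure under force control.
        desired_global = sum(desired_finger_forces)
        while abs(desired_global - sum(tensions)) > tol_N:
            for i, d in enumerate(desired_finger_forces):
                if tensions[i] < d:                     # pull this finger's cable
                    tensions[i] = min(d, tensions[i] + step_N)
        return tensions

    print(execute_grasp([2.0, 2.0, 2.0, 1.0, 1.0]))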

Two force errors are evaluated in the control loop: the global force error (which provides information on the global grip) and the finger force error. The desired global grasp force, used to control the hand, is calculated by summing all the desired finger forces involved in the grip; it is particularly useful when some fingers close without touching the object. The resulting grip achieves a bio-inspired, balanced distribution of forces within the hand: each finger grips the object with the same force. If any finger reaches full closure without touching the object, the desired global force is redistributed among the fingers actually in contact with the object.
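Our reading of this redistribution rule can be stated compactly as a sketch; the function and values below are hypothetical.

    # Sketch of the redistribution rule described above: a finger that
    # fully closes without contact hands its share of the desired global
    # force over to the fingers that are in contact.
    def redistribute(desired_per_finger, in_contact):
        n_contact = sum(in_contact)
        if n_contact == 0:
            return [0.0] * len(desired_per_finger)
        missing = sum(d for d, c in zip(desired_per_finger, in_contact) if not c)
        return [d + missing / n_contact if c else 0.0
                for d, c in zip(desired_per_finger, in_contact)]

    # Thumb..little each asked for 2 N; ring and little close on thin air:
    print(redistribute([2.0] * 5, [True, True, True, False, False]))
    # -> the 4 N shortfall is shared by the three contacting fingers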

Experimental trials with able-bodied subjects demonstrated high reliability and robustness of this simple control algorithm (Cipriani et al. 2006), which exploits a minimum set of sensors including encoders, intrinsic-force sensors, and cable-tension sensors. However, slippage is not completely avoided, owing to the relatively long reaction time measured on the system (about 50 ms). The performance of the current CyberHand prototype is summarized in Table 1 and compared to that of a natural hand.

4 Conclusions

The generally accepted definition of a prosthetic device is that it provides functional replacement of a lost organ without restoring the normal control modality (Bronzino 1995). The aim of the CyberHand is to investigate methodologies that go beyond this straightforward approach of functional replacement and to ultimately connect the hand with the human brain. The modular design of the CyberHand makes it possible not only to work in incremental steps toward this goal—a goal that can easily be motivated by the limited success of existing prosthetic technologies—but also to exploit its advanced technologies to critically test neuroscientific hypotheses regarding sensorimotor control and the functional roles of the biological sensory systems during manipulation tasks.

Clinical studies of reimplanted hands provide several important lessons for the design of prosthetic devices in general and for cybernetic hands in particular. In patients with reimplanted hands, where accidentally severed nerves and tendons have been resutured, permitting subsequent reinnervation of biological sensors and muscles, the end stage is not unlike the “ultimate” CyberHand, except that the hand in these patients is entirely biological. Nevertheless, even under these “perfect” conditions—a hand equipped with muscles and sensors in numbers exceeding by orders of magnitude what can be accomplished in any artificial hand, and with direct neural connections by means of peripheral nerves—the functional results are very poor unless the treated patients are in their early teens or younger (reviewed in Lundborg 2003). The important limiting factor in these patients seems to be their ability to reinterpret sensory inputs (Rosén et al. 1994). The likelihood of success of neural devices thus appears limited, and this limitation, of course, applies to any sensory substitution technique.

It therefore seems crucial to strive to minimize the required efferent and afferent channels. Accordingly, the high-level control of the CyberHand—which in principle is capable of performing a multitude of grasps—is deliberately limited to generate a small set of predetermined grasps of particular relevance for ADLs. This obviously simplifies the user’s task. Moreover, the user of the CyberHand will not be required to master the detailed control of the kinematics and dynamics of grasping but instead can rely on the intelligent mechanical underactuated design of the CyberHand for automatic adaptation to objects. The low-level control required to ensure grasp stability by providing sufficient forces at the multiple digit-object interfaces during grasping is ensured by a combination of mechanical design and sensor feedback. Finally, feedback—both to the low-level control and to the subject—originates in a limited set of sensors that were defined based on neurophysiological and behavioral data (Edin et al. 2006).

The CyberHand provides a framework for clinical assessments and comparisons between different neural interfaces and their integration with the local controller of the hand. But the ultimate success of the CyberHand does not depend on whether intraneural interfaces are found to be advantageous or not, because its modular design allows practically any kind of user interface. Moreover, its modular control architecture allows for testing of the usefulness of various sets of grasps in both daily and vocational activities.

Experiments have been performed with a set of primitive grasp classes representing a large proportion of the grasps used in ADLs (Cipriani et al. 2006). Subjects were able to control the underactuated compliant hand—with the control strategy outlined above—and generated stable grasps in 96% of the trials. Moreover, grasping was resilient to external loads, that is, once an object had been grasped, the hand recovered quickly (<0.5 s) from destabilizing loads that jeopardized grasp stability by provoking changes in the force distribution among the fingers. In contrast, experiments have shown that true precision grasps are difficult, primarily because of the mechanical characteristics of the hand and the stiffness properties of the fingertips.