
2.1 Addressing Prosthesis Control: Sensor-Fusion Concept

2.1.2 Software implementation: CASP

The algorithm driving the CASP system prototype was implemented in MATLAB 2013a (MathWorks, Natick, MA, USA) as a library of individual modules using object-oriented programming. Due to the specific performance requirements (e.g., intensive real-time calculations and data processing), additional application-specific computational optimizations were implemented using custom-compiled C code and CPU/GPU multithreading [79].

The system operated as a finite-state machine triggered by myoelectric commands or external events (e.g., object grasped/released). That is, when the user generated a myoelectric trigger command, the processing unit fused the data acquired from the sensors (i.e., prosthesis aperture, grasp type, orientation, and depth image) in order to perceive the environment (i.e., the graspable objects in it) and performed automatic, real-time updates of the prosthesis parameters. Based on the current state of the prosthesis and the estimated properties of the target object (shape, size, orientation), the prosthesis posture (i.e., grasp type, size, and wrist angle) was configured so that the hand was prepared for grasping the target object (reactive control).
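For illustration, this state logic can be sketched as follows. The state and trigger names are hypothetical, and the sketch is given in Python (the actual prototype was implemented in MATLAB); it only mirrors the transitions described in the text.

# Minimal sketch of the CASP finite-state machine; state and trigger
# names are hypothetical, not taken from the actual implementation.
from enum import Enum, auto

class State(Enum):
    OBJECT_SELECTION = auto()   # block A: scan the scene, track the OI
    CONTEXT_ANALYSIS = auto()   # blocks E-H: fit shape, build HPO repository
    AUTO_POSTURE = auto()       # blocks I-K: automatic reactive posture control
    MANUAL_CONTROL = auto()     # blocks L-M: proportional myoelectric control

def next_state(state, event):
    """Advance the machine on a myoelectric command or external event."""
    if state is State.OBJECT_SELECTION and event == "emg_open":
        return State.CONTEXT_ANALYSIS
    if state is State.CONTEXT_ANALYSIS and event == "hpo_repository_ready":
        return State.AUTO_POSTURE
    if state is State.AUTO_POSTURE and event == "emg_any":
        return State.MANUAL_CONTROL       # user overrides the autonomy
    if state is State.MANUAL_CONTROL and event == "object_released":
        return State.OBJECT_SELECTION     # reset to state A
    return state                          # no transition on other events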

Additionally, once the prosthesis posture had been automatically adjusted, the user regained full manual control of the prosthesis through the myoelectric interface, and was thus able to correct or fine-tune the autonomous decisions (semi-autonomous control).

As already stated, the core feature of the CASP algorithm (Figure 2.3) is the fusion of sensor data from several sources, including IMUs, a depth camera, and embedded position and force sensors. This comprehensive sensor fusion (red dashed-line box) allows the algorithm to implement automatic, simultaneous, and reactive position control of the multi-DoF prosthesis. The inputs for the processing are: the depth image (acquired via the infrared time-of-flight camera), intrinsic prosthesis properties (e.g., handedness, number of available DoFs), the orientation of the IMU (attached externally to the prosthesis), and the data acquired from the sensors embedded in the prosthesis (force sensors and position encoders). The outputs are the control signals that automatically configure the prosthesis into the predefined posture by setting the grasp type, aperture size, and wrist rotation.
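For illustration, these inputs and outputs can be grouped into simple data structures; the following Python sketch uses illustrative field names that are not taken from the CASP source code.

# Illustrative grouping of the fusion inputs and control outputs;
# all field names are assumptions, not the actual CASP interfaces.
from dataclasses import dataclass
import numpy as np

@dataclass
class FusionInputs:
    depth_image: np.ndarray         # time-of-flight depth frame (H x W)
    handedness: str                 # intrinsic property: "left" or "right"
    num_dofs: int                   # number of available DoFs
    socket_orientation: np.ndarray  # IMU orientation of the prosthesis
    grip_force: float               # embedded force sensor reading
    encoder_positions: np.ndarray   # embedded position encoder readings

@dataclass
class ControlOutputs:
    grasp_type: str                 # e.g., "palmar" or "lateral"
    aperture_cm: float              # commanded hand aperture size
    wrist_rotation_deg: float       # commanded wrist rotation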

Figure 2.3: Conceptual scheme of the algorithm driving the CASP system. The central feature of the system is the sensor fusion, which allows for context-dependent reactive prosthesis control. List of abbreviations: Hand preshape and orientation (HPO), object of interest (OI), current prosthesis rotation (current_Rot), rotation of the selected hand posture (HPO(i)_Rot).

2.1.2.1 Object selection loop (Figure 2.3, blocks A-D)

The object selection loop analyzes the acquired depth images in order to extract the object of interest (OI) cluster from the 3D point-cloud. It operates at 2 Hz until the user generates a myoelectric command for hand opening. The object segmentation is performed as a three-step process in which:

1) The support surface (step B) is determined by identifying the largest plane in the point-cloud through iterative application of the RANSAC [80] algorithm for plane detection;

2) All points above the support plane are considered to belong to a single cluster containing all present objects. This large cluster is further divided into smaller ones, each assumed to contain a single object. This is accomplished by detecting discontinuities in the point-cloud through edge analysis applied to the raw depth image (Canny edge detector [81]). Additionally, some clusters are eliminated in post-processing: all clusters that do not satisfy the requirements for a valid object, defined as a set of thresholds, are discarded (e.g., clusters that are too dispersed, too small, or too far away);

3) From the object clusters, extracted in step 2, the one closest to the center of the camera’s field-of-view (FoV) is considered the OI (step C).

Therefore, the results of the initial object selection loop are two point-cloud clusters: one belonging to the OI and the other belonging to the support surface.
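As an illustration of step B, a minimal RANSAC plane fit over the point-cloud could look as follows; the iteration count and distance threshold are illustrative assumptions, not the values used in the actual system.

# Minimal RANSAC plane fit (step B); thresholds are illustrative.
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.01, seed=None):
    """Return (normal, d, inlier_mask) of the dominant plane n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        # Sample three points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                 # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p0
        # Inliers: points within dist_thresh of the candidate plane.
        mask = np.abs(points @ normal + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

The points lying above the detected support plane would then form the cluster that is subsequently split into per-object clusters (step 2).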

2.1.2.2 Contextual object analysis (Figure 2.3, blocks E-H)

Once the user issues a hand opening command through the myoelectric interface (step D), contextual analysis of the OI is performed:

1) The RANSAC method is applied once more to iteratively fit four different geometrical models (box, cylinder, sphere, and line) to the point-cloud belonging to the OI (step E). The model with the most inliers is selected as the one best describing the shape of the OI, where an inlier is defined as a point located within the volume of the fitted geometrical shape (see the first sketch after this list);

2) The main axis of the geometrical model is determined as the line passing through the object center, parallel to its longest edge (step E). The main axis is used to determine the object pose with respect to the support surface and the user (step F);

3) The information obtained in steps E and F is then fused with the intrinsic prosthesis properties (step G) in order to generate the repository of viable hand preshapes and orientations (HPOs, step H). To this aim, the algorithm first identifies all object surfaces that qualify to be grasped; for example, all object surfaces facing away from the user/prosthesis are eliminated from further analysis. Then, a cognitive-like processing algorithm, implemented as a set of IF-THEN rules similar to those described in [82], [71], [83], is applied iteratively over each qualified object surface in order to generate an appropriate HPO. Each HPO is therefore defined as a triplet comprising hand orientation, grasp type, and aperture, and depends not only on the object but also on the intrinsic prosthesis properties (e.g., for a prosthesis with individually controllable fingers, the rules would automatically generate more dexterous preshapes, as demonstrated in [71]; similarly, the object surfaces that qualify to be grasped also depend on the prosthesis handedness). For example, a wide cylindrical object X cm in radius and oriented vertically would be grasped using a palmar grasp, and the prosthesis would be oriented so that the palm is either vertical (90º) or horizontal (0º) when grasping from the side or from above, respectively. In this process, the hand aperture would be set somewhat larger than the estimated object width. The two HPOs would therefore be determined as (0º rotation, palmar preshape, X cm aperture) and (90º rotation, palmar preshape, X cm aperture); a minimal sketch of one such rule is given after the summary below.
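The shape-model selection in step E can be sketched as follows; the fit functions and the is_inside() test are hypothetical placeholders for the four RANSAC model fits, shown only to illustrate the inlier-count criterion defined above.

# Selection of the best-fitting geometrical model by inlier count (step E).
# The fitters and the is_inside() volume test are hypothetical placeholders.
def count_inliers(points, model):
    return sum(model.is_inside(p) for p in points)   # inlier: inside the volume

def select_shape(points, fitters):
    """fitters: dict mapping a shape name to a RANSAC fit returning a model."""
    best = None
    for name, fit in fitters.items():
        model = fit(points)              # fitted box/cylinder/sphere/line model
        score = count_inliers(points, model)
        if best is None or score > best[2]:
            best = (name, model, score)
    return best[0], best[1]              # best-fitting shape and its model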

Therefore, the outcome of this processing stage is a repository of HPOs that accommodates grasping of the target object from all viable sides.
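A minimal sketch of one IF-THEN rule, mirroring the cylinder example above, is given below; the aperture margin and the triplet layout are illustrative assumptions.

# One illustrative IF-THEN preshape rule (step H) for a vertical cylinder;
# the margin value and the dictionary layout are assumptions.
def hpos_for_vertical_cylinder(radius_cm, margin_cm=1.0):
    aperture = 2.0 * radius_cm + margin_cm   # somewhat larger than the object
    return [
        # grasp from above: palm horizontal (0 degrees)
        {"rotation_deg": 0.0, "preshape": "palmar", "aperture_cm": aperture},
        # grasp from the side: palm vertical (90 degrees)
        {"rotation_deg": 90.0, "preshape": "palmar", "aperture_cm": aperture},
    ]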

2.1.2.3 Automatic posture control loop (Figure 2.3, blocks I-K)

This loop operates at 10 Hz and fuses the repository of HPOs with the current prosthesis orientation (obtained via the IMU) in order to automatically drive the prosthesis into the optimal configuration. The currently measured prosthesis orientation is compared to all HPOs in the repository, and the HPO with the closest orientation is selected for activation (step I). This is performed at least once, immediately after the contextual object analysis has finished, in order to configure the initial HPO.
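Step I thus reduces to a nearest-orientation search over the repository; a minimal sketch, assuming each HPO stores its rotation in degrees as in the illustrative triplet above:

# Select the HPO closest to the current socket rotation (step I).
def select_hpo(hpos, current_rot_deg):
    def angular_dist(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)   # shortest angle [deg]
    return min(hpos, key=lambda h: angular_dist(h["rotation_deg"], current_rot_deg))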

Then, due to the continuous activity of this loop, the controller detects when the user tries to orient the prosthesis to grasp the object from a different side and activates the corresponding HPO from the repository (reactive behavior). The selected hand posture is then reached, in closed loop, using the embedded position encoders (step K). It should be noted that the required amount of wrist rotation is calculated by subtracting the current socket rotation (obtained via the IMU) from the activated HPO rotation (obtained from the repository of HPOs). This amount is then added to the current wrist-to-socket rotation (obtained via the embedded position encoders) and sent as the new rotation command to the prosthesis controller (step K).
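The rotation command described above amounts to the following computation (variable names are illustrative):

# New wrist rotation setpoint (step K): HPO(i)_Rot - current_Rot, added to
# the current wrist-to-socket rotation from the embedded encoders.
def wrist_rotation_command(hpo_rot_deg, socket_rot_deg, wrist_to_socket_deg):
    delta = hpo_rot_deg - socket_rot_deg     # required socket-frame correction
    return wrist_to_socket_deg + delta       # command sent to the controller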

2.1.2.4 Manual control loop (Figure 2.3, blocks L-M)

This loop operates at 10 Hz and implements a two-site proportional myoelectric control interface with mode switching, as described in Introduction section 1.2.1.1. The loop is entered automatically once the user issues any myoelectric command during the automatic control loop operation. This allows the user to voluntarily grasp and manipulate the object of interest or simply correct the system's decisions. Once the object has been grasped and released, the manual control loop finishes and the state machine resets automatically to state A.
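For illustration, a two-site proportional interface could be sketched as follows; the dead band, the thresholds, and the use of co-contraction for mode switching are assumptions, not necessarily the scheme of section 1.2.1.1.

# Illustrative two-site proportional control with mode switching (blocks L-M);
# thresholds and the co-contraction switch are assumptions, not CASP values.
def manual_control(emg_open, emg_close, switch_level=0.8, dead_band=0.1):
    """Return (mode_switch, velocity); velocity > 0 opens, < 0 closes."""
    if emg_open > switch_level and emg_close > switch_level:
        return True, 0.0                   # co-contraction switches the mode
    if max(emg_open, emg_close) < dead_band:
        return False, 0.0                  # below the dead band: no motion
    return False, emg_open - emg_close     # speed proportional to EMG difference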