
Many examples exist in which the robot acts as a tool for the interacting human [23].

The examples vary in their application as well as in the robot's degree of autonomy while interacting with, or working alongside, the human. Horiguchi [70] proposed a force-feedback-based HRI for the teleoperation of robots.

The HRI discussed in [27] concerns a harvesting robot working alongside a human. The experiment was performed for harvesting melons, with a variable level of robot autonomy applied during the interaction. Collaborative harvesting increased the melon detection rate, while the harvesting success rate also depended on the complexity of the situation.


The task of the robot described in [156] concerns teleoperation: the placement of radioactive waste in a central storage facility. The robot is taught the task through teleoperation. A functional architecture is proposed in [156] in which the robot is monitored while performing a task, and the human can interrupt it if a new situation arises during operation. The robot can only perform what it has been taught and cannot react intuitively in an unknown situation; for this purpose the human guides the robot.

In [73] the level of autonomy of the robot is similar to that discussed in [156]. The robot patrols a nuclear plant and works autonomously in normal situations, i.e., situations in which it knows how to react. In an unknown situation the robot is guided by the human to solve the problem; here the level of autonomy is zero and the robot depends entirely on human instructions. In known situations the robot performs its tasks fully autonomously.

Research on HRI also exists in the domain of urban search and rescue (USAR), where mostly mobile robots are used as tools to search for and rescue humans. Situation awareness plays an important role in USAR [167]. The USAR issue discussed in [102] concerns operator situation awareness and HRI. The variation in the level of autonomy between the human operator and the robot is discussed in [31]. The approaches in [143] and [146] propose that, by using an overhead camera and automatic mapping techniques, situation awareness can be improved through a reduction of navigational errors.

Another teleoperation approach is discussed in [113]. In this approach, multiple operators at different locations control multiple robots in a collision-free, collaborative manner within a common working environment. Collisions can occur because the operators are located separately and do not know each other's intentions; a graphic display is used to avoid them. In a continuation of the work in [113], the time delay of commands sent to the robots was handled by sending them simultaneously to the graphic display and to the robots [30]. These commands serve the operators as virtual force feedback for collision avoidance.

Autonomy is a significant aspect of HRI. The level of autonomy varies between full autonomy and teleoperation, depending on the fragility and delicacy of the task, the artificial intelligence present in the robot, and the nature of the working environment, i.e., the likelihood that new, unforeseen conditions will arise.

[Figure: a spectrum of robot autonomy with the levels Total Dependence, Teleoperation, Supervisory Control, Collaborative Control, and Autonomous Collaboration.]

Figure 2.2: Levels of robot autonomy in HRI [63]


Autonomy corresponds to the mapping of environment inputs to actuator movements or to representational schemas [61]. The autonomy of a robot can also be quantified as the amount of time the robot can be neglected, i.e., left unsupervised [31]. The levels of autonomy discussed in [147] range from total dependence to total autonomy.
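The "neglect time" notion from [31] can be read as a simple measurable quantity: the longest stretch during which the robot performs acceptably without any operator attention. The following sketch illustrates that reading; the log format, the function name, and the performance threshold are assumptions for illustration, not definitions from [31].

```python
def neglect_tolerance(samples, threshold=0.5):
    """Estimate neglect tolerance from a log of samples
    (time_in_seconds, task_performance, operator_attending):
    the longest unattended stretch during which task performance
    stayed at or above `threshold`.

    Illustrative sketch only; the metric in [31] is stated
    informally as the time a robot can be neglected.
    """
    best = 0.0
    start = None  # start time of the current neglected, well-performing stretch
    for t, performance, attending in samples:
        neglected = (not attending) and performance >= threshold
        if neglected:
            if start is None:
                start = t
            best = max(best, t - start)
        else:
            start = None  # stretch broken by operator input or poor performance
    return best

log = [(0, 0.9, False), (5, 0.8, False), (10, 0.4, False),
       (15, 0.9, True), (20, 0.9, False), (25, 0.9, False)]
print(neglect_tolerance(log))  # longest well-performing unattended stretch
```

Under this reading, a fully teleoperated robot has a neglect tolerance near zero, while a fully autonomous robot has a very large one, matching the spectrum of Figure 2.2.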

An overview of the levels of autonomy is shown in Figure 2.2.

Fong [55] discussed the variability of autonomy in HRI. The robot operates autonomously until it faces a problem that it cannot solve itself, in which case it requests teleoperation. The performance of the robots depends on the ratio of robots to teleoperators: if one human operator is responsible for several robots, their performance declines.
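The fallback scheme described above (operate autonomously, request operator help when a problem cannot be solved alone) can be sketched as a small state machine. All class, method, and task names below are illustrative assumptions, not from [55].

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TELEOPERATED = auto()

class AdjustableAutonomyRobot:
    """Toy model of Fong-style variable autonomy: the robot runs
    autonomously and falls back to requesting teleoperation when
    it faces a task it cannot solve itself (names are hypothetical)."""

    def __init__(self, solvable):
        # 'solvable' maps task names to whether the robot can handle them alone
        self.solvable = solvable
        self.mode = Mode.AUTONOMOUS
        self.help_requests = 0

    def execute(self, task):
        if self.solvable.get(task, False):
            self.mode = Mode.AUTONOMOUS
            return f"{task}: done autonomously"
        # Unknown or unsolvable situation: drop autonomy, ask the operator
        self.mode = Mode.TELEOPERATED
        self.help_requests += 1
        return f"{task}: operator assistance requested"

robot = AdjustableAutonomyRobot({"patrol": True, "open_valve": False})
print(robot.execute("patrol"))      # handled autonomously
print(robot.execute("open_valve"))  # falls back to the human operator
```

The `help_requests` counter hints at the performance observation in [55]: with one operator serving many robots, pending assistance requests queue up and the robots' overall performance declines.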

Autonomy is enabled in robots with the help of artificial intelligence, signal processing, control theory, cognitive science, linguistics, and situation-dependent algorithms [61].

Different approaches to autonomy exist, e.g., the sense-plan-act paradigm of decision-making [108] and behaviour-based robotics [8].

A mobile robot named Sage interacts with people as a tour guide in a museum [111]. The change in the robot's modes due to HRI is discussed in [111]: a change in Sage's mode changes its behaviour towards the interacting humans. The communication channels utilized by Sage in HRI include speech and emotions; Sage interacts with humans through an LCD screen and audio, as shown in Figure 2.3. In a troubled situation during HRI the robot stops and asks for help.

Figure 2.3: A museum guide mobile robot Sage [111]

A humanoid robot interacts with humans using speech, gesture, and gaze tracking [81]. The robot works as a guide. The experiment with the robot showed the importance of gaze in HRI: the interacting people spent more than half of the interaction time focusing on the robot's face.


In [87] a study is performed on HRI where the robot acts as a guide to the human. The study argues that speech alone cannot help the robot predict future events concerning HRI; it is also important to understand the body language of the interacting human, whose gaze additionally gives a clue about his or her interest.

In [71] the importance of robot feedback during HRI is described, where robot feedback means that the robot gives acknowledgements during the interaction. The experiments showed that robot feedback made the HRI easier. The robot is designed to interact with physically disabled people in an office environment. The results of the experiments confirm that speech alone is not enough for human-robot communication.

The penguin robot in [144] interacts with the human as a host. It is emphasized that a robot should not only exhibit gestures but also interpret the gestures of the interacting human. The robot uses two channels of communication, i.e., vision and speech, and monitors whether its messages reach the human by tracking the human's gaze.

Inagaki [75] proposed HRI based on perception, recognition, and intention inference, using time-dependent information together with fuzzy rules. The approach in [75] is distinguished by its use of time-dependent information in HRI. The human and the robot cooperate to achieve a common goal.

Morita [101] emphasized dialogue-based HRI; their robot carries an object from one location to another based on visual and audio inputs. Tversky [157] discussed the importance of understanding spatial references for HRI. Tenbrink [152] proposed an HRI method based on spatial understanding, in which the interaction commands are given to the robot through a keyboard and are formulated from the robot's perspective.

Rani [120] proposed and performed HRI experiments that take the human's anxiety during the interaction into account. Physiological knowledge is used to infer the anxiety state of the interacting human, which is independent of the person's age, culture, and gender.

Fernandez [50] proposed HRI based on intention recognition. The experiments concern the joint transportation of a rigid object by a human and the robot, using spectral patterns in the force signal measured at the gripper arm.

The approaches in Section 2.3 discussed the usage of different communication channels and levels of autonomy when the robot works as an assistant to the human. Only one approach [75] considered the intention of the interacting person, and that intention is also time-dependent.