
Behavior

Classical industrial robots usually perform recurring motion sequences in a quasi-static environment. These sequences can therefore be predefined by the programmer and replayed by the robot repeatedly at runtime in a playback fashion.

As soon as a robot acts or moves in a dynamic environment, a static specification of its motion sequences is no longer sufficient to accomplish tasks. The robot must perceive its environment with suitable sensors in order to react to this information with appropriate actions. This link between perception and action is referred to as behavior: it describes how the robot should act in a given situation in response to certain perceptions.
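
To make this distinction concrete, the following minimal Python sketch contrasts a fixed playback sequence with a reactive sense-act loop. All names (the waypoints, the robot interface, perceive, select_action) are illustrative placeholders and not code from the IRF.

```python
# Minimal sketch: static playback vs. reactive behavior.
# The "robot" object and all method names are hypothetical placeholders.

# Classical playback: a fixed motion sequence defined at programming time.
waypoints = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]

def run_playback(robot):
    for pose in waypoints:
        robot.move_to(pose)   # executed rigidly, regardless of the environment

# Behavior: a link between perception and action, evaluated at runtime.
def run_behavior(robot):
    while not robot.task_done():
        perception = robot.perceive()        # e.g. obstacle and object positions, as a dict
        action = select_action(perception)   # decision process mapping situation -> action
        robot.execute(action)

def select_action(perception):
    # Placeholder decision rule: react to what the sensors currently report.
    if perception.get("obstacle_ahead"):
        return "turn_left"
    return "move_forward"
```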

As the performance of autonomous robots increases, so does their ability to solve more complex tasks. The more finely a robot can perceive itself and its environment, the more situations it can distinguish, and the number of different perceptions it may have of any particular situation also grows. Thanks to improvements in mechanics and actuator design, modern robots likewise have far more actions at their disposal with which to react to and interact with a situation. Accordingly, as task complexity increases, so does the difficulty of specifying a meaningful decision-making process for the many different situations the behavior must cover.

In the past, mainly classical artificial intelligence methods were applied to define behavior. However, this classical approach of deriving decisions from a world model and logical rules becomes harder and harder to apply as problems grow, since the world model itself necessarily becomes ever more complex.

The goal of research at the IRF is therefore to find new ways and methods of specifying behavior. The focus lies on approaches that support the intuitive design of complex tasks and on approaches that allow the robot to learn or adapt its behavior using computational intelligence methods.

XABSL

The Extensible Agent Behavior Specification Language (XABSL) is a language for behavior programming. A behavior is represented as a set of finite state machines, the so-called options. An action is selected via a decision tree in which the options are organized as nodes. Options can be tested independently and are described concisely by the syntax XABSL defines. However, since the designer fixes a unique path through the decision tree for every game situation at development time, the behavior acts rigidly according to this specification.
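
The following Python sketch illustrates the underlying idea of options as hierarchical finite state machines that form a decision tree. It is not XABSL syntax; the option, state, and action names are invented for the example.

```python
# Illustrative sketch of the option/state-machine idea behind XABSL (not XABSL syntax).
# Option, state, and action names are invented for this example.

class Option:
    """A finite state machine whose states select either a sub-option or a basic action."""
    def __init__(self, name, states, initial):
        self.name = name
        self.states = states      # state name -> (decision function, subtask)
        self.current = initial

    def execute(self, world):
        decision, _ = self.states[self.current]
        self.current = decision(world)        # transition based on the current situation
        _, subtask = self.states[self.current]
        if isinstance(subtask, Option):
            return subtask.execute(world)     # descend the option tree
        return subtask                        # leaf: a basic action

go_to_ball = Option(
    "go_to_ball",
    {"walk": (lambda w: "kick" if w["at_ball"] else "walk", "walk_to_ball"),
     "kick": (lambda w: "walk", "kick_ball")},
    initial="walk",
)

play_soccer = Option(
    "play_soccer",
    {"search":   (lambda w: "approach" if w["ball_seen"] else "search", "turn_and_scan"),
     "approach": (lambda w: "approach" if w["ball_seen"] else "search", go_to_ball)},
    initial="search",
)

world = {"ball_seen": True, "at_ball": False}
print(play_soccer.execute(world))   # -> "walk_to_ball"
```

The sketch only mirrors the structure of options, states, and transitions; it deliberately omits how actions are executed on the robot.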

The automaton-based approach yields behavior specifications that can be represented graphically in a way humans can easily understand. The sequence of decisions corresponds to transitions in a graph, which allows the programmer to visualize complex decision processes.

Over the years, XABSL has been used and further developed at the IRF on the various two- and four-legged mobile robot platforms of RoboCup.

Adaptive behavior planning

Many methods of behavior planning, such as XABSL, are static approaches: once a behavior has been specified, it is no longer changed at runtime but executed rigidly. If this behavior leads to suboptimal or even wrong decisions in certain situations, the robot has no way to adapt and improve its decision-making for the future.

A current research focus at the IRF is therefore to develop approaches to behavior planning that allow the robot to learn or adapt its behavior autonomously. A distinction has to be made between methods that learn offline, i.e. before execution on the robot, and methods that learn online, i.e. during execution on the robot.

Various computational intelligence methods are applied and investigated for specifying such behavior. Behavior networks can be mentioned here as an exemplary research topic. They select an action in a goal-oriented and situation-adapted way via activation distributions. For this purpose, a network of goal and competence nodes is generated. The goals describe world states the robot tries to reach; competences are capabilities of the robot that can be executed under given preconditions. Each competence also lists the effects that are expected after its execution, so that this knowledge can be used for goal-directed planning. An edge between two nodes exists only where an expected effect satisfies or destroys a goal or a precondition. Activation potentials are spread over these positive or negative effect edges, starting from the goals, and the activation level of a competence then determines the choice of action. The network also models resources in order to decide whether the robot can execute actions in parallel or only exclusively. The advantage of this approach lies in the flexibility and extensibility of the network, which makes it well suited for learning and adaptation.
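
The following deliberately simplified Python sketch illustrates the basic mechanism of such a behavior network. The node names, weights, and the activation update rule are assumptions made for illustration, not the implementation studied at the IRF.

```python
# Simplified behavior-network sketch (illustrative only; names, weights, and the
# update rule are assumptions, not the IRF implementation).

goals = {"ball_in_opponent_goal": 1.0, "robot_near_ball": 0.5}   # goal -> importance

competences = {
    # name: (preconditions, expected effects, required resources)
    "walk_to_ball":  ({"ball_seen"},        {"robot_near_ball"},        {"legs"}),
    "kick":          ({"robot_near_ball"},  {"ball_in_opponent_goal"},  {"legs"}),
    "scan_for_ball": (set(),                {"ball_seen"},              {"head"}),
}

def spread_activation(world_state, decay=0.5):
    """Propagate activation backwards from unsatisfied goals over effect edges."""
    activation = {name: 0.0 for name in competences}
    # Direct activation: a competence whose expected effect satisfies an unsatisfied goal.
    for name, (_, effects, _) in competences.items():
        for goal, importance in goals.items():
            if goal in effects and goal not in world_state:
                activation[name] += importance
    # One indirect step: activate competences that establish missing preconditions.
    for name, (pre, _, _) in competences.items():
        missing = pre - world_state
        for other, (_, effects, _) in competences.items():
            if missing & effects:
                activation[other] += decay * activation[name]
    return activation

def select(world_state, free_resources):
    """Pick the executable competence with the highest activation."""
    activation = spread_activation(world_state)
    executable = [
        name for name, (pre, _, res) in competences.items()
        if pre <= world_state and res <= free_resources
    ]
    return max(executable, key=lambda n: activation[n]) if executable else None

print(select({"ball_seen"}, {"legs", "head"}))   # -> "walk_to_ball"
```

For clarity, the sketch spreads activation only one step backwards and over positive effect edges; a full behavior network would also inhibit competences over negative effect edges and iterate the spreading until the activations settle.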

Current research at the IRF investigates to what extent this approach, combined with demonstrations performed by a human, is suitable for autonomously learning behavior for use in robot soccer.