Cover Page

Specification, Measurement, and Adjustment of Agent Autonomy: Theory and Implementation

K. S. Barber and C. E. Martin

The Laboratory for Intelligent Processes and Systems Electrical and Computer Engineering

The University of Texas at Austin Austin, TX 78712

http://www.lips.utexas.edu [email protected]

phone: (512) 471-6152 fax: (512) 471-3652

Abstract

Autonomy is an often cited but rarely agreed upon agent characteristic. Although no definition of agent autonomy is universally accepted, exploring the concept of autonomy offers many useful insights.

Studying agent autonomy is important because it influences and is influenced by the degree of trust that agent designers and users place in their agents, the effectiveness of agent-based problem solving, and the organizational structure of agent-based systems. This article reviews previous discussions of agent autonomy and presents a core definition synthesized from these discussions. A representation for agent autonomy is presented along with a metric that can be used to assess an agent’s autonomy for each of the goals it pursues. These constructs in turn allow agent autonomy to be manipulated. As the demand for agent-based systems grows, the demand for agents capable of adjustable autonomy grows as well.

Adjustable autonomy is a desirable agent characteristic because it (1) allows agent designers and users to gradually increase an agent’s “autonomy” as they become more confident in its operation and (2) allows agents to dynamically change some characteristics of their organizational structure to create the most effective problem-solving collaborations across run-time conditions. This article describes an implementation of a multi-agent system capable of adjustable autonomy and the autonomy mechanisms that make this implementation possible including autonomy-level agreements, autonomy-level commitment, and the autonomy model itself.

Submitted to

Autonomous Agents and Multi-Agent Systems

Contact Person:

Mrs. Karen Cullen

Autonomous Agents and Multi-agent Systems Editorial Office Kluwer Academic Publishers

101 Philip Drive Norwell, MA 02061

(781) 871-6600 Fax: (781) 878-0449 email: [email protected]

May 28, 1999


Specification, Measurement, and Adjustment of Agent Autonomy: Theory and Implementation

K. SUZANNE BARBER AND CHERYL MARTIN {barber, cemartin}@mail.utexas.edu Laboratory for Intelligent Processes and Systems, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX

Abstract. Autonomy is an often cited but rarely agreed upon agent characteristic. Although no definition of agent autonomy is universally accepted, exploring the concept of autonomy offers many useful insights. Studying agent autonomy is important because it influences and is influenced by the degree of trust that agent designers and users place in their agents, the effectiveness of agent-based problem solving, and the organizational structure of agent-based systems. This article reviews previous discussions of agent autonomy and presents a core definition synthesized from these discussions. A representation for agent autonomy is presented along with a metric that can be used to assess an agent’s autonomy for each of the goals it pursues. These constructs in turn allow agent autonomy to be manipulated. As the demand for agent-based systems grows, the demand for agents capable of adjustable autonomy grows as well.

Adjustable autonomy is a desirable agent characteristic because it (1) allows agent designers and users to gradually increase an agent’s “autonomy” as they become more confident in its operation and (2) allows agents to dynamically change some characteristics of their organizational structure to create the most effective problem-solving collaborations across run-time conditions. This article describes an implementation of a multi-agent system capable of adjustable autonomy and the autonomy mechanisms that make this implementation possible including autonomy-level agreements, autonomy-level commitment, and the autonomy model itself.

1. Introduction

Although autonomy has often been promoted as an essential, defining property of agenthood, there is little further agreement about this concept. No universally accepted definition of autonomy exists, and proposed definitions vary widely. This article attempts to unify various viewpoints and create clear boundaries among multiple concepts that have often been related to autonomy. This analysis differs from previous discussions of agent autonomy in that it singles out one of these concepts as the only necessary dimension of agent autonomy and provides a model of this dimension. Furthermore, this dimension is proposed to be sufficient to define basic agent autonomy, the minimal threshold beyond which an agent can be considered autonomous.

Additional conditions on sufficiency may be desired in some, but not all, contexts. This article also shows how the proposed model can be used as a foundation on which to build more complex models of autonomy.


Rather than attempt to discuss an agent’s overall autonomy as an indivisible top-level concept, this article approaches autonomy on a goal-by-goal basis. The term “goal” should be interpreted broadly in this context as any high-level task, desired state, intended plan or course of action, or low-level primitive action to be performed. An agent’s goals are the “core of its autonomy” [12].

This article presents a model of an agent’s degree of autonomy for each of the goals and subgoals it pursues, recognizing that the agent’s autonomy may be different for each goal. An autonomy metric is then derived from this model. This metric makes it possible to compare agent autonomy across agents, situations, and time.

A further benefit of the proposed autonomy model is its ability to support dynamic autonomy adjustment. The ever-increasing demand for flexible, adaptive, agent-based systems has recently focused attention on the need for agents capable of adjustable autonomy. This capability allows agents to dynamically alter their degree of autonomy during system operation to achieve better performance across run-time variations in their situations (e.g. communication failure, resource depletion). This article describes an implementation of a multi-agent system that is capable of adjustable autonomy and uses the proposed autonomy model as a guide for agent behavior toward achieving goals. The examples given are based on the Sensible Agent architecture and testbed [7]; however, the autonomy mechanisms described for the implementation are not architecture dependent.

Overall, this article examines agent autonomy from both theoretical and applied viewpoints.

The remainder of the article is structured as follows. Section 2 presents a definition of the primary dimension of agent autonomy (degree of self-direction). Section 3 discusses autonomy-related concepts and explains the relationship between these concepts and the proposed definition. Given this definition as a foundation upon which to build a model of agent autonomy, Section 4 discusses the necessary components of an autonomy representation and presents the autonomy model. Section 5 shows how this model can be used to synthesize an autonomy metric and discusses both quantitative and qualitative properties of the model. Section 6 presents the background and motivation for adjustable autonomy in multi-agent systems. Section 7 shows how the given autonomy model supports adjustable autonomy and presents additional theoretical constructs needed for the implementation of adjustable autonomy, including autonomy-level agreements and autonomy-level commitment. Section 8 describes how adjustable autonomy has been implemented using these autonomy-related mechanisms and illustrates the operation of the multi-agent system through a particular scenario, along with simulation performance. Finally, Section 9 presents some conclusions.


2. Autonomy

The general concept of autonomy is often interpreted as freedom from human intervention, oversight, or control [8, 10, 18, 19, 30, 45]. This type of definition corresponds well to the concept of autonomy in domains that involve single-agent-to-human-user interaction. However, in multi-agent systems, a human user may be far removed from the operations of any particular agent. Some researchers have defined autonomy in a more general sense as a property of self-motivation and self-control for the agent [12, 16, 30, 34, 37]. This sense of the word autonomy captures the concept of freedom from intervention, oversight, or control by any other agent, including, but not limited to, a human.

Unfortunately, this broad statement fails to account for many characteristics often considered necessary for the implementation of autonomous agents. For example, the behavior of autonomous agents is generally viewed as goal-directed [12, 16, 18, 21, 34]. That is, autonomous agents act with the purpose of achieving their goals. This is fortunate because it allows external agents (or human users) to influence autonomous agents to perform useful functions. In addition, many researchers consider pro-activeness to be a defining property of autonomous agents [8, 18, 21, 30]. This property requires aspects of “periodic action, spontaneous execution, and initiative, in that the agent must be able to take preemptive or independent actions that will eventually...” achieve its goal [21]. Autonomous agents must consider their goals, make decisions about how to achieve those goals, and act on these decisions.

Incorporating these properties, autonomy becomes an agent’s active use of its capabilities to pursue its goals without intervention, oversight, or control by any other agent.

No agent can be completely free from all types of intervention with respect to any goal. This article identifies three distinct types of intervention:

(1) modification of an agent’s environment,
(2) influence over an agent’s beliefs, and
(3) intervention in an agent’s determination of which goals/sub-goals/intentions it will pursue.

We label freedom from these types of intervention as (1) “environmental isolation”, (2) “incredulity” (non-gullibility), and (3) “self-direction”, respectively. All three types of intervention are equally important considerations for agent design and operation. However, we propose that “self-direction” is the only necessary condition for agent autonomy. No agent can be considered autonomous without freedom from this type of intervention. Furthermore, this condition is sufficient to define agent autonomy as a fundamental concept. The following examples show how an agent can be “self-directed” or “autonomous” to the same degree but exhibit varying degrees of “environmental isolation” and “incredulity”.

Consider several cases of environmental isolation. In the first case, a robot (called Robot_5) navigates a maze without assistance or interference from any external entity. This robot would be viewed as autonomously carrying out the goal, “Robot_5 travel through maze #42 from start to finish.” In the second case, the robot is faced with randomly materializing obstacles. The robot’s autonomy has not been altered. The degree of dynamism the robot encounters and the difficulty of its task are characteristics of the environment or the task itself. These characteristics may affect the agent’s chance of success or failure but do not define its autonomy. In the third case, the robot is faced with an adversary who places obstacles in the maze. The robot cannot differentiate this case from the case of randomly materializing obstacles and thus should be viewed as no more or less autonomous. In the final case, another agent (who is cognizant of the robot’s decision-making algorithm) places obstacles in the path of the robot in such a way that the robot follows the path that this other agent desires. The robot is no less autonomous in this case because its actions would be no different if these same obstacles had appeared randomly. Even though the external agent may be aware of the decision-making algorithm used by the robot, it cannot force the robot to behave differently than its decision-making algorithm dictates. For example, if the robot would normally turn or stop to avoid an obstacle, the external entity could not place an object in front of the robot and then have the robot run into the object. The level to which the robot’s internal decision-making algorithm is known affects the robot’s “predictability,” but not its autonomy. This robot’s autonomous behavior exists regardless of the presence of some omniscient entity who may or may not be controlling environmental factors.

Therefore, interaction with an agent’s environment by any other agent should not be viewed as autonomy-altering intervention.

An agent can also exhibit different degrees of incredulity but still operate in a completely autonomous fashion. Consider several cases. A maze-navigating robot acts with the same degree of autonomy whether its decision-making process uses a static world model or one that is dependent on dynamic sensor readings. Although such a robot may be more or less successful in its attempt to achieve its goal autonomously, it is no more or less “autonomous”. Suppose, instead, that the robot receives messages about changes in the maze layout from another agent. In this case, the source of the information dependency for the robot’s dynamic world model has changed from a sensor suite to another agent. In both cases, whether or not the robot incorporates information into its world model as truth data is a function of how “incredulous” the robot is with respect to the source of the information. The robot’s “autonomy” can be treated as an independent dimension that reflects how free the robot is to decide how to travel through the maze based on its understanding of the world. (See Section 3.1.2. for further discussion of incredulity and “belief autonomy.”) The robot remains autonomous as long as its goals and the decision-making process with which it determines how to achieve its goals are left intact.

The term “autonomy” applies most clearly to intervention in the decision-making processes that an agent uses to determine how its goals should be pursued. Since any actionable oversight or control would require such intervention, these terms can be removed from the proposed definition. Therefore, autonomy is an agent’s active use of its capabilities to pursue its goals, without intervention by any other agent in the decision-making processes used to determine how those goals should be pursued.

This statement presents autonomy as an absolute quantity (i.e. either an agent is autonomous or it is not). However, the attempt to measure or adjust autonomy presupposes that there exist degrees of autonomy. This article considers an agent’s degree of autonomy on a goal-by-goal basis. This bottom-up approach clarifies discussions of agent autonomy. Such discussions can become frustrating if different parties in the discussion have different assumptions about which goal an autonomy assessment refers to. Agents often have multiple goals, some of which may be implicit. Without first recognizing the agent’s level of autonomy with respect to each of its goals, it is very difficult to agree on an overall assessment of the agent’s autonomy. For example, some would argue that a thermostat is autonomous and others would argue that it is not. However, the argument here actually hinges on which goal is most important in the assessment of the thermostat’s overall autonomy. It should be quite easy to agree that the thermostat does autonomously carry out the goal to maintain a particular temperature range but that it does not autonomously determine its own set point. Once an agent’s level of autonomy has been specified for each of its goals, the argument can focus (properly) on determining how important each goal is in the assessment of the agent’s overall autonomy (see Section 3.2.1.). An agent’s degree of autonomy, with respect to some goal that it actively uses its capabilities to pursue, is the degree to which the decision-making process, used to determine how that goal should be pursued, is free from intervention by any other agent.

3. Autonomy-Related Concepts

3.1. Concepts Often Merged With Autonomy

3.1.1. Capabilities and Dependence. Autonomy is often viewed from a perspective of agent capability. A more capable agent will generally be more robust and flexible; it should be able to survive longer without help or input from external entities, and it is less likely to be dependent on others. Although there is a relationship between dependence and autonomy (the more dependent an agent is, the more likely that it will not succeed if it operates in a completely autonomous fashion), the two concepts are orthogonal rather than complementary. A dependent agent may normally operate in a non-autonomous fashion, but it may also attempt to operate autonomously if, for example, it loses contact with all other agents. Castelfranchi has modeled dependence relations extensively [13, 39] and points out that viewing autonomy as the complement of dependence does not add anything interesting to the notion of dependence [12]. Although the relationship between dependence and autonomy should be explored, neither an agent’s set of capabilities nor its degree of dependence on others defines the agent’s autonomy.

3.1.2. Beliefs and Internal State. Several researchers maintain that, in addition to control over its own goals and actions, an agent’s control over its own beliefs or internal state is critical to its autonomy [12, 30]. We agree that this concern is critical to the design of an autonomous agent.

An agent’s amount of “belief autonomy” or “incredulity” will affect how easily that agent is manipulated. However, other factors also affect how easily an agent is manipulated, including how well known the agent’s internal decision-making algorithm is (see “environmental isolation” examples in Section 2) and how self-directed the agent is. We maintain that these three types of manipulation (intervention) are orthogonal dimensions of agent interaction. Because goals are the core of agent autonomy [12], we further maintain that complete “autonomy” requires an agent to be fully self-directed, but that agents with minimal amounts of incredulity and environmental isolation can still be considered fully autonomous in some contexts. The minimal thresholds are defined by the agent’s interfaces. For example, an agent must provide some interface for the modification of its beliefs, and this interface entails some minimal degree of incredulity.

Although an agent’s amount of incredulity remains a critical factor in the operation and design of autonomous agents, it does not define the core concept of autonomy.

3.2. Prior Autonomy Concepts From a New Perspective

3.2.1. Agents With the “Right Kind” of Goal. Some researchers argue that it is not enough for an agent to be self-directed. For an agent to be autonomous, it must have the “right kind of goal” [12, 16]. These “autonomous goals” are goals that are the agent’s “own.” These goals are capable of being pursued independently of other goals. They are goals that an agent can accept, reject, formulate, or adopt on its own initiative. This section shows how this view can be reconciled with a model of agent autonomy that considers primarily an agent’s degree of self-direction in determining how it will achieve a particular goal.

The “right kind of goal” viewpoint focuses on characteristics of the goal itself and looks back in time at how the agent got the goal. In order to focus attention on the control relationships among agents during decision-making about goals, this research chooses to model autonomy by looking forward in time at how the agent must determine the subgoals/subtasks it will undertake to carry out a goal. These approaches are compatible because how an agent gets a goal (subgoal X) is a function of how self-directed the agent is for the next highest goal in the goal hierarchy (goal Y), which motivated the agent’s acceptance of the subgoal (X). Thus we model an agent’s autonomy for a particular goal as a function of the decision-making framework in which subgoals (such as subgoal X) are chosen and allocated in order to achieve that particular goal (Y). Even top-level goals (normally the highest level of goals represented explicitly by an agent) can be represented as subgoals of an agent’s inborn motivations. An agent who is self-directed in pursuing such motivations will select “top-level” goals accordingly. This model contains all the relevant information about how the agent gets each of its goals.

3.2.2. Overall Agent Autonomy. Most research attempting to define agent autonomy focuses on determining whether or not the agent as a whole is autonomous [8, 10, 12, 16, 18, 19, 21, 30, 45]. As an alternative, this article focuses primarily on defining agent autonomy with respect to one goal at a time. This goal-by-goal model can be used to create an assessment of the agent’s overall autonomy, defined as a function of the weighted average of the agent’s degree of self-direction over all the agent’s goals and subgoals. The “right kind of goal” viewpoint suggests that the weights in such an assessment are clearly bimodal (small or large). Goal characteristics that warrant large weights include whether or not the goal is a top-level goal, whether or not the goal is homeostatic (requires the maintenance of a particular condition), and whether or not there exists a large number of independent alternatives to the goal [16].

3.3. Fundamental Autonomy

This article attempts to distill autonomy down to its most basic, fundamental dimension (i.e. self-direction) in order to create a workable model of the concept. The approach presented here has many practical applications, as discussed in the following sections. Although finding acceptable answers to deeper philosophical questions of overall autonomy may ultimately require models of additional dimensions of autonomy, the autonomy model presented by this article forms a solid foundation for future autonomy assessments.


4. Representing Autonomy

4.1. Why Represent Autonomy

A representation of autonomy is desirable for two reasons: (1) to facilitate autonomy assignment and modification and (2) to facilitate autonomy measurement. First, consider the need to assign and modify autonomy. An autonomy representation makes autonomy an explicit agent characteristic. Furthermore, if an agent’s decision-making behavior depends on assignments to the autonomy representation, changes in the agent’s autonomy become explicit manipulations of this assignment. Such a representation gives the agent or its designer something to set, a “knob to turn” so to speak, allowing autonomy to be assigned and adjusted. Next, consider the need to measure autonomy. Few methods exist to address this need. One possible measurement technique involves determining how autonomously an agent has been acting by observing its behavior [10]. Although this type of measurement is very useful for evaluating the performance of an autonomous agent, it is limited by its reliance on historical data. This type of metric can describe how autonomously an agent has acted for some time-period in the past. However, it says little about the agent’s capacity for exhibiting autonomous behavior, and it does not reveal how autonomous the agent currently is nor how autonomous it will be in the future. A representation of autonomy can form the foundation for predictive and existent evaluations of agent autonomy.

4.2. What to Represent

A model of autonomy must identify the focus of decision-making (a goal or set of goals), the decision-makers, and any authority constraints that are in effect. This section describes the motivation for each of these model components.

The definition of autonomy proposed by this article indicates that an agent’s autonomy must be represented with respect to a goal, and that the agent must actively use its capabilities to pursue this goal. Many researchers have investigated the relationship between agents and goals.

In order for an agent to actively use its capabilities to pursue a goal, it must intend the goal [14] or form some commitment to the goal [11, 29]. For the purposes of modeling autonomy, it is enough to record that such an intention has been formed. The intended goal then becomes the focus of the autonomy assignment. Focusing autonomy assignment on a specific intention allows agents to have multiple simultaneous autonomy assignments for the different goals they may pursue [36].

An autonomy representation should model the intervention of other agents in the decision-making process determining how an intended goal should be pursued. Note that although direct influence over an agent’s goals by other agents is disallowed for completely autonomous agents [12], the existence of degrees of autonomy implies various degrees of direct intervention in determining an agent’s goals. How, and to what extent, other agents intervene must be modeled.

However, representing details of the decision-making process should be avoided because the representation must generalize across decision-making algorithms. The essential elements of the decision-making framework describe (1) which agents are in control of the decision-making process for a particular goal and (2) how much control each of these agents has.

A representation of autonomy must also explicitly model a constraint that enforces the authority of the decision-making agents. Without such a constraint, any autonomy assignment could be subverted by an agent who simply refuses to carry out the agreed upon decision. This authority constraint completes the model of the decision-making framework by ensuring that at least one agent will execute the decisions made by the decision-making agents.

4.3. Autonomy Representation

This section describes the autonomy representation used by Sensible Agents, capable of Dynamic Adaptive Autonomy [37]. Autonomy is represented in Sensible Agent-based systems by the tuple (G, D, C), where G is the focus of the autonomy assignment, D identifies the decision-makers and their relative strengths in the decision-making process, and C declares the authority constraint.

Sensible Agents use assignments to this autonomy representation as a guide for decision-making, thus forming the basis for predictive and existent evaluations of agent autonomy with respect to particular goals.

Any agent using an autonomy model must comprehend the concepts of “self” and “others”.

Let $a_x$ and $a_y$ represent any agent, with the identifiers $x$ and $y$; let $a_0$ represent the self-agent, with the unique identifier 0; and let $a_n$ represent any other agent, with the identifier $n \neq 0$.

The three components of the autonomy representation are as follows:

(1) Focus --

Let G represent the focus of the autonomy assignment.

G identifies the goal(s) about which agents are making decisions. Any agent may make decisions for goals it intends to achieve as well as for goals that other agents intend to achieve.

Additionally, agents may combine their goals for concurrent solution in a “you scratch my back, I’ll scratch yours” fashion. Therefore, G may identify a single goal that the self-agent intends, any number of goals that other agents intend, or a combination of these two types of intended goal.


Let $g^{a_x}_{i_x}$ represent the $i_x$-th goal intended by agent $a_x$. $G$, then, identifies a set

$$\{\, g^{a_0}_{i_0} \,[,\; g^{a_1}_{i_1}, \ldots, g^{a_n}_{i_n}]\,\} \quad \text{or} \quad \{\, [g^{a_0}_{i_0},]\; g^{a_1}_{i_1} \,[,\; \ldots, g^{a_n}_{i_n}]\,\}$$

G refers to some subset of all the intended goals in the system. The “[ ]” notation indicates that the enclosed elements are optional. The set G must identify at least one intended goal. This goal may be intended either by the self-agent or by some other agent. G may contain up to n+1 intended goals, where n is the number of other agents in the system. If the number of elements in G is greater than one (|G| > 1), then the decision-making agents must find a solution for all constituent goals concurrently.

Note that two different agents may actually intend equivalent goals. For example, Robot_5 and a benevolent external agent (with the ability to manipulate the maze or push the robot) may both intend the goal “Robot_5 travel through maze #42 from start to finish.” However, these two agents may not engage in collaborative decision-making. In fact, Robot_5 may not even know about the other agent or its goal. We impose a representational constraint that no two agents may intend the same instance of a goal. For example, Robot_5 may intend to achieve the goal $g^{Robot\_5}_{1_{Robot\_5}}$ = “Robot_5 travel through maze #42 from start to finish,” and $a_x$ may intend to achieve the goal $g^{a_x}_{4_x}$ = “Robot_5 travel through maze #42 from start to finish.” In this case, $g^{Robot\_5}_{1_{Robot\_5}}$ is equivalent to $g^{a_x}_{4_x}$, but $g^{Robot\_5}_{1_{Robot\_5}} \neq g^{a_x}_{4_x}$. In other words, these intended goals are not the same and can be maintained independently. These agents may also represent their autonomy with respect to these goals separately. If the robot and the agent engage in collaborative decision-making to get the robot through the maze, they should both represent a compound focus for such an autonomy assignment of the form $G = \{ g^{Robot\_5}_{1_{Robot\_5}},\, g^{a_x}_{4_x} \}$.

(2) Decision-makers --

Let D represent the decision-making framework in which decisions about how to pursue G are made.

D identifies which agents make decisions about how to pursue the intended goals listed in G and describes their relative strength in the decision-making process. The evaluation of the relative strength of any agent in the decision-making process is based on an analogy to a simple planning/negotiation/voting process in which every vote must be cast. In this prototype process, every decision-making agent receives an integer number of votes, greater than or equal to one.


Let $v_{a_x}$ represent the number of votes agent $a_x$ can cast, where $v_{a_x} \geq 1$. $D$ is a set of tuples

$$\{\, (a_0, v_{a_0}) \,[,\; (a_1, v_{a_1}), \ldots, (a_n, v_{a_n})]\,\} \quad \text{or} \quad \{\, [(a_0, v_{a_0}),]\; (a_1, v_{a_1}) \,[,\; \ldots, (a_n, v_{a_n})]\,\}$$

In $D$, the tuple $(a_x, v_{a_x})$ represents an agent who is making decisions about how to pursue the goal(s) in $G$ along with the number of votes that agent receives in determining the overall decision of the group. Each agent in the set $D$ may play an equal part in determining how to pursue $G$, or some agents may have more votes than others have. Any number of agents in the set may have the same number of votes. Agent $a_x$’s relative decision-making power, $r_{a_x}$, is calculated by dividing the number of votes it can cast by the total number of votes that can be cast:

$$r_{a_x} = \frac{v_{a_x}}{\sum_{(a_y,\, v_{a_y}) \in D} v_{a_y}}$$
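To make the calculation concrete, here is a minimal Python sketch of the relative-power computation; the dictionary used to represent D (agent identifier mapped to vote count) is an illustrative assumption, not the Sensible Agent implementation.

```python
# Hypothetical sketch: r_ax = v_ax / (sum of votes over all decision-makers in D).
# D is represented as {agent identifier: votes}, an assumption made for illustration.

def relative_powers(D: dict) -> dict:
    """Return the relative decision-making power of every agent listed in D."""
    total = sum(D.values())
    return {agent: votes / total for agent, votes in D.items()}

D = {"a0": 1, "a1": 3}
print(relative_powers(D))                 # {'a0': 0.25, 'a1': 0.75}
print(sum(relative_powers(D).values()))   # 1.0 -- relative powers always sum to one
```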

A description of the prototype decision-making process should make this representation more clear. During this voting process, every decision-making agent suggests a complete plan for adoption by the group and provides a justification for this plan. Each decision-making agent evaluates each suggested plan and determines the value of its justification. A preliminary vote is taken. Each agent may cast each of its votes independently. The $\lfloor |D|/2 \rfloor$ plans ($|D|$ is the number of items in the set $D$; the number of decision-making agents) receiving the least votes are withdrawn, and a final vote on the remainder is taken. The plan receiving the most votes in the final round is taken as the decision of the entire group. In all cases, ties are broken randomly and impartially by an arbiter.
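The following sketch illustrates the prototype voting scheme described above. It assumes each agent gives all of its votes to its highest-ranked surviving plan and that every agent ranks every plan; the function name, plan names, and the floor-of-half withdrawal rule (as reconstructed above) are assumptions.

```python
import random

# Hypothetical sketch of the two-round voting scheme: the floor(|D|/2) plans with
# the fewest first-round votes are withdrawn, and a final vote decides the rest.

def run_vote(D, preferences):
    """D: {agent: votes}; preferences: {agent: plans ranked best-first}."""
    plans = {plan for ranking in preferences.values() for plan in ranking}

    def tally(candidates):
        counts = {plan: 0 for plan in candidates}
        for agent, votes in D.items():
            # Each agent gives all of its votes to its best-ranked surviving plan.
            choice = next(p for p in preferences[agent] if p in candidates)
            counts[choice] += votes
        return counts

    first_round = tally(plans)
    # Withdraw the floor(|D| / 2) plans receiving the least votes (ties broken randomly).
    ordered = sorted(plans, key=lambda p: (first_round[p], random.random()))
    remaining = set(ordered[len(D) // 2:]) or plans
    final_round = tally(remaining)
    best = max(final_round.values())
    winners = [p for p in remaining if final_round[p] == best]
    return random.choice(winners)  # the arbiter breaks final ties randomly

D = {"a0": 1, "a1": 2, "a2": 1}
preferences = {"a0": ["plan_A", "plan_B"],
               "a1": ["plan_B", "plan_A"],
               "a2": ["plan_A", "plan_B"]}
print(run_vote(D, preferences))
```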

Two things should become clear after reading this description: (1) for many different reasons, most agent-based systems are unlikely to use this particular decision-making algorithm for collaborative decision-making and (2) for each decision-making algorithm that is actually available in a particular agent-based system, an analogy to this voting scheme must be drawn such that a mapping to values assigned to D can be created. Identifying and defining the available decision-making algorithms at design time is not a limitation because agents making decisions in collaboration must always use some agreed upon mechanism to define the process of collaboration. Usually, the planning, negotiating, and decision-making protocols are hard-coded into the behavior of the agents. Relative to completely specifying a decision-making process, the additional step of creating a mapping between this process and assignments to D is not a significant burden. Although this mapping is thus somewhat arbitrary for any given agent-based system, this flexibility does not lessen the value of the representation. Consistency can be maintained as long as any mapping is specified and consistently used. Agents within the same system must necessarily have the same understanding of collaborative decision-making. These agents should therefore use the same mapping to D. As long as the mapping from the specification of a decision-making protocol to the values assigned to D remains consistent within a single system, this autonomy representation can be readily used to assign, measure, and modify agent autonomy. See Sections 5.1 and 8.3 for some example mappings.

(3) Authority constraint --

Let C represent the authority constraint for the autonomy assignment.

Without the ability to implement a decision, the decision-making process itself is pointless. The authority constraint, C, ensures that some agent(s) will carry out the decisions of the decision-making group.

C is simply a list of agents $\{ a_0 \,[,\; a_1, \ldots, a_n ]\, \}$.

Each agent in C must accept task assignments from the decision-making group. These task assignments may be in the form of high-level goals or scheduled actions. C must always contain the self-agent and may contain any number of other agents as well.
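Putting the three components together, an autonomy assignment (G, D, C) can be captured in a small data structure. The following Python sketch is illustrative only: the class names and the identification of goal instances by an (agent, index) pair are assumptions, not the Sensible Agent implementation. It also shows that two equivalent goal descriptions remain distinct intended-goal instances, as required above.

```python
from dataclasses import dataclass

# Hypothetical sketch of an autonomy assignment (G, D, C).

@dataclass(frozen=True)
class IntendedGoal:
    agent: str          # the agent that intends this goal instance
    index: int          # i_x, the index of the goal for that agent
    description: str

@dataclass
class AutonomyAssignment:
    G: frozenset        # focus: one or more IntendedGoal instances
    D: dict             # decision-makers: {agent identifier: votes}, each votes >= 1
    C: list             # authority constraint: agents bound to execute the decisions

    def __post_init__(self):
        assert len(self.G) >= 1, "G must identify at least one intended goal"
        assert all(v >= 1 for v in self.D.values()), "each decision-maker gets >= 1 vote"
        assert len(self.C) >= 1, "C must contain at least the self-agent"

# Equivalent goal descriptions remain distinct intended-goal instances:
g_robot = IntendedGoal("Robot_5", 1, "Robot_5 travel through maze #42 from start to finish")
g_other = IntendedGoal("a_x", 4, "Robot_5 travel through maze #42 from start to finish")
assert g_robot != g_other

assignment = AutonomyAssignment(G=frozenset({g_robot, g_other}),
                                D={"Robot_5": 1, "a_x": 1},
                                C=["Robot_5"])
```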

5. Measuring Autonomy

Many agent designers and users desire a single-valued metric assessing the degree of agent autonomy. Such a measure can be based on the representation given above, and would convey, in a single number, how much autonomy an agent has. The reader is reminded that the defined autonomy measure always describes an agent’s autonomy with respect to some goal (or G).

Agents often have multiple goals (some of which may be implicit) and can operate at different degrees of autonomy with respect to each of these goals. The following discussions concern only goal-specific autonomy assessment.

5.1. Degree of Autonomy

The autonomy measure is bounded at both ends. It is possible for an agent to have no autonomy or complete autonomy. Robot_5, who intends the goal “Robot_5 travel through maze #42 from start to finish,” has no autonomy with respect to this goal if it must wait at every step for instructions from some other agent such as “turn 90 degrees” or “go forward one position.” On the other hand, a robot maze-runner that completes the maze without any intervention is completely autonomous for its goal to travel through the maze.


Let a represent a single-valued measure of agent autonomy, where 0 ≤ a ≤ 1.

a = 0 indicates no autonomy.

a = 1 indicates complete autonomy.

The possible values of a range continuously between the values of 0 and 1. The value of a in any particular instance is determined by the amount of intervention in an agent’s decision-making process. This degree is determined by the amount of control that each agent holds over the outcome of that decision-making process. As an example, consider a turn-based collaborative decision-making procedure. Suppose the maze-running robot makes every fourth decision on its own and the other agent makes the remainder (corresponding to 1 “vote” for the robot and 3 “votes” for the agent). In this case, the robot is completely in control one-fourth of the time (or one-fourth in control of the overall decision-making process). Therefore, $a_{robot} = 0.25$. If the robot makes every 100th decision, $a_{robot} = 0.01$ (every 1000th decision, $a_{robot} = 0.001$, and so forth). If the other agent makes every fourth decision and the robot makes the remainder, $a_{robot} = 0.75$. If the other agent makes every 100th decision, $a_{robot} = 0.99$ (every 1000th decision, $a_{robot} = 0.999$, and so forth).

Furthermore, if the robot always makes the first two decisions and the other agent makes the next three, $a_{robot} = 0.40$. If the robot makes 12 decisions, and the other agent then makes 13, $a_{robot} = 0.48$ (robot 499, other agent 501, $a_{robot} = 0.499$, and so forth). As the value of a increases, control of the decision-making process by the self-agent increases and intervention by other agents lessens.
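The turn-based examples above reduce to a simple ratio of decisions controlled by the self-agent to total decisions, as in this minimal sketch (the function name is assumed):

```python
# Hypothetical sketch: mapping turn-based control of decisions to a degree of autonomy.
# If the robot controls k decisions out of every k + m, it effectively holds k "votes".

def degree_from_turns(decisions_by_self: int, decisions_by_others: int) -> float:
    return decisions_by_self / (decisions_by_self + decisions_by_others)

print(degree_from_turns(1, 3))      # robot decides every 4th decision     -> 0.25
print(degree_from_turns(3, 1))      # other agent decides every 4th        -> 0.75
print(degree_from_turns(12, 13))    # robot 12 decisions, other agent 13   -> 0.48
print(degree_from_turns(499, 501))  #                                      -> 0.499
```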

5.2. The Autonomy Measure

Previous examples give intuitive arguments for assigning various values to the autonomy measure, a. This section presents the formula used to calculate a from the autonomy representation (G, D, C) given above.

For any $(G, D, C)$, $a_{a_x}$ with respect to $G$, given $C$, is defined as

$$a_{a_x} = \begin{cases} r_{a_x} & \text{if } (a_x, v_{a_x}) \in D \\ 0 & \text{otherwise} \end{cases}$$

This formula indicates that if the agent, $a_x$, is listed as a decision-maker in the autonomy representation, then $a_{a_x} = r_{a_x}$, the agent’s relative strength in the decision-making process. If $a_x$ is not listed as a decision-maker in the autonomy representation, then $a_{a_x} = 0$.
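The piecewise definition translates directly into a small function; as before, representing D as a mapping from agent identifier to vote count is an assumption made for illustration.

```python
# Hypothetical sketch of the autonomy measure: a_ax = r_ax if (a_x, v_ax) is in D,
# and 0 otherwise. D is represented as {agent identifier: votes}.

def autonomy(D: dict, agent: str) -> float:
    if agent not in D:
        return 0.0                         # not a decision-maker for this focus G
    return D[agent] / sum(D.values())      # r_ax, the agent's relative strength

D = {"a0": 2, "a1": 1, "a2": 1}
print(autonomy(D, "a0"))  # 0.5
print(autonomy(D, "a3"))  # 0.0
```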


5.3. Levels of Autonomy

This section maps the quantitative analysis of agent autonomy given above to a qualitative description that helps clarify the concept of agent autonomy as a variable. Agent autonomy can be described along a spectrum as shown in Figure 1 [36]. An agent’s autonomy increases from left (a = 0) to right (a = 1) along this spectrum. The three discrete autonomy level categories labeled in Figure 1 define salient points along the spectrum.

[Figure 1. The Autonomy Spectrum. The spectrum runs from a = 0 (Command-driven) through True Consensus to a = 1 (Locally Autonomous / Master).]

Command-driven (a = 0) -- The agent does not make any decisions about how to pursue its goal and must obey orders given by some other agent(s).

True Consensus -- The agent works as a team member, sharing decision-making control equally with all other decision-making agents.

Locally Autonomous / Master (a = 1) -- The agent makes decisions alone and may or may not give orders to other agents.

Supervised autonomy levels exist between the command-driven and consensus levels, and supervisory autonomy levels exist between the consensus and locally autonomous/master levels.

Notice that the “true consensus” autonomy level has no associated autonomy value in the above list. The degree of autonomy associated with “true consensus” changes as the number of decision-making agents changes. Figure 2 shows the relationships between the levels of autonomy and the degree of agent autonomy, $a$, for varying numbers of decision-making agents, $|D|$. Several interesting conclusions can be drawn from this figure. If only one agent is making decisions (pictured on the x axis in Figure 2), the possible values of $a$ are limited to the two discrete values, 0 or 1. That is, if only one agent, $a_y$, is making decisions, then any given agent is either $a_y$ or it is not making decisions (i.e. either $x = y$ and $a_{a_x} = r_{a_x} = 1$, or $x \neq y$ and $a_{a_x} = 0$).

Figure 2 also shows that the full range of a, up to but not including 1, is possible when two or more agents are making decisions. It is intuitively (and mathematically) impossible for one agent to make decisions without intervention if any other agent is helping to make those decisions.

Thus for decision-making frameworks with more than one decision-maker ($|D| \geq 2$), $a$ may approach 1 but will never reach 1. Conversely, Figure 2 shows that the value $a = 0$ is always possible, regardless of how many agents collaborate to make decisions. It is always possible that $(a_x, v_{a_x}) \notin D$. Figure 2 also shows that for an agent in a true consensus relationship the degree of autonomy approaches 0 as $|D|$ increases. Finally, Figure 2 describes the regions of supervisory (between true consensus and $a = 1$ for $|D| \geq 2$) and supervised (between $a = 0$ and true consensus for $|D| \geq 2$) degrees of autonomy. Within a decision-making framework, if there exists any agent, $a_x$, with a supervisory degree of autonomy, then there also exists some agent, $a_y$, with a supervised degree of autonomy. The converse also applies.

[Figure 2. The Spectrum of Autonomy Mapped to Degree of Autonomy (a) for Different Numbers of Decision-Making Agents (|D|). The plot shows degree of autonomy (0 to 1) against the number of decision-making agents (1 to 100), with the command-driven, supervised, true consensus, supervisory, and locally autonomous / master regions marked.]
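For illustration, the qualitative levels of Figures 1 and 2 can be recovered from a and |D| with a simple classification; the exact-consensus tolerance and the function name below are assumptions.

```python
# Hypothetical sketch: mapping a degree of autonomy a and the number of
# decision-makers |D| onto the qualitative levels of the autonomy spectrum.

def autonomy_level(a: float, n_decision_makers: int, tol: float = 1e-9) -> str:
    if a == 0.0:
        return "command-driven"
    if n_decision_makers == 1:
        return "locally autonomous / master"   # a can only be 0 or 1 when |D| = 1
    consensus = 1.0 / n_decision_makers        # equal shares of decision-making control
    if abs(a - consensus) < tol:
        return "true consensus"
    return "supervised" if a < consensus else "supervisory"

print(autonomy_level(0.0, 3))    # command-driven
print(autonomy_level(1.0, 1))    # locally autonomous / master
print(autonomy_level(1/3, 3))    # true consensus
print(autonomy_level(0.6, 3))    # supervisory
print(autonomy_level(0.1, 3))    # supervised
```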

6. Adjusting Autonomy

This section describes the concept of adjustable autonomy, which allows agents to move along the autonomy spectrum during system operation.

6.1. Motivation for Adaptation

As agent-based applications become more widespread, the demand for robust performance and flexibility has increased. Consequently, much research over the past several years has focused on agent adaptability. Various researchers have identified motivations for agent adaptation including the ability to respond to the failure of another agent or a communication failure [17, 44], to coordinate plans and resolve conflict [29, 31], to distribute tasks equally among agents [20], to improve system performance [23], and to respond to other types of uncertainty present in the environment. Agent adaptation can occur at different levels using a variety of mechanisms ranging from an individual agent’s adaptive planning algorithms to adaptation of the system’s organization itself. Many agent-based systems use some combination of adaptation at different levels to help maximize performance. This article focuses on adaptation at the organizational level in multi-agent systems. Approaches to dynamic organizational adaptation include approaches based on dynamic role-filling mechanisms and approaches based on maximization of specific system performance measures.

6.2. State of the Art for Organizational Adaptation

Most agent-based systems implementing organizational adaptation employ some type of dynamic role-filling mechanism. Agent interaction within a problem-solving environment can be modeled as the fulfillment of certain domain-specific roles [22, 33, 41] (e.g. in the American football domain, roles include quarterback, receiver, and blocker [40]). A role specifies domain-specific tasks an agent takes on. Static organizations can be dynamically “reconfigured” by allowing agents to dynamically assume one or more different pre-defined roles during system operation [41]. For example, the RETSINA agent approach uses “middle agents” to help route information requests in the face of the failure and recovery of agents or communication links [17]. If, for instance, an agent who was fulfilling a particular type of “information provider” role fails, a middle agent facilitates the placement of another agent in this role. The agents that participate in any particular instance of the organizational structure can be dynamically modified based on the run-time situation. In this manner, a RETSINA agent organization adapts to unexpected events such as the appearance and disappearance of agents. Another adaptive approach based on agent roles (e.g. “scout” in the “Attack” domain) is employed by STEAM, a rule-based system supporting flexible teamwork [44]. Under this system, team members monitor their teammates for role failure. If a critical role failure is detected, the team is reconfigured, if possible, by substituting another agent into the failed role. Glaser and Morignot describe another system using this dynamic role-filling approach in which agents can join existing agent societies [24].

The decision to join is based on the agent’s capability to fulfill a useful role in that society as well as the benefit the agent can obtain by participating in that society. Although these approaches successfully address many problems encountered by agent-based systems, the dynamic role-filling approach is limited because it assumes that the organizational structure is appropriate and is only concerned with maintaining it [41].

An organization’s structure defines the pattern of information, control, and communication relationships among agents as well as the distribution of tasks, resources, and capabilities [22, 41, 42]. Some agent-based systems dynamically manipulate the system’s organizational structure itself. Ishida, Gasser, and Yokoo describe a system implementing organizational self design (OSD) based on strategic work-allocation and load-balancing [27]. The reorganization primitives provided by OSD dynamically vary the system macro-architecture, while the micro-architecture (the structure of agents themselves) remains the same. Dynamic reorganization is supported by two primitives: decomposition and composition. Decomposition creates two agents from one. It is performed when the environment demands too much from a particular agent. Composition combines two agents into one. It removes agents and, thus, inter-agent messages, freeing up both computation and communication resources. Another example of an agent-based system that dynamically varies its organizational structure is provided by Glance and Huberman [23]. They show that an agent’s capability to break away from its group combined with the difficulty of moving between established groups enables the highest levels of cooperation within an agent-based system. Unfortunately, the reorganization primitives used in these systems do not generalize well to other systems in other domains. For example, composition and decomposition would not work well for a team of agents designed to simulate a military attack team or an American football team.

In order to create reorganization primitives that generalize well, an explicit representation of organizational knowledge should be employed. So and Durfee [41] suggest a principled approach to creating a model of organizational knowledge. This approach focuses on representing agent roles (e.g. who is responsible for what task), and it supports a form of organizational self design that would allow the dynamic definition and redefinition of these roles. In addition, Fox et al present a formal model of organizational knowledge for the TOVE enterprise model [22].

Various types of generally applicable reorganization primitives could be defined to operate on this model.

A model of autonomy also explicitly represents organizational knowledge. The concept of autonomy necessarily implies a relationship (of non-intervention) with at least one other agent.

Such relationships define key characteristics of the organizational structure of agent-based systems. As shown by this article, an agent’s degree of autonomy arises directly from the control and authority constraints specified by the organizational structure. Therefore, the implementation of adjustable autonomy has a serious impact on organizational structure.

6.3. Dynamic Organization Through Adjustable Autonomy

Adjustable autonomy constitutes a subset of dynamic reorganization in which the authority and control relationships among agents are allowed to change, but the domain-specific task responsibilities, resources, and capabilities of the agents may remain constant. The autonomy representation given above provides the model of organizational knowledge on which adjustable-autonomy reorganization primitives operate. For example, in the problem domain of American football, a coach, quarterback, and other team members may interact in various ways to determine which play to run on the next down. The quarterback may be command-driven, and the coach may send in the play from the sidelines. Alternatively, the quarterback may take on the responsibility to determine the play and act in a locally autonomous / master fashion. In addition, the team may make the decision together and determine the next play by consensus. Each of these problem-solving frameworks may be desired in different situations even though the problem itself and the domain-specific role of each team member remain the same. Adjustable autonomy gives agents the capability to dynamically adapt their problem-solving structure to their situation.

This capability will be particularly important in domains with (1) unreliable communication, (2) high degrees of uncertainty, or (3) resource contention.

7. Mechanisms for Adjustable Autonomy

This section describes four mechanisms, including the autonomy representation discussed above, that allow the implementation of adjustable autonomy in Sensible Agent-based systems.

7.1. Autonomy Representation

The autonomy representation (G, D, C), and associated autonomy measure, a, support the implementation of adjustable agent autonomy. Assignments to the autonomy representation correspond to assignments of parameters in the available decision-making algorithms. It is changes in these parameters that actually modify the behavior of the agents. Two examples of such mappings between decision-making algorithms and the autonomy representation have been discussed in this article: (1) the voting procedure described in Section 4.3 and (2) the alternate control of decisions over time described in Section 5.1. The number of votes or decisions possessed by each agent can be modified during system operation by assignments to the autonomy representation. Therefore, the decision-making algorithm may stay the same, but the agent’s autonomy becomes adjustable. Far more complex adjustments are possible, corresponding to more complex mappings from the autonomy representation to alternate decision-making algorithms. See Section 8.3 for a mapping that is based on classification of autonomy assignments and a subsequent mapping to a discrete number of decision-making algorithms.
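Under a vote-based mapping, adjusting an agent's autonomy amounts to reassigning votes in D while the decision-making algorithm itself stays fixed. A hedged sketch follows; the agent names and the adjust_votes helper are hypothetical.

```python
# Hypothetical sketch: adjustable autonomy as reassignment of votes in D.
# The decision-making algorithm stays the same; only its parameters change.

def adjust_votes(D: dict, agent: str, votes: int) -> dict:
    """Return a new decision-making framework with one agent's vote count changed."""
    assert votes >= 1, "every decision-maker must hold at least one vote"
    new_D = dict(D)
    new_D[agent] = votes
    return new_D

def autonomy(D: dict, agent: str) -> float:
    return D.get(agent, 0) / sum(D.values())

D = {"robot": 1, "supervisor": 3}        # robot is supervised
print(autonomy(D, "robot"))              # 0.25
D = adjust_votes(D, "robot", 3)          # grant the robot more decision-making control
print(autonomy(D, "robot"))              # 0.5 -- true consensus with the supervisor
```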


7.2. Autonomy-Level Agreements

The autonomy representation presented in this article is designed to be maintained as a local, subjective model from a single agent’s viewpoint. That is, agent $a_x$ represents its own autonomy with respect to all goals it is pursuing and may also maintain a representation of (belief about) $a_y$’s autonomy for its goals. However, any of $a_x$’s autonomy assignments $(G, D, C)$ that make reference to other agents or their intended goals must be represented consistently by all involved agents. This means that agents must actually establish an agreement to work together under a specified decision-making framework each time more than one agent is involved. Complete autonomy (“locally autonomous” on the spectrum) can be viewed as the absence of such an agreement. Many different types of negotiation protocols can be used to establish these “autonomy-level agreements.” Similar properties can be seen in communication protocols designed to establish joint intentions or joint commitments [15, 26], where multiple agents must agree to pursue something together. In particular, research on Sensible Agents has developed communication protocols specifically for establishing autonomy-level agreements [6].

7.3. Autonomy-Level Commitment

Constraints on agent interactions must be established before adjustable autonomy can be realized.

As discussed in Section 4.2, agents must establish some form of commitment to their goals to ensure pursuit of these goals. This article proposes an additional layer of commitment that is needed in order to implement adjustable autonomy. Agents should model and enforce commitments to their established autonomy level agreements. This “autonomy-level commitment” helps ensure that agents will actually participate as expected in their established decision-making frameworks. In addition, the implementation of such a commitment impacts several other important aspects of agent-based systems capable of adjustable autonomy including trust and stability. Agents explicitly committed to a specific interaction style can be trusted to a greater extent (by designers or users) not to adjust their autonomy in an unpredictable manner.

Also, autonomy-level commitment puts a control on how free agents are to continually adjust their autonomy as opposed to actually working to solve a problem at a certain autonomy level. In general, commitments are not unbreakable. Often, there is some price that an agent can pay if it wants to renege on a commitment. By increasing this price, designers of agent-based systems can increase organizational stability without making the organization completely rigid. In addition, certain conventions [29] allow a commitment to be broken without penalty under certain conditions (i.e. loss of communication). All parties to the commitment should agree on these conventions.


Similar issues have previously been addressed by researchers considering joint intentions and teamwork [9, 15, 29, 38, 43]. Most of the theories and reasoning algorithms developed by this body of work apply to the formation and dissolution of collaborative problem-solving groups.

The major difference lies in the amount and type of information conveyed by autonomy-level commitments. Most models of joint intention, joint commitment, and teamwork represent the “joint” nature of these intentions and commitments, but do not specify the organizational structure under which the agents should interact to carry out these intentions. An assignment to the autonomy representation given above specifies both the joint nature of the pursuit of a goal and the manner in which this pursuit is joint. Thus a commitment to an autonomy assignment commits an agent not only to joint action toward a goal, but also to a particular interaction style it must use to make decisions about how to pursue that goal.

There is also a minor representational difference between joint commitments and autonomy-level commitments. The majority of the models listed above require more than one agent to be committed to the same instance of a particular goal (sometimes called a joint goal). That is, more than one agent may intend to achieve the same instance of a particular goal. However, the explicit representation of agent autonomy removes this requirement and allows agents to maintain separate intended goal instances as discussed in Section 4.3. These individual commitments to independent goal instances can be used as a fall-back position if teamwork fails (under a particular autonomy assignment) or to facilitate the merging of already existing goals into team goals.

Some model of autonomy-level commitment will be critical to every agent-based system capable of adjustable autonomy. In Sensible Agent-based systems, an autonomy-level commitment is made to every autonomy assignment (G, D, C) [37]. Autonomy-level commitment is also modeled as a tuple (c, J, M), where c represents the cost of breaking the commitment, J is a set of conditions that justify the autonomy-level agreement, and M is a set of condition-action pairs that tell the agent what it must do if it breaks the commitment. The convention governing these commitments indicates that once a commitment is no longer justified, an agent can break it without penalty; otherwise, the agent must pay the price, c. The actions required by M most often specify a set of messages that the agent must send once it has dissolved a commitment. If communication is down, the agent pends such messages until communication returns (if it ever does).
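An autonomy-level commitment (c, J, M) and its governing convention might be sketched as follows; representing the justification conditions J as boolean predicates over an agent's beliefs is an assumption for illustration, not the Sensible Agent implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of an autonomy-level commitment (c, J, M).
# J holds justification conditions as predicates over the agent's beliefs;
# M holds (condition, action) pairs the agent must honor if it breaks the commitment.

@dataclass
class AutonomyLevelCommitment:
    cost: float                                             # c: price for reneging
    justifications: List[Callable[[dict], bool]]            # J: conditions justifying the agreement
    obligations: List[Tuple[Callable[[dict], bool], str]]   # M: condition -> required action

    def break_commitment(self, beliefs: dict) -> float:
        """Return the penalty incurred; zero if the agreement is no longer justified."""
        still_justified = all(cond(beliefs) for cond in self.justifications)
        penalty = self.cost if still_justified else 0.0
        for condition, action in self.obligations:
            if condition(beliefs):
                print(f"obligation on breaking commitment: {action}")
        return penalty

commitment = AutonomyLevelCommitment(
    cost=10.0,
    justifications=[lambda b: b.get("communication_up", False)],
    obligations=[(lambda b: b.get("communication_up", False),
                  "notify the other parties that the agreement is dissolved")])

print(commitment.break_commitment({"communication_up": True}))   # 10.0 -- still justified
print(commitment.break_commitment({"communication_up": False}))  # 0.0  -- convention applies
```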


7.4. Intended Goals Structure (IGS)

Agents capable of adjustable autonomy need a data structure in which to represent their intended goals so they can apply autonomy assignments to these goals. This data structure should be able to represent the relationships of the agent’s goals to one another. Sensible Agents use the Intended Goals Structure (IGS) for this purpose. The following sections describe key characteristics of the IGS.

7.4.1. Planning With Structured Goals. Many DAI researchers have characterized planning in multi-agent systems as a form of distributed goal search through classical AND/OR goal tree structures [28, 32]. Problem-reduction-style planning, supported by this type of goal structure, is well suited to multi-agent problems requiring coordination. Current Sensible Agent implementations adopt this planning paradigm. The goal trees from which Sensible Agents plan contain templates. Goal-template instantiations result in goal instances called candidate goals.

Candidate goals are those goal instances that are being considered but have not yet been accepted for execution by any agent [25]. Once a Sensible Agent chooses to achieve a candidate goal, this goal becomes an intended goal. The agent who accepts the goal intends to achieve it, and will make efforts to do so. For a complete description of agent intention, see [14]. In general, a Sensible Agent must make decisions about which actions to take or which subgoals to adopt in order to achieve its intended goals. An agent may also assist other agents by making decisions, or helping to make decisions, for their intended goals. Goals for which an agent is making decisions but which are not intended by the agent itself are referred to as external goals.

A Sensible Agent’s autonomy-level assignment for a goal specifies the interaction framework through which decisions are made about how that goal will be achieved. Sensible Agents proceed through two phases when making decisions about how to carry out a goal: (1) a task-determination phase during which subgoals or subtasks designed to carry out a goal are proposed, instantiated, and selected, and (2) a task-allocation phase during which agents are assigned actions or tasks in accordance with the decisions made. Unique autonomy assignments are made at each level of the goal-subgoal hierarchy. Each of these individual autonomy assignments is dynamically adaptable.
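As an illustration only, the two decision phases and the per-level autonomy assignments might be organized as in the following sketch. The planner and reasoner interfaces here are placeholders for domain-specific machinery and are not the published module interfaces.

def decide_for_goal(goal_element, autonomy_reasoner, planner):
    # Each level of the goal/subgoal hierarchy carries its own, dynamically
    # adaptable autonomy assignment (G, D, C).
    assignment = autonomy_reasoner.assignment_for(goal_element)

    # Phase 1: task determination -- propose, instantiate, and select
    # subgoals or subtasks designed to carry out the goal.
    proposals = planner.propose_subgoals(goal_element, assignment)
    selected = planner.select(proposals, assignment)

    # Phase 2: task allocation -- assign actions or tasks in accordance
    # with the decisions made under this assignment.
    for subgoal in selected:
        planner.allocate(subgoal, assignment)
    return selected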

7.4.2. IGS Characteristics. A Sensible Agent represents intended goals in an Intended Goals Structure (IGS) [36]. The IGS is an agent’s representation of the instantiated goals it will attempt to achieve (its own intended goals) as well as any additional goals for which it must make decisions (external goals, intended by other agents). The IGS contains AND-only compositions of these goals and therefore does not represent alternative solutions or future planning strategies as goal trees do. (Agents may also maintain a set of Candidate Goal Structures, which have the same structure as the IGS but represent intended goals in other possible worlds.) The IGS represents what an agent has decided to do up to this point. Characteristics of the IGS include the following:

(1) Elements in the IGS, called goal elements, refer to a single intended goal or set of intended goals (G), which forms the focus of an autonomy assignment.

(2) Goal elements do not represent an agent’s alternatives.

(3) The IGS contains one or more top-level goal elements at all times and maintains the hierarchical structure of goal/subgoal decomposition where appropriate.

(4) A complete autonomy assignment is made for every goal element in the IGS.

Sensible Agents research currently considers only first-order goal elements, which cannot contain other goal elements as components. This constraint requires an agent to dissolve any existing autonomy-level agreement formed around an intended goal before forming a new autonomy-level agreement around that same goal.
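One way to realize these characteristics as a data structure is sketched below in Python. The names and types are assumptions chosen for exposition, not the testbed’s actual representation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GoalElement:
    goals: List[object]                           # the intended goal(s) G that focus the autonomy assignment
    autonomy_assignment: Optional[object] = None  # (G, D, C); completed before decision making begins
    children: List["GoalElement"] = field(default_factory=list)  # AND-only goal/subgoal decomposition

@dataclass
class IntendedGoalsStructure:
    top_level: List[GoalElement] = field(default_factory=list)   # one or more top-level elements at all times

    def all_elements(self):
        # Walk the goal/subgoal hierarchy; no alternatives or future planning
        # strategies are represented.
        stack = list(self.top_level)
        while stack:
            element = stack.pop()
            yield element
            stack.extend(element.children)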

Goal elements may be inserted into the IGS at the top level or under a currently existing goal element in a hierarchy. There are a total of three ways in which goal elements may be inserted into a Sensible Agent’s IGS. First, the system designer may place high-level goal elements in an agent’s IGS prior to system startup. Such goals may reflect the agent’s overall purpose in the system and the self-maintenance tasks the agent must perform to remain functional.

In addition, there are two ways in which goal elements may be inserted into an agent’s IGS during system operation:

Type-I Insertion: Insertion of the goal element is initiated by the agent when it accepts a goal during the task-allocation phase of planning. Type-I insertions reflect agent intention to achieve a goal.

Type-II Insertion: Insertion of the goal element is initiated by the agent when it agrees that the self-agent will make decisions, or help make decisions, for an external goal. Type-II insertions reflect decision-making frameworks centered around external goals.

Type-I insertions occur as the result of goal allocation. The agent either (1) must accept the goal due to an authority constraint or (2) decides to accept the goal on its own, outside any existing autonomy-level agreement. Once a Sensible Agent accepts a goal through either of these methods, it forms an intention to achieve that goal. Goal elements resulting from Type-I insertions do not have a pre-defined autonomy assignment. The agent must make an autonomy assignment before it can begin making decisions about how to achieve the newly inserted goal.

Type-II insertions occur when an agent forms an autonomy-level agreement that requires it to make decisions about external goals. Goal elements that are inserted into an agent’s IGS through a Type-II insertion do not reference any goals intended by the self-agent. Autonomy-level agreements in which the agent agrees to combine an external goal with one of its own (already inserted) goals do not require an additional goal element insertion. In those cases, only the autonomy assignment for the existing goal element is modified.
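The two run-time insertion paths, and the special case in which only an existing assignment is modified, could be sketched as follows. These functions build on the hypothetical GoalElement structure shown earlier; the names are again illustrative.

def type_i_insert(igs, goal, parent=None):
    # Type-I: the agent accepts a goal during task allocation and intends it.
    # No autonomy assignment exists yet; one must be made before planning begins.
    element = GoalElement(goals=[goal], autonomy_assignment=None)
    (parent.children if parent is not None else igs.top_level).append(element)
    return element

def type_ii_insert(igs, external_goal, agreement, parent=None):
    # Type-II: the agent agrees to (help) make decisions for an external goal;
    # the autonomy assignment is carried by the autonomy-level agreement itself.
    element = GoalElement(goals=[external_goal], autonomy_assignment=agreement.assignment)
    (parent.children if parent is not None else igs.top_level).append(element)
    return element

def merge_into_existing(existing_element, agreement):
    # Combining an external goal with an already-inserted goal requires no new
    # goal element; only the existing element's autonomy assignment is modified.
    existing_element.autonomy_assignment = agreement.assignment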

8. Simulation

The remainder of this article presents a simulation scenario demonstrating the impact of adjustable autonomy on problem-solving performance in a multi-agent system.

8.1. Implementation

The Sensible Agent architecture and testbed have been designed to support adjustable autonomy, which has also been called Dynamic Adaptive Autonomy (DAA) by this research [37]. The Sensible Agent architecture consists of four major modules:

• The Perspective Modeler (PM) [4] contains the agent’s explicit model of its local (subjective) viewpoint of the world (including its IGS). The overall model includes the behavioral, declarative, and intentional models of the self-agent (the agent whose perspective is being used), other agents, and the environment. The PM interprets internal and external events and changes its models accordingly. The degree of uncertainty is modeled for each piece of information. Other modules within the self-agent can access the PM for necessary information.

• The Action Planner (AP) [3] interprets domain-specific goals, makes decisions about how to achieve these goals, and executes the generated plans. Domain-specific problem-solving information, strategies, and heuristics are contained inside this module. The AP interacts with the environment and other agents in its system, and it draws information from all other modules in the self-agent.

• The Conflict Resolution Advisor (CRA) [5] identifies, classifies, and generates possible solution strategies for conflicts occurring between the self-agent and other agents. The CRA monitors the AP and PM to identify conflicts. Once a conflict has been detected, it classifies this conflict and offers resolution suggestions to the AP.


• The Autonomy Reasoner (AR) [35] determines the appropriate autonomy level for each of the self-agent’s goals, makes an autonomy assignment for each goal, and reports autonomy constraints to other modules in the self-agent. The AR handles all autonomy adjustments and requests for adjustments made by other agents.

Of the four Sensible Agent modules, the AP currently has the only domain-specific implementation requirements. This enables reuse of the other modules and should allow Sensible Agent technology to be applied to many different types of problems with minimal conversion effort.
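For orientation, the following sketch shows how the four modules might be composed within a single self-agent. The class and method signatures are assumptions made for illustration and do not reproduce the cited module interfaces.

class PerspectiveModeler:
    # Local, subjective model of the world (including the IGS), with per-item uncertainty.
    def update(self, event): ...
    def query(self, item): ...

class ActionPlanner:
    # Domain-specific planning and execution; the only module that must be
    # re-implemented for each application domain.
    def plan(self, goal_element, autonomy_assignment): ...

class ConflictResolutionAdvisor:
    # Identifies and classifies conflicts and suggests resolution strategies to the planner.
    def advise(self, perspective, planner_state): ...

class AutonomyReasoner:
    # Makes and adjusts autonomy assignments and reports autonomy constraints.
    def assignment_for(self, goal_element): ...

class SensibleAgent:
    def __init__(self):
        self.perspective_modeler = PerspectiveModeler()
        self.action_planner = ActionPlanner()
        self.conflict_resolution_advisor = ConflictResolutionAdvisor()
        self.autonomy_reasoner = AutonomyReasoner()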

The Sensible Agent Testbed makes it possible to run repeatable experiments in which Sensible Agent functionality can be evaluated [1]. This environment handles complex modeling issues, produces reasonable visual and numerical output of the current world state, and logs various performance metrics. The current CORBA implementation allows the integration of C++, Java, Lisp, and ModSIM implementations on Solaris, WindowsNT, and Linux platforms. The CORBA Internet Inter-Orb Protocol (IIOP) provides a platform- and language-independent means of interconnecting different ORB implementations. The Sensible Agent testbed is a powerful, extensible system that allows the simulation of many different multi-agent problem-solving tasks.

8.2. Problem Domain

The Sensible Agent testbed situations described in this article use the problem domain of naval radar interference management. A radar is an instrument that detects distant objects and determines their position and velocity. It does this by emitting very high frequency radio waves and analyzing the returning signal reflected from the targets’ surfaces. Radar interference is any form of signal energy detected by a radar that comes from some source other than a reflection of its own emitted wave, but which is indistinguishable from actual return signals. Interference often comes from other radars operating in the same area at similar frequencies. The problem of naval radar interference management consists of maintaining a set of position and frequency relationships among geographically distributed radars such that radar interference is minimized.

Agents in these examples have the same top-level goal “Track Targets in Region,” and they use the same goal tree. Figure 3 shows the portion of the goal tree used by these agents to control radar interference. Each goal template in the goal tree contains one or more typed variables, indicated by values shown in curly braces. For example, the goal template “{Agent} Interference < {threshold}” could be instantiated as “Agent 1 Interference < .0010”. Agents can use two strategies to reduce their interference below a chosen threshold: frequency management or

