
Flexible Problem-Solving Roles for Autonomous Agents

K. S. Barber and C. E. Martin

The Laboratory for Intelligent Processes and Systems Electrical and Computer Engineering

The University of Texas at Austin, ENS 240 Austin, TX 78712

http://www-lips.ece.utexas.edu [email protected]

phone: (512) 471-6152 fax: (512) 471-3652

Abstract

Agent-based technologies can be applied to many aspects of manufacturing. The need for responsive, flexible agents is pervasive in manufacturing environments due to the complex, dynamic nature of manufacturing problems. One critical aspect of agent flexibility is the ability to adapt problem-solving roles to various situations during system operation. This issue is addressed by research on Sensible Agents, capable of Dynamic Adaptive Autonomy. In Sensible Agent-based systems, levels of autonomy constitute descriptions of agent problem-solving roles.

These roles are defined along a spectrum ranging from command-driven, to consensus, to locally autonomous/master. Dynamic Adaptive Autonomy allows Sensible Agents to change autonomy levels during system operation to meet the needs of a particular problem-solving situation. The foundation of Dynamic Adaptive Autonomy is the definition of the various problem-solving roles that agents may fill. This paper defines four autonomy constructs (planning-responsibility, authority-over, commitment, and independence) and presents a formal representation of agent autonomy levels based on these constructs.

Submitted to

Special Issue of Integrated Computer-Aided Engineering on “Agent-Based Manufacturing”

Contact person:

Maria Gini

Department of Computer Science and Engineering 4-192 EE/Csci Building

200 Union St. SE Minneapolis, MN 55455

Phone (612) 625-5582 [email protected]

March 26, 1998

Contact author:

K. Suzanne Barber

The Laboratory for Intelligent Processes and Systems, Electrical and Computer Engineering
The University of Texas at Austin, ENS 240, Austin, TX 78712

http://www.lips.utexas.edu [email protected]

KEYWORDS

Dynamic Adaptive Autonomy, Sensible Agents, flexible problem-solving roles

1. INTRODUCTION

Manufacturing environments are inherently complex and dynamic. These characteristics create many challenges for the automation of manufacturing tasks such as planning and scheduling. The use of agent-based systems offers significant advantages to automated manufacturing, including distribution of processing and control. However, simply applying the agent-based paradigm to manufacturing problems may not be enough to address the real-time demands of production systems. Agent-based systems operating in the manufacturing domain are subject to dynamic situational changes across many dimensions:

• certainty of information held by an agent or acquired by an agent about other agents in the system (e.g. inventory status, product status on the factory floor, machine processing performance);

• resource accessibility for a particular agent (e.g. which tools are available or performing within tolerance);

• goal constraints for multiple goals (e.g. deadlines for goal completion, goal priorities); and

• environmental states (e.g. air quality in a clean room).

Manufacturing therefore requires agent-based problem solving to be flexible and tolerant of faulty information, equipment, and communication links. The research presented in this paper uses Sensible Agent-based systems (Barber, 1996), which have previously been implemented for the military domain of naval radar frequency management (Macfadzean and Barber, 1995), to address these issues in the manufacturing domain. The key point addressed in this paper concerns the ability of Sensible Agents to dynamically adapt the problem-solving roles they play. These problem-solving roles define the interaction frameworks in which agents plan to achieve goals. The process by which agents adapt these roles is called Dynamic Adaptive Autonomy, and it continuously redefines the organizational structure of Sensible Agent-based systems. Specifically, this paper provides a formal representation of agent autonomy, which supports dynamic adaptation of agent problem-solving roles within a multi-agent system.


This paper is structured in the following manner. Section 2 discusses related research. Section 3 presents the Sensible Agent architecture and the foundation of Dynamic Adaptive Autonomy. Section 4 introduces the formal autonomy representation. The Sensible Agent Testbed is introduced in Section 5. Finally, Section 6 discusses the advantages of applying this technology to the manufacturing domain and draws conclusions.

2. RELATED WORK

The organizational structure of agent-based systems, which provides the mechanism through which agents coordinate or cooperate to achieve system goals, has been the subject of much research over the past few decades (Chandrasekaran, 1981; Fox, 1981; Kirn, 1996; Nirenburg and Lesser, 1986; Singh, 1990; Smith, 1980; Werner and Demazeau, 1991; Wesson et al., 1981). Although these studies have shed much light on agent behavior under different problem-solving frameworks, a formal description of generalized (high-level, application-independent) agent problem-solving interaction is lacking. Such a representation is necessary to facilitate the development of increasingly powerful, flexible agent-based systems. This representation should describe the agent’s role in the organizational structure of the system.

Previously adopted representations of agent organization are unable to support the necessary meta-level reasoning about agent problem-solving interactions. Although many representations for organizational structure have been proposed for both self-organizing and statically organized agent-based systems, most such representations rely on application-specific definitions of agent capabilities or functional system roles (Barbuceanu et al., 1998; Glaser and Morignot, 1997; Pattison et al., 1987; Singh, 1990). Others represent organizations as a derivative of the end-product of agent problem-solving (i.e. the collection of ordered, dependent sets of tasks or actions in a plan or process) in addition to a collection of agent beliefs about such actions (Gasser et al., 1989). However, these representations do not capture an agent’s problem-solving role in its organizational structure.

One overall goal of multi-agent-systems research is adaptive self-configuration: allowing agents to reason about and change the organization of their coordination frameworks (Gasser, 1988). Several researchers have made progress toward this objective. Specific research that has contributed to flexible, adaptive coordination is discussed in the following paragraphs.

Durfee and Lesser’s method of communicating partial global plans (PGP) allows independent agents to interact within a system in many different ways (Durfee and Lesser, 1987; Durfee, 1996). This research shows that many styles of cooperation can be implemented using the same mechanism. Although such a mechanism is required for adaptive organization, it is not sufficient: PGP research assumes a statically defined meta-level organization, fixed at system creation, so the ability to dynamically modify agent interactions is not supported.

Organizational self-design (OSD) provides one strategy for adaptive strategic work-allocation and load balancing (Gasser and Ishida, 1991; Ishida et al., 1992). The reorganization primitives provided by OSD dynamically vary the system macro-architecture, while the micro-architecture (the structure of the agents themselves) remains the same. Dynamic reorganization is supported by two primitives: decomposition and composition. Decomposition creates two agents from one, and composition combines two agents into one. Unfortunately, OSD does not allow agents to join and leave problem-solving groups while retaining their identity. Therefore, it does not optimally support fluid reorganization of problem-solving groups.

Organizational fluidity is a measure of how much the organizational structure of the system can change. This measure is affected by (1) how easily individuals can move within an organizational structure and (2) how easily individuals can break away from an existing organizational structure and strike out on their own (Glance and Huberman, 1993). As group size increases, cooperation becomes more difficult. Several experiments have shown that organizations with clusters sustain higher levels of cooperation than flat organizations and that fluid organizations are best overall (Glance and Huberman, 1993). The combination of ease of breaking away and difficulty of moving between clusters enables the highest level of cooperation.

Glaser and Morignot describe a system in which agents can join existing agent societies that have established conventions for agent cooperation (Glaser and Morignot, 1997). This interaction is based on the agent’s capability to fulfill a useful role in that society as well as the benefit the agent can obtain by participating in that society. However, the defined roles are application-specific statements of agent qualifications. An agent’s position in the organizational structure of the system is specified neither explicitly nor in a domain-independent fashion.

Most self-organizing systems rely on explicit, predefined differences in agent behavior and limit the reorganization primitives to a fixed number of predefined behaviors (Gasser and Ishida, 1991; Glance and Huberman, 1993; Ishida et al., 1992). Others are based on adapting application-specific roles that agents can play during problem solving (Glaser and Morignot, 1997). These systems do not explicitly represent the agent’s problem-solving role, which limits their ability to reason about the appropriateness and potential modification of an agent’s problem-solving interactions in an application-independent fashion.


3. SENSIBLE AGENTS

As defined for this research, an “agent” is a system component that works with other system components to perform some function. In the manufacturing domain, the defined boundaries for agent functionality are essentially the result of object-oriented analysis and design for the application problem.

Decisions regarding level of abstraction and agent functional responsibility are made in the system analysis and design phase. For example, a process-planning agent may be defined which encapsulates resource selection, process selection, and costing services. Alternatively, each of these services may be assigned to individual agents (e.g. process selection agent).

In general, and from a domain-independent perspective, agents (1) have the ability to act and perceive at some level, (2) communicate in some fashion with one another, (3) attempt to achieve particular goals or perform particular tasks, and (4) maintain an implicit or explicit model of their own state and the state of their world. Sensible Agents extend these general agent capabilities in two dimensions by taking into consideration the tradeoffs between system and local goals and by dynamically adapting their organizational structure to given situations. This paper focuses on Sensible Agents’ capability to adapt problem-solving roles.

Sensible Agents achieve flexibility and responsiveness by representing and manipulating the interaction frameworks in which they plan to achieve their goals. Agent interactions for planning can be compared along a spectrum of agent autonomy, as shown in Figure 1. An agent’s level of autonomy for a goal specifies the interaction framework in which that goal is planned. Although “autonomy” is often interpreted as an agent’s degree of freedom from human intervention, the extended concept of autonomy in this context refers to an agent’s degree of freedom with respect to other system agents, some of which may be human.

An agent’s autonomy increases from left to right along this spectrum. Although agents may operate at any point along the spectrum, the three discrete autonomy level categories labeled above define the endpoints and midpoint:

Command-driven -- The agent does not plan and must obey orders given by another (master) agent.

Consensus -- The agent works as a team member, sharing planning tasks equally with other agents.

Locally Autonomous / Master -- The agent plans alone and may or may not give orders to other agents.
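For concreteness, the three labeled categories could be encoded as a simple ordered enumeration. This Python sketch is illustrative only; the class and member names are not part of the Sensible Agent implementation:

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Discrete categories on the autonomy spectrum (least to most autonomous)."""
    COMMAND_DRIVEN = 0             # does not plan; obeys orders from a master agent
    CONSENSUS = 1                  # shares planning tasks equally with teammates
    LOCALLY_AUTONOMOUS_MASTER = 2  # plans alone; may give orders to other agents

# Autonomy increases from left to right along the spectrum.
ordered = sorted(AutonomyLevel, key=lambda lvl: lvl.value)
```

Because agents may operate at any point along the spectrum, a real encoding would likely use a continuous value with these three members as reference points.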

Agents can be designed to operate at a single level of autonomy if (1) the application is simple, (2) the designer correctly predicts the types of problems that agents will face, and (3) the environmental context and problem types remain constant. However, for complex applications in dynamic environments (i.e. most manufacturing problems), the appropriate level of autonomy may depend on an agent’s current situation. An agent’s state, its overall set of goals, and/or its environmental context may affect its optimal autonomy level assignment. All of these characteristics may be dynamic and change during system operation.

Figure 1. The Autonomy Spectrum (from least to most autonomous: Command-driven, Consensus, Locally Autonomous/Master).

For example, scheduling agents, which attempt to achieve maximum throughput while directing parts to various process workstations, may perform optimally in different problem-solving configurations due to unforeseen factors such as load imbalance, the introduction of a new high-priority job, or machine failure. A Sensible Agent maintains solution quality in dynamic environments by using a technique called Dynamic Adaptive Autonomy (DAA). DAA is a capability developed by this research that allows a Sensible Agent to modify its autonomy level for any goal during system operation. The process through which an agent chooses the most appropriate autonomy level for a given situation is called autonomy reasoning.

Sensible Agent capabilities, including Dynamic Adaptive Autonomy, are supported by the Sensible Agent architecture depicted in Figure 2. Each Sensible Agent contains four major modules, as described in the following paragraphs:

Figure 2. Sensible Agent Architecture. (Each Sensible Agent comprises four modules: an Action Planner, a Perspective Modeler, an Autonomy Reasoner, and a Conflict Resolution Advisor, together with interfaces for interaction with other system agents and the environment.)

1. The Perspective Modeler (PM) contains the agent’s explicit model of its local (subjective) viewpoint of the world. The overall model includes the behavioral, declarative, and intentional models of the self-agent (the agent whose perspective is being used), other agents, and the environment. The PM interprets internal and external events and changes its models accordingly. A degree of uncertainty is modeled with each piece of information. Other modules within the self-agent can access the PM for necessary information.

2. The Action Planner (AP) interprets domain-specific goals, plans to achieve these goals, and executes the generated plans. Domain-specific problem solving information, strategies, and heuristics are contained inside this module. The AP interacts with the environment and other agents in its system, and it draws information from all other modules in the self-agent.


3. The Conflict Resolution Advisor (CRA) identifies, classifies, and generates possible solutions for conflicts occurring between the self-agent and other agents. The CRA monitors the AP and PM to identify conflicts. Once a conflict has been detected, it classifies this conflict and offers resolution suggestions to the AP or AR.

4. The Autonomy Reasoner (AR) determines the appropriate autonomy level for each of the self-agent’s goals, assigns an autonomy level to each goal, and reports autonomy-level constraints to other modules in the self-agent. The AR handles all autonomy-level transitions and requests for transitions made by other agents.

Of the four Sensible Agent modules, only the AP has domain-specific implementation requirements. This enables reuse of the other modules and should allow Sensible Agent technology to be applied to many different manufacturing problems with minimal conversion effort. Although the content of the models in the PM would be specific to each application, the representation schemata themselves would remain domain independent. In the same fashion, the algorithms residing in the CRA, PM, and AR are domain independent. For example, the algorithms for detecting, classifying, and evaluating solution strategies for conflict resolution in the CRA operate on domain-independent data structures. The remainder of this paper focuses on the domain-independent representation of agent autonomy, assigned by the AR. Not only does this representation allow the AR to function across domains, but it also serves several other essential functions, which are discussed in the following paragraphs.
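The four-module decomposition can be sketched as follows. This is a minimal illustrative skeleton, not the actual Sensible Agent API: the class interfaces, the belief encoding, and the "blocked goal" conflict rule in the CRA are all assumptions made for the example.

```python
class PerspectiveModeler:
    """Domain-independent: holds the self-agent's subjective beliefs about its world."""
    def __init__(self):
        self.beliefs = {}  # subject -> (believed state, degree of certainty)

    def update(self, subject, belief, certainty=1.0):
        # Interpret an internal or external event and revise the model.
        self.beliefs[subject] = (belief, certainty)

class ActionPlanner:
    """Domain-specific: interprets goals, plans for them, and executes the plans."""
    def __init__(self):
        self.goals = []

    def adopt(self, goal):
        self.goals.append(goal)

class ConflictResolutionAdvisor:
    """Domain-independent: detects and classifies conflicts by monitoring AP and PM."""
    def detect(self, modeler, planner):
        # Illustrative rule: a goal conflicts if the PM believes it is blocked.
        return [g for g in planner.goals
                if modeler.beliefs.get(g, (None,))[0] == "blocked"]

class AutonomyReasoner:
    """Domain-independent: assigns an autonomy level to each of the agent's goals."""
    def __init__(self):
        self.levels = {}  # goal -> autonomy-level assignment

    def assign(self, goal, level):
        self.levels[goal] = level

# Wiring the modules together for one self-agent:
pm, ap, cra, ar = PerspectiveModeler(), ActionPlanner(), ConflictResolutionAdvisor(), AutonomyReasoner()
ap.adopt("g0")
pm.update("g0", "blocked", certainty=0.8)
conflicts = cra.detect(pm, ap)
```

Note that only `ActionPlanner` would need to be rewritten per application; the other three classes operate on generic data structures, mirroring the reuse argument above.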

A formal representation of agent autonomy is necessary to provide a shared interpretation of agent behavior: an agent must be able to model the interaction styles of other agents in its system in order to interact with them. Agent behavior at each autonomy level should be uniform among all system agents because system agents must be able to form expectations about how other agents will behave at certain autonomy levels, and those other agents should be capable of conforming to those expectations. Because autonomy levels concern the planning interactions of system agents, these agents must agree about their interaction frameworks. A formal representation of these frameworks is essential for these purposes.

An autonomy representation is also required to guide agent planning (i.e. the agent’s AP must be able to interpret autonomy-level assignments so that it can perform at a given level as opposed to some other level).

4. AUTONOMY REPRESENTATION

In contrast to previously employed representations of agent organization, the autonomy representation presented here focuses on domain-independent, high-level descriptions of problem-solving interactions among individual agents. This autonomy-level representation describes the organizational qualities rather than the content of such interactions. Autonomy levels are represented by four autonomy constructs:

• Planning-Responsibility: a measure of how much the agent must plan for a goal.

• Authority-Over: a measure of the agent’s ability to access system resources in pursuit of a goal.

• Commitment: a measure of the extent to which a goal must be achieved.

• Independence: a measure of how freely the agent can plan for a goal.

Therefore, an autonomy level is a 4-tuple (R, A, C, i), where R is planning-responsibility, A is authority-over, C is commitment, and i is independence. Together these constructs provide a powerful and flexible mechanism for specifying agent problem-solving roles within an agent-based system. The mapping from these constructs to the autonomy spectrum shown above is given in Section 4.1. Sensible Agent autonomy is assigned through autonomy-level agreements with other agents, establishing a planning framework for some subset of goals in a system. These agreements may be implicit for statically organized systems, or they may be negotiated explicitly in dynamically organized systems. An agent may be involved in more than one planning framework at a time (i.e. take a distinct problem-solving role with respect to each goal). Through autonomy-level agreements, agents agree to interact in a specific manner toward a specific objective. This manner is specified by problem-solving roles and represented by an assignment to the autonomy constructs.
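The 4-tuple (R, A, C, i) could be sketched as a plain record. The field names follow the paper's constructs, but the concrete types (tuples, a frozen set, floats for commitment and independence) and the example values are assumptions made for illustration:

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class AutonomyLevelAssignment:
    """An autonomy level as the 4-tuple (R, A, C, i)."""
    R: Tuple              # planning-responsibility: (p_G_a, G, L), see Section 4.1
    A: FrozenSet[str]     # authority-over: agents to whom tasks can be allocated
    C: float              # commitment: extent to which the goal must be achieved
    i: float              # independence: how freely the agent can plan

# Hypothetical assignment for a master agent (PPG) planning goal g0 alone,
# with authority over itself and a command-driven RS agent:
level = AutonomyLevelAssignment(
    R=(0.20, (("g0", 1.0),), (("PPG", 1.0),)),
    A=frozenset({"PPG", "RS"}),
    C=1.0,
    i=1.0,
)
```

Making the record immutable (`frozen=True`) reflects the idea that an autonomy level is changed by negotiating a new agreement rather than by mutating the old one.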

The following discussion of the individual autonomy constructs refers to agent goals and subgoals. The goal/subgoal relationship reflects a task-reduction planning paradigm, which is common in work on agent-based systems (Jennings, 1993; Lesser, 1991). Autonomy levels can be assigned at each step of task reduction (to each goal, its subgoals, and any further subgoals those subgoals may have). However, the nature of the autonomy constructs does not restrict their applicability to the goal/subgoal paradigm alone; autonomy levels can also be applied at any single level.

Although the complete specification of a Sensible Agent-based manufacturing application is beyond the scope of this paper, we provide an abstract description of a Sensible Agent-based process planning system to facilitate the following discussion. This system is composed of four primary agents:

• a Resource Selector (RS) selects personnel and equipment to execute specified processes,

• a Process Selector (PS) determines the processes required to produce manufacturing features and, consequently, a manufactured part,

• a Cost Estimator (CE) estimates labor hours as well as equipment and material costs given selected resources and processes, and

• a Process Plan Generator (PPG) assembles process plans to include selected processes with associated costs and resources.

Additionally, agents can be assigned to respective resources on the factory floor (e.g. material handling devices, robots, machine tools, and machine operators) where each of these agents monitors resource availability and status, initiates command instructions, and monitors execution. This example manufacturing application will be used throughout the following sub-sections to demonstrate the use of the autonomy constructs.

4.1 Planning-Responsibility

The planning-responsibility assignment maps directly to the autonomy spectrum. The more planning-responsibility an agent is assigned relative to other agents in its planning framework, the more autonomy that agent has. Planning-responsibility is an essential autonomy construct because it tells an agent which goals to achieve or plan for, how much relative effort to spend on each of those goals, and what its planning interactions with other agents should be.

A Sensible Agent may accept planning-responsibility for a goal it does not, itself, intend to achieve. In this context, planning for a goal refers to the process by which subgoals (or plan steps) are created, suggested, selected, and allocated among agents.

Let a represent an agent.

Let p_a be a number representing the total amount of a’s planning-resources.

Based on its capabilities, each agent has a finite amount of resources that can be used per unit time for planning. Planning-resources may include computational cycles (used for searching a problem space) or communication bandwidth (used for gathering information and negotiating with other agents). Each resource used for planning can be assigned an associated value in terms of planning-resource units. The valuation for each type of planning-resource will vary across problem domains.

Let g_i^a represent the ith goal intended by a.

Intention is a relationship between an agent and a goal (Cohen and Levesque, 1990). If an agent intends a goal, it will endeavor to achieve that goal and will, in general, perform actions toward that end. An agent may plan for goals it intends to achieve as well as for goals that other agents intend to achieve. For an agent to plan more than one goal at a time, its planning-resources must be allocated among all of these goals. Goals for which the agent does not plan do not receive planning-resource allocations. Based on the agent’s priorities, different portions of the agent’s planning-resources can be allocated to different goals. This allocation can be changed during system operation. The allocation of planning-resources is therefore a mechanism for implementing dynamic prioritization of an agent’s goals and communicating the effects of this prioritization to other agents.
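The allocation scheme above (total resources p_a, a fraction per framework, and per-goal fractions within the framework) can be sketched numerically. The helper below is illustrative; the concrete numbers and the strict sum-to-one check are assumptions:

```python
def planning_resources_for_goal(p_a, p_G_a, goal_fractions, goal):
    """Planning-resource units agent a devotes to one intended goal.

    p_a            -- a's total planning-resources per unit time
    p_G_a          -- fraction of p_a allocated to the framework G
    goal_fractions -- per-goal fractions within G (assumed to sum to 1.0)
    goal           -- the intended goal of interest
    """
    assert abs(sum(goal_fractions.values()) - 1.0) < 1e-9, \
        "fractions within G must cover all constituent intended goals"
    return p_a * p_G_a * goal_fractions[goal]

# An agent with 100 planning-resource units gives 20% to framework G,
# split 0.7/0.3 between two constituent intended goals:
units_g0 = planning_resources_for_goal(100.0, 0.20, {"g0": 0.7, "g1": 0.3}, "g0")
```

Reweighting `goal_fractions` at run time is exactly the dynamic-prioritization mechanism described in the text: other agents can read the new fractions as a statement of the agent's current priorities.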

Let the tuple (g_i^{a1}, p^a_{g_i^{a1}}) represent agent a’s allocation of planning-resources to the intended goal g_i^{a1}.

If a = a1, then the tuple defined above represents an allocation of planning-resources to a goal that a itself intends to achieve.

Let G be a set of tuples:

G = { [(g_{i0}^a, p^a_{g_{i0}^a})], (g_{i1}^{a1}, p^a_{g_{i1}^{a1}}), ..., (g_{in}^{an}, p^a_{g_{in}^{an}}) }

corresponding to the set of intended goals (and the distribution of a’s planning-resources over these goals) around which a particular agent-interaction framework is centered. The “[ ]” notation indicates that the first tuple is optional.

Therefore, G refers to some subset of all the intended goals in the system. If G does not contain a goal intended by a (i.e. the first tuple listed above is not included), then G represents a collection of goals that agent a is helping other agents plan. If the number of tuples in G is one (|G| = 1), then a single constituent intended goal is planned within the framework. If |G| > 1, then all constituent intended goals must be planned for concurrently and consistently. In the remainder of this paper, the term “goal” will refer to some G, and the term “intended goal” will refer to some g_i^a.

Let p_G^a be the percentage of p_a that a allocates to G, overall.

Therefore, p_G^a represents the total amount of planning-resources available to each of a’s planning frameworks. These planning-resources are further allocated among the constituent intended goals by the p^a_{g_i^{a1}} assignments, as described above. Therefore, each p^a_{g_i^{a1}} indicates a percentage of the planning-resources allocated to G. This hierarchical allocation of planning-resources enables Sensible Agents to employ efficient prioritization algorithms.

Let r_a be the percentage of decision-making power, or voting power, that a has within a planning framework, as compared to the other agents involved in that planning framework.

Let L be a set of tuples:

L = { (a_1, r_{a_1}), (a_2, r_{a_2}), ..., (a_n, r_{a_n}) }

representing the agents who are planning within a particular interaction framework.

L represents the interaction bonds through which planning frameworks are realized. Each agent in the set may play an equal part in planning for the goal (true consensus), or some agents may have more voting power than others. This flexibility results from the ratio assignments (r_a) represented in L, which reflect each agent’s strength in the planning group. Any number of agents in the set may have the same decision-making power (e.g. two agents in true consensus would each have a ratio assignment of 0.5). The ratio assignment is based on individual agent capabilities with respect to the problem at hand and is determined during the autonomy-level-agreement phase of autonomy-level assignment.
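The ratio assignments r_a can be read as vote weights when the planning group must choose among alternatives. The tallying rule below (plurality by total voting power) is an assumption for illustration; the paper does not prescribe a specific decision procedure:

```python
def weighted_choice(L, votes):
    """Pick the option backed by the greatest total voting power.

    L     -- dict mapping each planning agent to its ratio assignment r_a
             (dict form used here in place of the paper's set-of-tuples notation)
    votes -- dict mapping each agent to the option it supports
    """
    totals = {}
    for agent, option in votes.items():
        totals[option] = totals.get(option, 0.0) + L[agent]
    return max(totals, key=totals.get)

# Three agents in true consensus, each with 1/3 voting power:
L = {"RS": 1/3, "PS": 1/3, "CE": 1/3}
winner = weighted_choice(L, {"RS": "plan_x", "PS": "plan_x", "CE": "plan_y"})
```

With unequal ratios the same function models a dominant planner: an agent holding r_a > 0.5 decides alone regardless of the other votes.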

Finally, let R represent planning-responsibility.

Therefore, R is (p_G^a, G, L), where p_G^a represents a’s allocation of planning-resources to G, G represents the set of constituent intended goals to which the planning framework applies (and the distribution of planning-resources across these constituent intended goals), and L represents the set of agents who are planning for G and their relative decision-making power. Here, a is referred to as the self-agent, the agent from whose perspective the planning-responsibility assignment is made.

In the agent-based manufacturing system described above, the PPG agent may act as a master agent in most cases, coordinating the process-planning task (giving orders to and receiving responses from the other agents, who would be command-driven). An example planning-responsibility assignment to a particular goal, g_0, from the PPG’s (master’s) perspective would be (p_G^a = 0.20, G = {(g_0, 1)}, L = {(PPG, 1)}), and the corresponding assignment from the RS’s (command-driven agent’s) perspective may be (p_G^a = 0.0, G = {(g_0, 0.0)}, L = {(PPG, 1)}). Notice that both agents are working within a consistent planning framework on the same goal. If, for some reason, the PPG agent fails, forcing the other three agents to act in consensus to plan for this goal, the planning-responsibility assignment to this goal from the RS’s perspective may be (p_G^a = 0.15, G = {(g_0, 1)}, L = {(RS, .33), (PS, .33), (CE, .33)}). It is important to note that under Dynamic Adaptive Autonomy (1) the goal, g_0, will still be pursued even in the face of the PPG’s failure and (2) the “best” organization is selected given a current assessment of the dynamic situation. Although the master assignment for the PPG was initially considered optimal, its failure forces another, now optimal, organization in which the RS, PS, and CE agents act in consensus with respect to the goal g_0.
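The failure scenario can be sketched as a reassignment of the planning-responsibility construct. The helper below is illustrative: it encodes one possible reorganization policy (replace a failed master with an equal-power consensus of survivors), using a dict for L in place of the paper's set-of-tuples notation:

```python
def reorganize_on_failure(R, failed_agent, surviving_agents):
    """Replace a failed planner with an equal-power consensus of survivors.

    R is the planning-responsibility triple (p_G_a, G, L), where
    L is a dict mapping each planning agent to its voting power r_a.
    """
    p_G_a, G, L = R
    if failed_agent not in L:
        return R  # the failed agent was not planning here; framework unaffected
    share = 1.0 / len(surviving_agents)
    new_L = {agent: share for agent in surviving_agents}
    return (p_G_a, G, new_L)

# PPG was sole master for g0; on its failure RS, PS, and CE take over in consensus.
R_master = (0.20, {("g0", 1.0)}, {"PPG": 1.0})
p, G, L = reorganize_on_failure(R_master, "PPG", ["RS", "PS", "CE"])
```

Note that G is unchanged by the reorganization: the goal g_0 is still pursued, only the planning framework around it differs, matching point (1) above.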

4.2 Authority-Over

Tracking resource allocation and the power to assign goals is also essential for modeling organizational interactions. In Sensible Agent-based systems, this tracking is achieved through the authority-over autonomy construct. Changing autonomy level assignments for specific goals gives an agent more or less access to resources connected to that goal. This occurs through the formation or dissolution of interaction links to other agents, and therefore to the resources they control. The authority-over construct represents the privileges an agent has been granted with respect to the resources controlled by other agents. The authority-over construct serves the purpose of allowing an agent to understand how planning interaction frameworks affect access to shared resources. Through resource modeling, an agent can determine how it may gain access to desired resources, and it can attempt to form autonomy-level agreements with certain agents to bring about this access.


The authority-over construct, A, denotes the set of agents to whom tasks can be readily allocated during planning within the confines of the established planning framework. Therefore, the authority-over construct enforces task allocation privileges. The agents planning for a certain goal can plan to allocate tasks to the agents listed in the authority-over construct for that goal and thereby use the system resources controlled by these agents. In effect, an agent has access to all the system resources controlled by all the agents listed in the authority-over construct.

The assignment to A can be based on the planning-responsibility assignment for the same goal. For example, the agents that may appear in A for a particular autonomy-level assignment (R, A, C, i) may include every agent who is planning plus every agent whose intended goal is being planned:

A = {a1, a2, ..., an},

where ∀ai (((ai, rai) ∈ L) ⇒ ai ∈ A) and ∀aj (((giaj, pgiaj) ∈ G) ⇒ aj ∈ A).

The instantiation of the authority-over construct can be extended from this example (e.g., a single agent may be designed with authority over all system agents whether or not it is helping them plan for a goal they intend to achieve). Together, the planning-responsibility and authority-over constructs describe the interactions of agents during the planning process. They differ in that the planning-responsibility construct indicates which agents decide which tasks are to be performed and how they are allocated, while the authority-over construct designates which agents are required to accept a task allocation from that planning group.
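The example derivation of A can be sketched as follows. This is a minimal illustration under assumed tuple shapes that mirror the paper's examples: L as (agent, responsibility) pairs, G as (goal, proportion) pairs, with each goal recording the agent that intends it.

```python
from collections import namedtuple

# A goal records the agent that intends it (an assumption for illustration).
Goal = namedtuple("Goal", ["name", "intending_agent"])

def authority_over(L, G):
    """A contains every planning agent in L plus every agent whose
    intended goal appears in G."""
    planners = {agent for agent, _resp in L}
    intenders = {g.intending_agent for g, _prop in G}
    return planners | intenders

# PPG as master, planning for the RS's intended goal g0:
L = {("PPG", 1)}
G = {(Goal("g0", "RS"), 1)}
# authority_over(L, G) == {"PPG", "RS"}, matching A = {PPG, RS} in the text
```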

Given the Sensible Agent manufacturing application described above, if the PPG agent is the master for goal g0 and the RS agent is command-driven, as shown above, the authority-over assignment from both agents’ perspectives could be A = {PPG, RS}. If the RS, PS, and CE were in consensus for goal g0, as also shown above, the authority-over assignment from all three agents’ perspectives could be A = {RS, PS, CE}.

4.3 Commitment

Modeling agent commitment is seen by many researchers as fundamental to agent coordination and interaction (Castelfranchi, 1995; Cohen and Levesque, 1990; Fikes, 1982; Jennings, 1993). In order to participate effectively in system operations, agents must be able to model and predict the behavior of other system agents to some extent. When agents form problem-solving groups, dependence on the actions of other agents increases. System agents must be able to rely on their problem-solving partners to carry out a distributed plan. The commitment construct provides this assurance. The commitment construct has two major components representing an agent’s commitment to its goal (goal commitment) and the agent’s commitment to the planning-interaction framework under which the goal is planned (autonomy-level commitment).

Goal commitment refers to a relation between an agent and a goal or action. If an agent is committed to a goal, it will endeavor to achieve it. An agent may break a goal commitment by giving up a goal that it had previously intended to achieve. Autonomy-level commitments reflect an agent’s commitment to participate in a particular planning-framework in pursuit of a goal. An agent may break an autonomy-level commitment by reneging on a previously established autonomy-level agreement.

Commitment itself is usually considered a binary concept; either an agent is committed to some objective or it is not committed to that objective. An agent’s behavior with respect to its objectives provides the actual meaning of commitment for the agent. However, agents can be implemented that have varying levels of commitment for each goal (Durfee and Lesser, 1987; Sandholm and Lesser, 1995). Commitment values in the commitment construct are integer assignments in the interval [1, 4].

Agents must pay a price when they break their commitments. The characteristics of these costs will vary greatly across application domains. Correct implementation of the commitment construct simply requires that the cost increase as the level of commitment increases.

Goal commitment is represented within the commitment construct by a commitment level and a convention for each goal. The commitment level is represented by an integer value, c, where c = 1, 2, 3, or 4, as described above. Conventions are the conditions governing the reconsideration of agent commitments (Jennings, 1996). Conventions allow an agent to give up goals that have not been achieved without paying a penalty. An agent may give up a goal when it is no longer motivated to achieve it. To act on this convention, an agent must explicitly represent the conditions that justify the goal. These conditions may be represented as n-order logic or temporal logic formulas. The convention must also describe the actions the agent must take if it does give up the goal.
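As a concrete (and hypothetical) reading of these conventions: a goal may be dropped without penalty once its justification conditions no longer hold, and the condition-action pairs determine which messages must then be sent. The predicate and message shapes below are illustrative assumptions, not the paper's representation.

```python
def may_drop_without_penalty(J, state):
    """The goal is no longer justified if any predicate in J fails."""
    return not all(cond(state) for cond in J)

def messages_on_drop(M, state):
    """M pairs a condition with the messages owed if the goal is dropped."""
    return [msg for cond, msgs in M for msg in msgs if cond(state)]

# A goal justified only while ToolA is in service:
J = [lambda s: s["toolA_in_service"]]
M = [(lambda s: True, ["inform PPG that g1 is abandoned"])]
state = {"toolA_in_service": False}
# may_drop_without_penalty(J, state) is True here, but the agent still owes
# the message in M when it actually gives up the goal.
```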

The representation for the goal commitment component of the commitment construct is the tuple (c, J, M). In this representation, c is an integer representing the agent’s level of goal commitment to the a particular goal; J is the set of conditions, represented as logical predicates, that justify the goal and thus motivate the agent to achieve the goal; and M is a set of condition-action pairs. M represents conditions as logical predicates based on an agent’s state. The actions specified in M are most often a set of messages that the agent must send if it gives up the goal. The representation of agent messages is implementation dependent but must specify message content and recipients. A system designer can


Autonomy-level commitment is also represented within the commitment construct by a commitment level and a convention for each goal. Parallel to the representation for goal commitment, autonomy-level commitment is represented by the tuple (cA, JA, MA), where cA is an integer value (again 1, 2, 3, or 4) representing the autonomy-level commitment level. An agent must pay a price for breaking an autonomy-level agreement that establishes the planning interaction framework for a goal, and cA controls this cost. When an agent makes a new planning-responsibility assignment to a goal, a new autonomy-level commitment assignment must also be established. JA and MA represent the justification for the autonomy-level commitment and the corresponding condition-message pairs, as described above. The form of conventions for autonomy-level commitments will vary across systems; these conventions play a more diverse role in system operation than do those for goal commitments. For example, conventions for autonomy-level commitment may specify rules triggering re-organization under certain conditions, such as loss of communication or suspicion of failure of an external agent.

In summary, the commitment construct, C, is represented by ((c, J, M), (cA, JA, MA)), combining goal commitment and autonomy-level commitment. No goal may have a NULL autonomy-level commitment assignment.

However, a NULL goal commitment assignment is possible if the agent is simply helping another agent plan for its goal. Although an agent may have a NULL goal commitment for a particular goal, the agent can be motivated by an autonomy-level commitment to participate in the planning framework for the goal. This may in turn motivate the agent to accept task allocations as a result of this goal. The agent then assigns goal commitments to any such subgoals. Clearly, both types of commitment can motivate an agent to assign subgoals to itself under the goal in question.
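The NULL rules above can be captured in a small data structure. This is a sketch under assumed field names, not the Sensible Agent implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Predicate = Callable[[dict], bool]

@dataclass
class CommitmentPart:
    """One half of C: (c, J, M) or (cA, JA, MA)."""
    level: int                                       # c or cA, in {1, 2, 3, 4}
    justification: List[Predicate]                   # J or JA
    conventions: List[Tuple[Predicate, List[str]]]   # M or MA: condition-message pairs

    def __post_init__(self):
        if self.level not in (1, 2, 3, 4):
            raise ValueError("commitment level must be in [1, 4]")

@dataclass
class Commitment:
    goal: Optional[CommitmentPart]    # NULL allowed when only helping plan
    autonomy_level: CommitmentPart    # never NULL

    def __post_init__(self):
        if self.autonomy_level is None:
            raise ValueError("autonomy-level commitment may not be NULL")

# An agent helping another agent plan: a NULL goal commitment is legal.
helper = Commitment(goal=None, autonomy_level=CommitmentPart(2, [], []))
```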


Given the agent-based system described above, an example commitment assignment may be made as follows: if the PPG agent is the master for g0 and assigns a subgoal, g1, to the RS agent, who is command-driven, then the commitment immediately assigned by the RS agent to g1 may be ((c = 4, J = {T}, M = ∅), (cA = 1, JA = {⊥}, MA = ∅)). Note that the assigned goal commitment level reflects complete commitment to this goal: the command-driven agent was ordered to achieve the goal and must carry it out. In this example, J = {T} reflects that the goal is always justified, so the RS agent can never avoid paying a penalty for giving up the goal, and no messages or actions (M = ∅) are required if the agent does give up the goal. Note also that when the RS agent first receives the subgoal g1, it has not yet formed any autonomy-level agreements for that goal. Although the RS agent was command-driven for g0, it may be locally autonomous for g1. The autonomy-level commitments are meaningless (reflected by JA = {⊥}) until an autonomy level assignment for goal g1 is determined.

4.4 Independence

Finally, the independence construct provides an index to domain-specific system constraints on agent planning. In general, agents plan under constraints imposed by their system, goals, and environment. The interpretation of the independence construct relies on two concepts:

(1) bounding the impact of an agent’s planning choices on its overall system and (2) defining levels of system constraints.

Independence is represented as an integer index assigned to each goal as part of its autonomy level. There are four possible independence indices: 1, 2, 3, and 4. The set of allowable independence assignments depends on the goal’s autonomy-level classification (see Figure 3). For example, possible assignments for a locally autonomous or master agent may include 1, 2, 3, and 4. Possible assignments for consensus may be limited to 1 and 2. Note that the independence construct is not applicable to the command-driven classification because command-driven agents do not plan at all.

Figure 3. Possible Independence Assignments for Each Autonomy Level Classification.

The assigned independence index dictates agent behavior with respect to the two independence concepts listed above. The first concept relies on an evaluation of the inherent system utility of subgoals being considered for adoption. Sensible Agents evaluate their goals based on system and local utility measures (Usystem, Uagent) (Goel et al., 1996). A Sensible Agent makes decisions about system versus local tradeoffs based on these utilities. The allowable range of Usystem , for candidate goals the agent considers during planning, is bounded based on the independence assignment for a goal. At low levels of independence, an agent is forced to weigh the value of Usystem over that of Uagent when making tradeoffs during planning. A threshold for minimum allowed Usystem for acceptable goals is also set. As independence increases the agent may increase the weight for Uagent and decrease the weight for Usystem. Under the maximum independence assignment, Uagent may be considered alone, and the minimum value of Usystem may be unbounded. In essence, increasing independence decreases the acceptable ratio of Usystem/Uagent, and it also decreases the minimum threshold of acceptable Usystem values.
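One way to realize this bounding is sketched below, with invented weight tables and thresholds; the paper does not prescribe specific values, so every number here is an assumption made for illustration.

```python
# Hypothetical mapping from independence index (1-4) to the Usystem weight
# and to the minimum acceptable Usystem; both relax as independence grows.
W_SYSTEM = {1: 0.9, 2: 0.7, 3: 0.4, 4: 0.0}
MIN_U_SYSTEM = {1: 0.5, 2: 0.3, 3: 0.1, 4: float("-inf")}

def goal_score(u_system, u_agent, independence):
    """Weighted utility: low independence favors Usystem, high favors Uagent."""
    w = W_SYSTEM[independence]
    return w * u_system + (1.0 - w) * u_agent

def acceptable(u_system, independence):
    """Candidate goals must clear the Usystem floor for this independence level."""
    return u_system >= MIN_U_SYSTEM[independence]
```

Under these assumed tables, a candidate goal with Usystem = 0.2 is rejected outright at independence 1, while at independence 4 the floor vanishes and Uagent alone drives the score.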


The second consideration for agent independence relies on the notion of flexible system constraints, which is based on the concept of social law. Social laws can be thought of as domain-specific constraints describing acceptable or expected agent behaviors. Social laws reduce the possibility of conflict among agent plans and increase the efficiency of planning by reducing the number of options an agent considers (the branching factor of its search) (Briggs and Cook, 1995). Constraints restricting an agent’s behavior also increase its predictability, allowing other agents to form better plans.

Permissive social laws may result in anarchy, but restrictive social laws may artificially constrain action resulting in inefficient or ineffective behavior. One way to strike a balance between agent freedom and predictability is to allow agents to relax the constraints imposed by their social laws when necessary (Briggs and Cook, 1995). This solution is adopted by Sensible Agent implementations. As an agent’s independence level increases, the social laws in effect become less constraining (see Figure 4).

Figure 4. Varying Levels of System Constraints Indexed by Independence Assignment.

Overall, the independence construct provides a vehicle through which Sensible Agents can determine the applicability of the system constraints under which they operate. By manipulating independence assignments through DAA, an agent can often overcome constraints that exist in the system for general purposes but that are unreasonable for particular situations. An example of such a situation involves a transport agent that delivers parts to a machine, ToolA, which is functionally identical to ToolB, used by other system agents. If ToolA is functioning properly, the transport agent operates at a low level of independence and delivers its parts predictably to ToolA. However, if ToolA goes out of service, the transport agent could increase its independence and operate with a less restrictive set of social laws that allow it to deliver to either ToolA or ToolB. Even though providing this choice makes the agent less predictable for other system agents, the overall system goal of optimal production is satisfied.
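The transport-agent scenario can be sketched as a table of social laws indexed by independence, in the spirit of Figure 4. The category names, options, and level boundaries below are invented for illustration.

```python
# Permitted options per action category, indexed by independence (1 = most
# constrained, 4 = least constrained). All values are illustrative assumptions.
SOCIAL_LAWS = {
    "delivery_target": {1: {"ToolA"}, 2: {"ToolA"},
                        3: {"ToolA", "ToolB"}, 4: {"ToolA", "ToolB"}},
}

def permitted(category, option, independence):
    """Check an action against the social law in force at this independence level."""
    return option in SOCIAL_LAWS[category][independence]

# At low independence the transport agent may deliver only to ToolA;
# raising independence to 3 relaxes the law to allow ToolB as well.
```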

5. SENSIBLE AGENT TESTBED

This section introduces the Sensible Agent Testbed, which is the Sensible Agent implementation integrated with a software simulation environment. The purpose of this testbed is to provide an environment for easy specification of agent systems, facilitation of experimentation, and rapid integration of new technology such as planning or conflict resolution algorithms. Additionally, the Sensible Agent Testbed allows visualization of module functions and communication internal to a Sensible Agent as well as the operation of a system of agents as they work to achieve goals. The Sensible Agent Testbed is intended for use by our research group at The University of Texas as well as third-party users. Thus, significant effort has been made to formally specify the Sensible Agent architecture and the realization of its implementation in the testbed.


Figure 5. Structure of Sensible Agent Testbed.

The structure of the Sensible Agent Testbed is presented in Figure 5. We use the Object Management Group’s (OMG) Common Object Request Broker Architecture (CORBA) standards and the OMG Interface Definition Language (IDL) to formally define the interactions among agents and their modules (Barber, 1998). The use of IDL permits continued Sensible Agent evolution in an implementation-language-independent manner and facilitates parallel research initiatives within the framework of an integrated distributed object system. Sensible Agent modules and the environment simulator are implemented as CORBA objects that currently communicate through the Xerox Inter-Language Unification (ILU) distributed object environment (i.e., an Object Request Broker, or ORB). This implementation allows the integration of multiple implementation languages (C++, Java, Lisp, and ModSIM) simultaneously on Solaris, Windows NT, and Linux platforms.

Additionally, ORB support of the CORBA Internet Inter-Orb Protocol (IIOP) standard will allow the connection of external ORBs to the Sensible Agent Testbed, further enhancing its extensibility.

The use of IIOP combined with the OMG’s CORBAServices Object Naming Services (COSNaming) allows the presentation of a public interface to the Sensible Agent Testbed, permitting external entities to


6. DISCUSSION AND CONCLUSIONS

The need for responsive, flexible, and sensible agents is pervasive in manufacturing environments due to the complex, dynamic nature of manufacturing problems. Sensible Agents can be applied to many aspects of manufacturing, from the incorporation of manufacturing knowledge in design to shop-floor control, and the capability of Dynamic Adaptive Autonomy may prove critical to the implementation of effective, efficient manufacturing via multi-agent systems. System designers cannot predict every combination of run-time conditions on the factory floor. Tool status, operator availability, raw-material quality and accessibility, unscheduled maintenance, and machine wear can all introduce unexpected problems for planning in manufacturing environments. To maintain both high productivity and market responsiveness, manufacturing systems must be adaptive and flexible, and Dynamic Adaptive Autonomy can provide agent systems that are both.

The basis for DAA is the definition of the different problem-solving roles that agents may fill. The representation of agent problem-solving roles is embodied by the autonomy constructs (R, A, C, i). Together, planning-responsibility (R) and authority-over (A) specify the interaction framework in which an agent’s goal is planned and under which sub-tasks are allocated across system agents. Commitment (C) provides the foundation for agent-based interaction toward a set of application-specific goals. Independence (i) allows agents to consider and control the system and local impacts of their planning processes. Overall, the autonomy representation enables the following capabilities for agent-based systems: (1) it ensures consistent agent behavior within autonomy levels, (2) it documents the implications of agent interaction agreements, (3) it provides guidance for agent planning, and (4) it enables agents to reason about autonomy and choose productive problem-solving interactions. The formal definition of these constructs allows the interpretation of agent problem-solving roles. The autonomy constructs represent the foundation for DAA in agent-based systems, which, in turn, may prove to be very beneficial in complex, dynamic manufacturing applications.

7. ACKNOWLEDGMENTS

This research was supported in part by the Texas Higher Education Coordinating Board (#003658452) and a National Science Foundation Graduate Research Fellowship. The authors would also like to thank S. Ramaswamy for his suggestions concerning this research, as well as Anuj Goel, Tse-Hsin Liu, Ryan McKay, David Han, Joon-Woo Kim, and Eric White for their contributions to the Sensible Agents project.

8. REFERENCES

Barber, K. S. 1996. The Architecture for Sensible Agents. In Proceedings of the International Multidisciplinary Conference, Intelligent Systems: A Semiotic Perspective, 49-54. Gaithersburg, MD.

Barber, K. S. 1998. Sensible Agent Problem-Solving Simulation for Manufacturing Environments. In Proceedings of the AAAI SIGMAN Workshop on Artificial Intelligence and Manufacturing: State of the Art and State of Practice. Albuquerque, New Mexico.

Barbuceanu, M., Gray, T., and Mankovski, S. 1998. Coordinating with Obligations. In Proceedings of the Second International Conference on Autonomous Agents, 62-69. Minneapolis/St. Paul, MN: ACM Press.

Briggs, W. and Cook, D. 1995. Flexible Social Laws, Technical Report. Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX.

Castelfranchi, C. 1995. Commitments: From Individual Intentions to Groups and Organizations. In Proceedings of the First International Conference on Multi-Agent Systems, 41-48. San Francisco, CA: AAAI Press / The MIT Press.

Chandrasekaran, B. 1981. Natural and Social System Metaphors for Distributed Problem Solving: Introduction to the Issue. IEEE Transactions on Systems, Man, and Cybernetics 11(1): 1-5.

Cohen, P. R. and Levesque, H. J. 1990. Intention is Choice with Commitment. Artificial Intelligence 42: 213-261.


Durfee, E. H. 1996. Planning in Distributed Artificial Intelligence, in Foundations of Distributed Artificial Intelligence, Sixth-Generation Computer Technology Series, O'Hare, G. M. P. and Jennings, N. R., Eds. New York: John Wiley & Sons, Inc., 231-245.

Fikes, R. 1982. A Commitment-Based Framework for Describing Informal Cooperative Work. Cognitive Science 6: 331-347.

Fox, M. S. 1981. An Organizational View of Distributed Systems. IEEE Transactions on Systems, Man, and Cybernetics 11(1): 70-80.

Gasser, L. 1988. Distribution and Coordination of Tasks Among Intelligent Agents. In Proceedings of the Scandinavian Conference on Artificial Intelligence, 1988 (SCAI 88), 189-204.

Gasser, L. and Ishida, T. 1991. A Dynamic Organizational Architecture for Adaptive Problem Solving. In Proceedings of the Ninth National Conference on Artificial Intelligence, 185-190. American Association for Artificial Intelligence.

Gasser, L., Rouquette, N. F., Hill, R. W., and Lieb, J. 1989. Representing and Using Organizational Knowledge in DAI Systems, in Distributed Artificial Intelligence, vol. 2, Gasser, L. and Huhns, M. N., Eds. London: Pitman/Morgan Kaufman, 55-78.

Glance, N. S. and Huberman, B. A. 1993. Organizational Fluidity and Sustainable Cooperation. In Proceedings of the 5th Annual Workshop on Modelling Autonomous Agents in a Multi-Agent World, 89-103. Neuchatel, Switzerland.

Glaser, N. and Morignot, P. 1997. The Reorganization of Societies of Autonomous Agents. in Multi-Agent Rationality: Proceedings of the Eighth European Workshop on Modeling Autonomous Agents in a Multi-Agent World, Boman, M. and van de Velde, W., Eds. New York: Springer, 98-111.

Goel, A., Liu, T. H., and Barber, K. S. 1996. Conflict Resolution in Sensible Agents. In Proceedings of the International Multidisciplinary Conference on Intelligent Systems: A Semiotic Perspective, 80-85. Gaithersburg, MD.

Ishida, T., Gasser, L., and Yokoo, M. 1992. Organization Self-Design of Distributed Production Systems. IEEE Transactions on Knowledge and Data Engineering 4(2): 123-134.

Jennings, N. R. 1993. Commitments and Conventions: The Foundation of Coordination in Multi-Agent Systems. The Knowledge Engineering Review 8(3): 223-250.

Jennings, N. R. 1996. Coordination Techniques for Distributed Artificial Intelligence. in Foundations of Distributed Artificial Intelligence, Sixth-Generation Computer Technology Series, O'Hare, G. M. P. and Jennings, N. R., Eds. New York: John Wiley & Sons, Inc., 187-210.

Kirn, S. 1996. Organizational Intelligence and Distributed Artificial Intelligence. in Foundations of Distributed Artificial Intelligence, Sixth-Generation Computer Technology Series, O'Hare, G. M. P. and Jennings, N. R., Eds. New York: John Wiley & Sons, Inc., 505-526.

Lesser, V. R. 1991. A Retrospective View of FA/C Distributed Problem Solving. IEEE Transactions on Systems, Man, and Cybernetics 21(6): 1347-1362.

Macfadzean, R. and Barber, K. S. 1995. An Approach for Decision Making and Control in Geographically Distributed Systems. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 3816-3821. Vancouver, Canada: IEEE.

Nirenburg, S. and Lesser, V. R. 1986. Providing Intelligent Assistance in Distributed Office Environments. In Proceedings of the ACM Conference on Office Information Systems, 104-112.


Pattison, H. E., Corkill, D. D., and Lesser, V. R. 1987. Instantiating Descriptions of Organization Structures. in Distributed Artificial Intelligence, Huhns, M. N., Ed. San Mateo, CA: Pitman/Morgan Kaufman, 59-96.

Sandholm, T. and Lesser, V. R. 1995. Issues in Automated Negotiation and Electronic Commerce: Extending the Contract Net Framework. In Proceedings of the First International Conference on Multi-Agent Systems, 328-335. San Francisco, CA.

Singh, M. P. 1990. Group Ability and Structure. In Decentralized A.I. 2: Proceedings of the 2nd European Workshop on Modelling Autonomous Agents in a Multi-Agent World, 127-145. Saint-Quentin en Yvelines, France: Elsevier Science.

Smith, R. 1980. The Contract Net Protocol: High-level Communication and Control in a Distributed Problem-Solver. IEEE Transactions on Computers 29(12): 1104-1113.

Werner, E. and Demazeau, Y. 1991. The Design of Multi-Agent Systems. In Proceedings of the 3rd European Workshop on Modelling Autonomous Agents in a Multi-Agent World, 3-28. Kaiserslautern, Germany: Elsevier Science Publishers.

Wesson, R., Hayes-Roth, F., Burge, J. W., Stasz, C., and Sunshine, C. A. 1981. Network Structures for Distributed Situation Assessment. IEEE Transactions on Systems, Man, and Cybernetics 11(1): 5-23.
