
Knowledge Modeling - State of the Art

Vladan Devedzic

Department of Information Systems, FON - School of Business Administration University of Belgrade

POB 52, Jove Ilica 154, 11000 Belgrade, Yugoslavia

Phone: +381-11-3950856, +381-11-4892668 Fax: +381-11-461221 Email: [email protected], [email protected]

URL: http://galeb.etf.bg.ac.yu/~devedzic/

Abstract. A major characteristic of developments in the broad field of Artificial Intelligence (AI) during the 1990s has been an increasing integration of AI with other disciplines. A number of other computer science fields and technologies have been used in developing intelligent systems, ranging from traditional information systems and databases to modern distributed systems and the Internet.

This paper surveys knowledge modeling techniques that have received the most attention in recent years among developers of intelligent systems, AI practitioners, and researchers. The techniques are described from two perspectives, theoretical and practical. Hence the first part of the paper presents major theoretical and architectural concepts, design approaches, and research issues. The second part discusses several practical systems, applications, and ongoing projects that use and implement the techniques described in the first part. Finally, the paper briefly covers some of the most recent results in the fields of intelligent manufacturing systems, intelligent tutoring systems, and ontologies.


1. Introduction

There were several major research, development, and technological streams in computer science and engineering during the last decade. Some of them have had a deep impact on the development of intelligent systems. Together, they form a context within which modern knowledge modeling techniques should be discussed. Such streams include object-oriented software design [9], [32], layered software architectures [56], development of hybrid systems [41], multimedia systems [23], and, of course, distributed systems and the Internet [36].

Given such a context, any discussion of knowledge modeling should also include a reference to the kinds of knowledge that can be represented in the knowledge base of an intelligent system, as well as to the basic representation and design techniques.

Figure 1. Knowledge base contents: D_knowledge, C_knowledge, E_knowledge, and S_knowledge as constituent parts of the knowledge base (KB)

Conceptually, we can think of the knowledge base as a large, complex, aggregated object [18]. Its constituent parts can contain knowledge of different kinds. Some of them are represented in Figure 1.

D_knowledge stands for domain knowledge, and it refers to the application domain facts, theories, and heuristics. C_knowledge stands for control knowledge. It describes the system's problem solving strategies and its functional model, and is more or less domain independent. E_knowledge denotes explanatory knowledge. It defines the contents of explanations and justifications of the system's reasoning process, as well as the way they are generated. System knowledge (the S_knowledge part) describes the contents and structure of the knowledge base, as well as pointers to some useful programs, which should be "at hand" during the knowledge base building process, since they can provide valuable information. Examples of such programs are various application and simulation programs, encyclopedias, etc. In some intelligent systems, system knowledge also defines user models and strategies for the system's communication with its users.

Apart from these four kinds of knowledge there can also be some other specific kinds of knowledge in the knowledge base (e.g., knowledge specific to truth maintenance, or knowledge specific to the capabilities of communication and integration with other systems).

All kinds of knowledge in the knowledge base are represented using one or more knowledge representation techniques. These techniques use different and interrelated knowledge elements. The knowledge elements range from primitives (including different forms of O-A-V triplets, frames, rules, logical expressions, and procedures) to complex elements. Complex knowledge elements are represented using either simple aggregations of knowledge primitives, or conceptually different techniques based on knowledge primitives and their combinations. In designing the entire knowledge base, all knowledge elements can be classified according to their type (e.g., rules, frames, or procedures) and grouped into several homogeneous collections. Each such collection contains knowledge elements of the same type.

The rest of the paper is organized as follows. The next section surveys some knowledge modeling techniques that have received the most attention in recent years among developers of intelligent systems, AI practitioners, and researchers. Then another section discusses several practical systems, applications, and ongoing projects that use and implement those techniques. Finally, the paper briefly covers some of the most recent results in the fields of intelligent manufacturing systems, intelligent tutoring systems, and ontologies.

2. Concepts, Theory, Approaches, and Techniques

Along with the general major trends in computer engineering mentioned above, research and modeling efforts in some specific fields have helped to lay the ground for the development of advanced practical intelligent systems. These fields include intelligent agents, ontologies and knowledge sharing, knowledge processing, intelligent databases, knowledge discovery and data mining, distributed AI, knowledge management, and user modeling.

2.1. Intelligent Agents

An intelligent agent is a program that maps percepts to actions: it acquires information from its environment ("perceives" the environment), decides about its actions, and performs them.

While there is no real consensus about the definition of intelligent agents, the definition above, adapted from [53], is intuitively clear and essentially describes the general concept of a generic agent, Figure 2. All more specific intelligent agents can be derived from that concept.

Figure 2. Generic agent (after [53]): the agent receives percepts from the environment through sensors and performs actions upon it through effectors

There are various other names for intelligent agents, such as software agents, autonomous agents, adaptive interfaces, personal agents, network agents, softbots, knowbots, taskbots, and so on. Although there are minor differences among all these concepts, all of them are used to denote (one way or another) intelligent agents. For the purpose of this survey, we will adopt the term "intelligent agent" and its definition as an autonomous software entity that perceives its environment through sensors, acts upon it through its effectors, has some knowledge that enables it to perform a task, and is capable of navigation and communication with other agents.
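
To make the percept-to-action mapping concrete, the following minimal sketch (not from the original paper; all names are illustrative assumptions) renders the generic agent and its sense-decide-act cycle in Java:

    // A generic agent: sensors deliver percepts, a decision step maps
    // percepts (plus internal knowledge) to actions, and effectors act.
    // All names here are illustrative, not taken from the paper.
    interface Percept {}
    interface Action {}

    interface Agent {
        Percept sense();            // "perceive" the environment
        Action decide(Percept p);   // map the percept to an action
        void act(Action a);         // act upon the environment
    }

    class AgentLoop {
        // The basic cycle of a generic agent (Figure 2).
        static void run(Agent agent) {
            while (true) {
                Percept p = agent.sense();
                agent.act(agent.decide(p));
            }
        }
    }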

Intelligent agents help their users locate, acquire, filter, evaluate, and integrate information from heterogeneous sources, i.e. coordinate information acquisition. Unlike most other kinds of intelligent systems, intelligent agents help all categories of end users. They help users in different ways, e.g. by hiding the complexity of difficult tasks, performing some tasks on behalf of their users, teaching end users, monitoring events and procedures of interest to their users, helping the users collaborate and cooperate, and the like [26], [42]. Agents have the ability to identify, search, and collect resources on the Internet, to optimize the use of resources, and to perform independently and rapidly under changing conditions. However, the user doesn't necessarily "listen" to what the agent "says".

Intelligent agents are modeled after human agents. Typically, they interact with their users as cooperative assistants, rather than just letting the users manipulate them as traditional programs. They act autonomously or semiautonomously as communicational interfaces between human agents and other programs, as well as between other computational agents and other programs [44]. Along with the capabilities of autonomous operation and communication, their other useful properties include initiative, timeliness of response, flexibility, adaptability, and often a learning capability. It should be understood, though, that the concept of intelligent agent is not an absolute characterization that divides the world into agents and non-agents.

From the architectural point of view, most agents fall into one of the following categories [27], [45], [62], [63]:

• Logic-based agents. Such agents make decisions about their actions through logical deduction, as in theorem proving. The internal state of a logic-based agent is assumed to be a database of formulae of classical first-order predicate logic. The database is the information that the agent has about its environment, and the agent's decision-making process is modeled through a set of deduction rules. The agent takes each of its possible actions and attempts to prove the formulae from its database using its deduction rules. If the agent succeeds in proving a formula, then a corresponding action is returned as the action to be performed.

• Reactive agents. Their decision-making is implemented in some form of direct mapping from situation to action, often through a set of task accomplishing behaviors. Each behavior may be thought of as an individual action function that maps a perceptual input to an action to perform. Many behaviors can fire simultaneously, and there is a mechanism to choose between the different actions selected by these multiple behaviors. Such agents simply react to their environment, without reasoning about it.

• Belief-desire-intention (BDI) agents. These agents internally represent their beliefs, desires, and intentions, and make their decisions based on these representations. BDI architectures apply practical reasoning, i.e. the process of continuously deciding which action the agent is to perform next in order to get closer to its goals (a code sketch of this cycle is given after Figure 4 below). The process is represented in Figure 3, where a belief revision function (brf) takes a sensory input and the agent's current beliefs, and on the basis of these, determines a new set of beliefs. Then an option generation function (Generate options) determines the options available to the agent (its desires), on the basis of its current beliefs about its environment and its current intentions. The filter function (Filter) determines the agent's intentions on the basis of its current beliefs, desires, and intentions, and an action selection function (Action) determines an action to perform on the basis of current intentions.

Figure 3. BDI agent architecture (after [63]): sensor input enters brf, which updates the Beliefs; Generate options produces the Desires; Filter produces the Intentions; Action yields the action output

• Layered architectures. Agents with layered architectures make their decisions via various software layers. Layers correspond to different levels of abstraction. Each layer supports more-or-less explicit reasoning about the environment. In horizontally layered architectures, Figure 4a, each layer is directly connected to the sensory input and action output, as if each layer itself were an agent. In vertically layered architectures, sensory input is performed by at most one layer, and so is action output. One typical version of vertically layered architectures is shown in Figure 4b. Here both the agent's sensors and effectors (not shown in Figure 4b) lie in the same layer, and the API layer is used to program them. Effectively, the API layer links the agent to the physical realization of its resources and skills. The definition layer contains the agent's reasoning mechanism, its learning mechanism (if it exists), as well as the descriptions of the agent's goals, knowledge (facts), and the resources it uses in performing its task. The organization layer specifies the agent's behavior within a group of agents, i.e. what group the agent belongs to, what the agent's role in the group is, what other agents the agent is "aware of", and so on. The coordination layer describes the social abilities of the agent, i.e. what coordination/negotiation techniques it knows, such as coordination techniques for collaboration with other agents, techniques for exchanging knowledge and expertise with other agents, and techniques for increasing the group's efficiency in collaborative work. The communication layer is in charge of direct communication with other agents (exchanging messages). It handles the low-level details involved in inter-agent communication.


Figure 4. Layered architectures of intelligent agents: a) horizontally layered architecture, in which each of the layers 1 through n is connected to the sensory input and the action output (after [62]); b) vertically layered architecture, with API, definition, organization, coordination, and communication layers, the last one communicating with other agents (after [45])
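
As announced in the BDI item above, the practical-reasoning cycle lends itself to a compact code sketch. The skeleton below follows the brf / generate options / filter / action decomposition of Figure 3; all types and method bodies are placeholders, not an implementation from [63]:

    // Sketch of the BDI practical-reasoning cycle from Figure 3.
    // Beliefs, Desires, Intentions, SensorInput, AgentAction: placeholders.
    class Beliefs {}
    class Desires {}
    class Intentions {}
    class SensorInput {}
    class AgentAction {}

    abstract class BdiAgent {
        Beliefs beliefs = new Beliefs();
        Desires desires = new Desires();
        Intentions intentions = new Intentions();

        // Belief revision: new beliefs from sensory input and old beliefs.
        abstract Beliefs brf(Beliefs b, SensorInput input);
        // Option generation: desires available given beliefs and intentions.
        abstract Desires generateOptions(Beliefs b, Intentions i);
        // Filter: settle on the intentions to commit to.
        abstract Intentions filter(Beliefs b, Desires d, Intentions i);
        // Action selection: an action that serves the current intentions.
        abstract AgentAction action(Intentions i);

        // One pass of the cycle: revise beliefs, deliberate, act.
        AgentAction step(SensorInput input) {
            beliefs = brf(beliefs, input);
            desires = generateOptions(beliefs, intentions);
            intentions = filter(beliefs, desires, intentions);
            return action(intentions);
        }
    }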

The communication layer in Figure 4b brings about the issue of agent communication. Collectively, agents may be physically distributed, but individually, they are logically distinct and unlikely to operate in isolation [39]. Agents typically communicate by exchanging messages represented in a standard format and using a standard agent communication language (ACL). An ACL must support a common syntax for agent communication, common semantics (i.e. domain ontologies as the backbone of the knowledge being communicated - see the next section), and common pragmatics (what an agent communicates, to which other agents, how to find the right agent to talk to (the identification problem), and how to initiate and maintain communication) [21]. A number of ACLs have been proposed so far. The two most frequently used are KQML (Knowledge Query and Manipulation Language) [21], [31], and FIPA ACL [22], [31].

KQML is a language and a set of protocols for agent communication that supports different architectures and types of agents. It also supports communication between an agent and other clients and servers. KQML uses standard protocols for information exchange (such as TCP/IP, email, HTTP, CORBA), and different modes of communication (synchronous, asynchronous, and broadcast). Figure 5 illustrates the layered organization of KQML messages, and Figure 6 shows an example KQML message. Message content can be described in any language and is wrapped up in a description of its attributes, such as the message type, the content language, and the underlying domain ontology. The outer wrapper is the "communication shell", specifying the message sender, the receiver, and the communication mode.

Figure 5. Layered organization of KQML messages: the message content, encoded in a desired language, is wrapped in content attributes (language, underlying ontology, message type, ...), which are in turn wrapped in the communication shell (sender ID, receiver ID, communication mode, synchronous or asynchronous)

a)

    (MSG
      :TYPE query
      :QUALIFIERS (:number-answers 1)
      :CONTENT-LANGUAGE KIF
      :CONTENT-ONTOLOGY (blocksWorld)
      :CONTENT-TOPIC (physical-properties)
      :CONTENT (color snow ?C))

b)

    (PACKAGE
      :FROM ap001
      :TO ap002
      :ID DVL-f001-111791.10122291
      :COMM sync
      :CONTENT (MSG
        :TYPE query
        :QUALIFIERS (:NUMBER-ANSWERS 1)
        :CONTENT-LANGUAGE KIF
        :CONTENT (color snow _C)))

Figure 6. An example of a KQML message (after [21]): a) the message content and attributes; b) the message in its communication package


KQML also supports using facilitators and mediators, special-purpose agents that provide mediation and translation services for agent communication (e.g., they forward messages to other agents, help "matchmaking" between information providers and servers, and maintain a registry of service names). Unlike direct, point-to-point communication between agents, facilitators and mediators serve as a kind of "event handlers" in a community of agents. They can register other agents' subscriptions of interest in certain information/knowledge and notify them when it becomes available. They can also collect other agents' "advertisements" of the information/knowledge they can provide, thus serving as information/knowledge brokers between the other agents. Moreover, facilitators and mediators can help other agents establish direct communication by providing negotiating services such as matchmaking (the identification problem).

FIPA’s agent communication language is superficially similar to KQML. Like KQML, it is based on messages as actions, or communicative acts. The FIPA ACL specification consists of a set of message types and the description of their effects, as well as a set of high-level interaction protocols, including requesting an action, contract net, and several kinds of auctions [22]. FIPA ACL's syntax is identical to KQML’s except for different names for some reserved primitives [31]. Like KQML, it separates the message's outer language from the inner language. The outer language defines the intended meaning of the message, and the inner one specifies the content. The inner language - Semantic Language, or SL - relies on the idea of BDI agents. SL is the formal language used to define FIPA ACL’s semantics and is based on a quantified, multimodal logic with modal operators for beliefs, desires, uncertain beliefs, and intentions (persistent goals). SL can represent propositions, objects, and actions.
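
For comparison with the KQML message in Figure 6, a FIPA ACL message might look roughly as follows. This is a hand-made sketch using the inform performative and the message parameters defined in the FIPA specification; the agent names, content, and ontology name are invented for illustration:

    (inform
      :sender   agent-A
      :receiver agent-B
      :content  (price item-17 150)
      :language sl
      :ontology electronic-auction)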

Two or more agents acting together form a multiagent system [39]. Centralized, "big" agents need huge knowledge bases, create processing bottlenecks, and can suffer from information overload. In a multiagent system, several smaller agents divide complex tasks and specialize in performing parts of those tasks. Multiagent systems achieve higher adaptivity of individual agents and of the system as a whole, and higher flexibility when there are many different users.

Typical roles of agents in a layered, distributed, multiagent system include interface agents (agents that receive tasks from the users and return results), task agents (agents that perform tasks received from interface agents), and information agents (agents that are tightly coupled with data sources). Some of the information agents facilitate matchmaking among other agents. In such a system, agents run on different machines, information sources are also distributed, and agents cooperate in performing their tasks, thus creating a self-organizing system of agents that represents a shared resource for its users. Performance degradation of a multiagent system is graceful if an agent fails, because other agents in the system can often carry on the task of the problematic agent.

Mobile agents are units of executing computation that can migrate between machines [60]. The concept of mobile agents is based on remote programming (RP) for distributed systems, as opposed to remote procedure call (RPC) [2], [8], [61]. In the RPC case, Figure 7a, a client program issues a procedure/function call through a network to a remote server. In the RP case, the client program sends an agent to the remote server, where the agent issues local calls in order to obtain the desired service. After it completes its task on the remote server, it can get back to the client machine and deliver the results to the client program.

Figure 7. a) Remote procedure call: the client program issues calls to the remote service across the network; b) Remote programming: the client program sends a client agent across the network, and the agent issues local procedure calls to the service

Most mobile agents today are based on Java applets' mobility and the Java RMI protocol for agent transport. There are a number of shareware tools for mobile agents on the Internet, such as IBM Aglets.


Such tools make it possible to create, deploy, deactivate, and delete mobile agents easily, often with no direct coding. However, for both technical and security reasons, deploying mobile agents in practice usually requires an agent server to be installed on every machine that can be visited by a mobile agent created using a specific tool. An agent server is the "gathering place" for mobile agents on a given machine. The agents work within that server, and migrate from one server to another carrying their states along. Mobile agents can move along a prespecified "itinerary", or can decide dynamically where to migrate from a given server. The term agents meeting is used to denote collaborative work of several mobile agents within a given agent server, which the agents previously negotiate about. In all such dislocated collaborative activities, a mobile agent can act as a "proxy" that brings the results of an agents meeting "home", i.e. to the home machine.
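
The following sketch suggests what programming against such a tool can look like. The MobileAgent base class and its methods are hypothetical, invented here for illustration only; real tools such as IBM Aglets differ in their APIs:

    // Hypothetical, minimal mobile-agent API (for illustration only).
    abstract class MobileAgent {
        abstract void dispatch(String serverUrl); // migrate code + state
        abstract void returnHome(Object result);  // deliver results "home"
        abstract void run();                      // invoked after arrival
    }

    // An agent that visits a prespecified itinerary of agent servers,
    // issues local calls at each stop (remote programming), and carries
    // its partial result along as state.
    abstract class PriceQuoteAgent extends MobileAgent {
        String[] itinerary = { "atp://server-a", "atp://server-b" };
        int stop = 0;
        double bestPrice = Double.MAX_VALUE;

        abstract double queryLocalPrice(String item); // a local service

        void run() {
            bestPrice = Math.min(bestPrice, queryLocalPrice("item-17"));
            if (stop < itinerary.length) {
                dispatch(itinerary[stop++]); // move on, state travels along
            } else {
                returnHome(bestPrice);       // act as a "proxy" for results
            }
        }
    }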

However, in spite of the widespread use of mobile agents in e-commerce and in some tasks that are otherwise typically performed manually (such as automatic network configuration and control, to give but one example), the concept of mobile agents has been criticized for some important constraints. The constraints include restrictions on meeting and collaboration mechanisms (everything must be done from within an agent server, and agent servers from different developers are often incompatible), critical data security (a mobile agent can steal data from a server, and a server can steal data from a mobile agent too!), and complex implementation of agent communication protocols and languages in the case of mobile agents.

The concept of intelligent agents is a useful tool for system analysis and design. That fact has led to a growing adoption of intelligent agents in the software engineering community, and terms like agent-oriented programming [29], [57], [60], agent-oriented software engineering [63], [64], and agent-oriented middleware [39] have quickly been coined.

One key idea of agent-oriented programming is that of directly programming agents in terms of "mentalistic notions" (such as belief, desire, and intention) [57], [63]. For example, the agent's intentions, i.e. how the agent acts, can be expressed as a commitment rule set [57]. Each commitment rule contains a message condition, a mental condition, and an action. By matching the mental condition of a commitment rule against the beliefs of the agent and the message condition against the messages the agent has received, it is possible to decide whether the rule is satisfied. If it is, the agent becomes committed to the action. Actions may be private, corresponding to an internally executed subroutine, or communicative, i.e., sending messages. Message types can be defined starting from speech act theory. Some messages can modify the agent's commitments, and others can change its beliefs.
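
A commitment rule can be sketched as follows (an illustrative rendering of the idea in [57] with placeholder types, not code from the paper):

    import java.util.List;

    // A commitment rule: a message condition, a mental condition, and an
    // action. If both conditions match, the agent commits to the action.
    class Belief {}
    class Message {}

    interface Condition<T> { boolean matches(List<T> items); }

    class CommitmentRule {
        Condition<Belief> mentalCondition;   // matched against beliefs
        Condition<Message> messageCondition; // matched against messages
        Runnable action;                     // private or communicative

        CommitmentRule(Condition<Belief> mental, Condition<Message> msg,
                       Runnable action) {
            this.mentalCondition = mental;
            this.messageCondition = msg;
            this.action = action;
        }

        boolean satisfied(List<Belief> beliefs, List<Message> inbox) {
            return mentalCondition.matches(beliefs)
                && messageCondition.matches(inbox);
        }

        // If the rule is satisfied, the agent commits to the action.
        void apply(List<Belief> beliefs, List<Message> inbox) {
            if (satisfied(beliefs, inbox)) action.run();
        }
    }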

Other important ideas of agent-oriented programming and programming languages include providing support for issues like inter-agent communication, migration, and clean factorization of a system into its high-level application component and the infrastructure implementation [60]. In other words, agent-oriented programming at the low level requires entities like agents, sites, and communication channels, as well as primitives for agent creation, agent migration between sites, and location-dependent asynchronous message communication between agents. At the high level, it is necessary to represent location-independent communication between agents and provide infrastructure for a user-defined translation into the low-level primitives.

Agent-oriented software engineering draws its justification from the fact that intelligent agents represent a powerful abstraction, perhaps as powerful as procedural abstraction, abstract data types, and object-oriented programming [64]. In object-oriented software engineering, systems are modeled as a collection of interacting but passive objects. In agent-oriented software engineering, systems are understood and modeled as a collection of interacting autonomous agents. Thus, in a way, the agent-oriented approach complements the object-oriented one. Although in agent-oriented software engineering agents are typically implemented using object-oriented techniques, there are usually fewer agents in the system than objects.

Of course, not every system can, and not every system should, be naturally modeled using agents. Agent-oriented software engineering is appropriate when building systems in which the agent metaphor is natural for delivering system functionality; when data, control, expertise, or resources are distributed; or when a number of legacy systems must be included and must cooperate with other parts of a larger system.

In order to gain wider acceptance for agent-oriented software engineering, appropriate extensions to several UML (Unified Modeling Language) diagrams have been proposed recently to express agent interaction protocols and other issues useful for modeling agents and agent-based systems [49]. An agent interaction protocol describes a communication pattern as an allowed sequence of messages between agents and the constraints on the content of those messages. Such communication patterns can be modeled as specific sequence diagrams, activity diagrams, and state charts in UML, possibly adorned with new notational elements (or appropriate variations of existing ones) to denote communication acts between the agents. The key idea here is to treat a whole agent interaction protocol as an entity in UML (a package or a template). Any such entity should be generic, in order to support easy adaptations and specializations. Figure 8 illustrates how some proposed minor extensions to UML can be used to express agent communication. Here the agent-name/role:class syntax is an extension of the object-name/role:class syntax that is already a part of UML, and the arrowed lines are labeled with an agent communication act (CA) instead of an OO-style message. Other extensions have been proposed as well, to support concurrent threads of interaction, interaction among agents playing multiple roles, packages involving agents, and deployment diagrams for agents. Some of the extensions include specific UML stereotypes for agent-oriented design, such as <<clone>>, <<mitosis>>, <<reproduction>>, and the like.

Figure 8. Basic format for agent communication (after [49]): lifelines Agent-1/Role:Class and Agent-2/Role:Class exchange the communication acts CA-1 and CA-2

Some reusability-related suggestions in agent-oriented software engineering include the following:

• agents' design and implementation should be kept simple - this allows for easier customization to different users, and easier reuse and maintenance;

• many agents can be developed by copying code fragments from a similar agent's program, and then customizing them to different users.

If middleware is defined as any entity that is interposed between a client and a server, a peer and another peer, or an application and a network, then agent-oriented middleware should provide the following services to agent-based applications and systems [39]:

• dynamic binding between agents and the hardware they run on, including a handle for any entity/entities "owning" the agents;

• location services, such as finding an appropriate agent to communicate with either directly or indirectly, mapping task requests to service instances, and facilitating agent interaction in either client-server or peer-peer configurations; currently, facilitators and mediators are used in many agent applications as special-purpose agents that provide location services;

• application services, including dynamic self-configuration and communication between agents and other applications;

• registration and life-cycle management services; currently, agent servers provide these services and application services (at least to an extent).

Is middleware distinct from other agents, or is it an agent itself? This open problem is resolved in practice using a design trade-off. In "fat middleware - thin agents" configurations, the abstraction of the agents' common functionality is maximized and "delegated" to a separate middleware component. In "thin middleware - fat agents" configurations, each agent's autonomy is maximized and the common functionality delegated to the middleware is minimized. The tendency is toward more sophisticated agents and increased intelligence of the overall system. This moves the distribution of knowledge and intelligence toward the agents and away from the middleware, and essentially turns any middleware functionality into an agent.

2.2. Ontologies and Knowledge Sharing

In building knowledge-based systems, developers usually construct new knowledge bases from scratch. It is a difficult and time-consuming process. Moreover, it is usually hard to share knowledge encoded in such knowledge bases among different knowledge-based systems. There are several reasons for that. First, there is a large diversity and heterogeneity of knowledge representation formalisms. Even within a single family of knowledge representation formalisms, it can be difficult to share knowledge across systems [46]. Also, in order to provide knowledge sharing and reuse across multiple knowledge bases, we need standard protocols that provide interoperability between different knowledge-based systems and other, conventional software, such as databases. Finally, even if the other problems are eliminated, there is still an important barrier to knowledge sharing at a higher, knowledge level [47]. That is, there is often a higher-level modeling, taxonomical, and terminological mismatch between different systems, even if they belong to the same application domain.


Research in the growing field of ontological engineering [11], [17], [43] offers a firm basis for solving such problems. The main idea is to establish standard models, taxonomies, vocabularies, and domain terminologies, and use them to develop appropriate knowledge and reasoning modules. Such modules would then act as reusable components that can be used for assembling knowledge-based systems (instead of building them from scratch). The new systems would interoperate with existing ones, sharing their declarative knowledge, reasoning services, and problem-solving techniques [11].

Ontologies, or explicit representations of domain concepts, provide the basic structure or armature around which knowledge bases can be built [59]. Each ontology is an explicit specification of some topic, or a formal and declarative representation of some subject area. It specifies concepts to be used for expressing knowledge in that subject area. This knowledge encompasses types of entities, attributes and properties, relations and functions, as well as various constraints. The ontology provides a vocabulary (or names) for referring to the terms in that subject area, and the logical statements that describe what the terms are, how they are related to each other, and how they can or cannot be related to each other. Ontologies also provide rules for combining terms and relations to define extensions to the vocabulary, as well as the problem semantics independent of reader and context.

The purpose of ontologies is to enable knowledge sharing and reuse among knowledge-based systems and agents. Ontologies describe the concepts and relationships that can exist for an agent or a community of agents. Each such description is like a formal specification of a program. In fact, a common ontology defines the vocabulary with which queries and assertions are exchanged among agents. Ontologies state axioms that constrain the possible interpretations of the defined terms.

If we think of ontologies in an object-oriented way, then one possible interpretation of ontologies is that they provide taxonomic hierarchies of classes and the subsumption relation. For example, in the hierarchy of the Lesson class, we may have the Topic class, the Objective class, and the Pedagogical-point class. But on the other hand, we can develop Lesson, Topic, Objective, and Pedagogical-point ontologies as well. In that case, the Lesson ontology would subsume the other three ontologies.
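
In code, the first interpretation above amounts to plain class subsumption, e.g.:

    // Taxonomic hierarchy with the subsumption relation, using the
    // example classes from the text.
    class Lesson {}
    class Topic extends Lesson {}             // subsumed by Lesson
    class Objective extends Lesson {}
    class PedagogicalPoint extends Lesson {}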

Ontologies make it possible to define an infrastructure for integrating intelligent systems at the knowledge level [53]. The knowledge level is independent of particular implementations. It defines adequate representation primitives for expressing knowledge in a convenient way (usually those used by a knowledge engineer) [16]. The representation primitives can be regarded as an ontology (as concepts and relations of a particular domain), suitable for defining a knowledge representation language. Once the ontology is formalized, it defines the terms of the representation language in a machine-readable form.

Ontologies are especially useful in the following broad areas:

• collaboration - ontologies provide knowledge sharing among the members of interdisciplinary teams and agent-to-agent communication;

• interoperation - ontologies facilitate information integration, especially in distributed applications;

• education - ontologies can be used as a publication medium and a source of reference;

• modeling - ontologies represent reusable building blocks in modeling systems at the knowledge level.

Ontologies are also expected to play a major role in The Semantic Web [16]. The Semantic Web is the next-generation Web that will enable automatic knowledge processing over the Internet, using intelligent services such as search agents, information brokers, and information filters. Doing this will require defining standards not only for the syntactic form of documents, but also for their semantic content, in order to facilitate semantic interoperability. XML (eXtensible Markup Language) and RDF (Resource Description Framework) [3], [13], [33], [48] are the current World Wide Web Consortium standards for establishing semantic interoperability on the Web. However, although XML has already been successfully used to represent ontologies, it is more syntactically oriented, addresses only document structure (it just describes grammars), and provides no way to recognize a semantic unit from a particular domain. On the other hand, RDF provides a data model that can be extended to address sophisticated ontology representation techniques, hence it better facilitates interoperation. In fact, RDF has been designed to standardize the definition and use of metadata - descriptions of Web-based resources - but is equally well suited to representing data. Its basic building block is the object-attribute-value triplet, which is convenient for representing concepts in ontologies. Moreover, a domain model - defining objects and relationships - can be represented naturally in RDF. Defining an ontology in RDF means defining an RDF schema, which specifies all the concepts and relationships of the particular language [16]. The RDF schema mechanism can also be used to define elements of an ontology representation and inference language.
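
As a small illustration, an RDF schema fragment for the Lesson example from the previous subsection might look as follows (a hand-made sketch; the class and property names are illustrative, while the rdf/rdfs namespaces are the standard ones):

    <rdf:RDF
        xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
        xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
      <rdfs:Class rdf:ID="Lesson"/>
      <rdfs:Class rdf:ID="Topic">
        <!-- Topic is subsumed by Lesson -->
        <rdfs:subClassOf rdf:resource="#Lesson"/>
      </rdfs:Class>
      <rdf:Property rdf:ID="hasObjective">
        <rdfs:domain rdf:resource="#Lesson"/>
      </rdf:Property>
    </rdf:RDF>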


2.3. Knowledge Processing

We can define a knowledge processor as an abstract mechanism which, starting from a set of given facts and a set of given knowledge elements, produces some changes in the set of facts. Concrete examples of knowledge processors include (but are not limited to) blackboard control mechanisms, heuristic classifiers, rule-based inference engines, pattern-matchers, and at the lowest level even a single neuron of a neural network. As an illustrative example, consider the blackboard architecture in Figure 9. It models solving a complex problem through the cooperation and coordination of a number of specialists, called knowledge sources (KS). Each knowledge source is specialized in performing a certain task that can contribute to the overall solution, and all the knowledge sources share partial results of their processing through a shared memory, called the blackboard. Knowledge sources are independent in their work to a large extent, since their knowledge resides in their local knowledge bases and many of the facts they use are local to them. Hence most processing/reasoning of a knowledge source is done within the knowledge source itself. A global blackboard control mechanism coordinates the work of individual knowledge sources and synchronizes their access to the blackboard.

Figure 9. The blackboard architecture: knowledge sources KS 1 through KS 6 communicate through the blackboard (shared memory)
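
A minimal sketch of this coordination scheme (illustrative names, not from the paper) could look like this in Java:

    import java.util.ArrayList;
    import java.util.List;

    // Knowledge sources keep their knowledge local and share partial
    // results only through the blackboard; the control loop coordinates
    // them and synchronizes blackboard access.
    class Blackboard {
        private final List<Object> results = new ArrayList<>();
        synchronized void post(Object r) { results.add(r); }
        synchronized List<Object> snapshot() {
            return new ArrayList<>(results);
        }
    }

    interface KnowledgeSource {
        boolean canContribute(List<Object> partialResults);
        Object contribute(List<Object> partialResults); // local reasoning
    }

    class BlackboardControl {
        // Let knowledge sources contribute until none can add anything.
        static void solve(Blackboard bb, List<KnowledgeSource> sources) {
            boolean progress = true;
            while (progress) {
                progress = false;
                for (KnowledgeSource ks : sources) {
                    List<Object> partial = bb.snapshot();
                    if (ks.canContribute(partial)) {
                        bb.post(ks.contribute(partial));
                        progress = true;
                    }
                }
            }
        }
    }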

Two important lines of development in AI regarding knowledge processing have emerged during the last decade. The first one is the development of different kinds of knowledge processors starting from well-established software design practices. An example is using software patterns to specify the common global architecture of different knowledge-processing mechanisms [17]. In software engineering, patterns are attempts to describe successful solutions to common software problems [54]. Software patterns reflect common conceptual structures of these solutions, and can be applied over and over again when analyzing, designing, and producing applications in a particular context.

Figure 10 shows the structure of the Knowledge processor pattern [18]. Its participants have the following meanings. Knowledge processor defines an interface for using the knowledge from the Knowledge object, as well as for examining and updating facts in the Facts object. Knowledge and Facts objects are generally aggregates of different collections of knowledge elements. By parameterizing collections of knowledge elements, we can actually put collections of rules, frames, etc. in the Knowledge object, thus making it represent a knowledge base. By analogy, we can also make the Facts object represent a working memory, containing collections of working memory elements, rule and frame instantiations, etc. Knowledge processor also contains a pointer to an instantiation of the abstract Interface class. Developers can subclass Interface in order to implement an application-specific interface to a particular knowledge processor. Concrete knowledge processor is either a knowledge processor of a specific well-known type (e.g., a forward-chaining inference engine, a fuzzy-set-based reasoner, a knowledge-based planner), or can be defined by the application designer.

Figure 10. The Knowledge processor pattern: Knowledge processor is associated (one-to-one) with Knowledge, Facts, and Interface, and is specialized by Concrete knowledge processor
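
In code, the pattern's participants can be sketched as follows (an illustrative Java skeleton; the pattern in [18] is language-independent):

    // Participants of the Knowledge processor pattern (Figure 10).
    abstract class Knowledge {}  // aggregated rule/frame/... collections
    abstract class Facts {}      // working memory contents
    abstract class Interface {}  // subclassed for application-specific UIs

    abstract class KnowledgeProcessor {
        protected Knowledge knowledge; // the knowledge base
        protected Facts facts;         // examined and updated in reasoning
        protected Interface iface;     // pointer to an Interface instance

        // Use the knowledge to produce changes in the set of facts.
        abstract void process();
    }

    // A concrete processor of a well-known type; could equally be a
    // fuzzy-set-based reasoner or a knowledge-based planner.
    abstract class ForwardChainingEngine extends KnowledgeProcessor {}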

The second important line of development in AI regarding knowledge processing is integrating traditional programming languages with knowledge-processing capabilities. Examples include the Tanguy architecture [14] and the Rete++ environment [24]. The Tanguy architecture extends the C++ language to cope with permanent object storage, production rules (data-driven programming), and uniform set-oriented interfaces. The Rete++ environment embeds pattern matching based on the famous Rete algorithm into C++.

2.4. Intelligent Databases

One way to make data access and manipulation in large, complex databases simpler and more efficient is to integrate database management systems with knowledge processing capabilities. In that way, database designers create intelligent databases. Their features include query optimization, intelligent search, knowledge-based navigation through huge amounts of raw data, automatic translation of higher-level (natural language) queries into sequences of SQL queries, and the possibility of making automatic discoveries [51].

Intelligent databases have evolved through the merging of several technologies, including traditional databases, object-oriented systems, hypermedia, expert systems, and automatic knowledge discovery. The resulting top-level, three-tier architecture of an intelligent database, Figure 11, has three levels: high-level tools, high-level user interface, and intelligent database engine. High-level tools perform automated knowledge discovery from data, intelligent search, and data quality and integrity control [51]. The users directly interact with the high-level user interface. It creates the model of the task and database environment. The intelligent database engine is the system's base level. It incorporates a model for a deductive object-oriented representation of multimedia information that can be expressed and operated in several ways.

Figure 11. Three-tier architecture of intelligent databases: high-level tools, high-level user interface, and intelligent database engine

Typical concrete ways of merging database technology with intelligent systems are coupling and integration of DBMSs and intelligent systems. Coupling is a weaker merger. It does not guarantee consistency of rules and data (a database update may not go through the intelligent system). It also raises difficulties when trying to bring the database into conformity with new rules in the knowledge base. In spite of that, there are many successful commercial applications that use coupling of intelligent systems and databases. In such applications the intelligent systems play the role of intelligent front-ends to databases. On the other hand, integration of DBMSs and intelligent systems guarantees consistency of rules and data, because a single DBMS administers both kinds of objects. Moreover, integration usually brings better performance than mere coupling. There are very well known examples of integration of rules and data in commercial DBMSs, e.g. INGRES and Sybase.

2.5. Knowledge Discovery and Data Mining

Knowledge Discovery in Databases (KDD) is the process of automatic discovery of previously unknown patterns, rules, and other regular contents implicitly present in large volumes of data. Data Mining (DM) denotes the discovery of patterns in a data set previously prepared in a specific way. DM is often used as a synonym for KDD. However, strictly speaking, DM is just the central phase of the entire KDD process.

Knowledge discovery is a process, not a one-time response of the KDD system to a user's action. Like any other process, it has its environment and its phases, and it runs under certain assumptions and constraints.

Figure 12 illustrates the environment of the KDD process [40]. The necessary assumptions are that there exists a database with its data dictionary, and that the user wants to discover some patterns in it. There must also exist an application through which the user can select (from the database) and prepare a data set for KDD, adjust DM parameters, start and run the KDD process, and access and manipulate discovered patterns. KDD/DM systems usually let the user choose among several KDD methods. Each method enables data set preparation and search in order to discover/generate patterns, as well as pattern evaluation in terms of certainty and interestingness. KDD methods often make it possible to use domain knowledge to guide and control the process and to help evaluate the patterns. In such cases domain knowledge must be represented using an appropriate knowledge representation technique (such as rules, frames, decision trees, and the like). Discovered knowledge may be used directly for database queries from the application, or it may be included into another knowledge-based program (e.g., an expert system in that domain), or the user may just save it in a desired form. Discovered patterns mostly represent some previously unknown facts from the domain. Hence they can be combined with previously existing and represented domain knowledge in order to better support subsequent runs of the KDD process.

Figure 12. Environment of the KDD process (after [40]): the user works through an application connected to the database with its data dictionary, the domain knowledge, the KDD method (search and evaluation), and the discovered knowledge

Figure 13 shows typical activities, phases, and data in the KDD process [20]. KDD is never done over the entire database. Instead, a representative target data set is generated from a large database by an appropriate selection procedure. In the next phase, preprocessing of the target data is necessary in order to eliminate noise (handle missing, erroneous, inexact, imprecise, conflicting, and exceptional data, and resolve ambiguities in the target data set) and possibly further prepare the target data in terms of generating specific data sequences. The result is the set of preprocessed data.

Figure 13. Phases in the KDD process (after [20]): selection yields target data from the database; preprocessing yields preprocessed data; transformation yields transformed data; DM yields patterns; and interpretation/evaluation yields knowledge

The next phase is transformation of the preprocessed data into a form suitable for performing the desired DM task. DM tasks are specific kinds of activities that are carried out over the set of transformed data in search of patterns, guided by the kind of knowledge that should be discovered. Some examples of DM tasks are classification, cluster identification, mining association rules, mining path-traversal patterns, change and deviation detection, and sequence analysis. The output of the DM phase is, in general, a set of patterns. However, not all of the patterns are useful. The goal of interpreting and evaluating all the patterns discovered is to keep only those patterns that are interesting and useful to the user and discard the rest. The patterns that remain represent the discovered knowledge.

In practice, the KDD process never runs smoothly. On the contrary, it is a time-consuming, incremental, and iterative process by its very nature; hence the many repetitions and feedback loops in Figure 13. Individual phases can be repeated alone, and the entire process is usually repeated for different data sets.

Discovered patterns are usually represented using a well-known knowledge representation technique, including inference rules, decision trees, tables, diagrams, images, analytical expressions, and so on. Inference rules (If-Then rules) are the most frequently used technique. Decision trees are a suitable alternative, since with many machine learning algorithms the concepts the program learns are represented in the form of decision trees. Transformations between decision trees and inference rules are easy and straightforward. Rules are often dependent on each other, so the discovered knowledge often has the form of causal chains or networks of rules.
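
The transformation is indeed simple: each root-to-leaf path of the tree becomes one If-Then rule whose conditions are the tests along the path. A sketch (with illustrative types, not from the paper):

    import java.util.ArrayList;
    import java.util.List;

    // Each child node carries the test (edge label) that leads to it;
    // leaves carry a conclusion.
    class Node {
        String test;        // e.g. "outlook = sunny"; null at the root
        String conclusion;  // e.g. "play = no"; non-null only at leaves
        List<Node> children = new ArrayList<>();
    }

    class TreeToRules {
        // Collect one If-Then rule per root-to-leaf path.
        static void collect(Node n, List<String> conds, List<String> rules) {
            if (n.children.isEmpty()) {
                rules.add("IF " + String.join(" AND ", conds)
                        + " THEN " + n.conclusion);
                return;
            }
            for (Node child : n.children) {
                conds.add(child.test);           // extend the path
                collect(child, conds, rules);
                conds.remove(conds.size() - 1);  // backtrack
            }
        }
    }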

KDD systems usually apply some probabilistic technique to represent uncertainty in discovered patterns. Some form of certainty factors often goes well with inference rules. Probability distributions are easy to compute statistically, since the databases that KDD systems start from are sufficiently large. Probability distributions are especially suitable for modeling noise in data. Fuzzy sets and fuzzy logic are also used sometimes. However, it is important to note that an important factor in modeling uncertainty is the effort to actually eliminate sources of uncertainty (such as noisy and missing data) in the early phases of the process.
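
For instance, with MYCIN-style certainty factors (mentioned here only as one common scheme; the paper does not prescribe a particular one), two rules that support the same conclusion combine their certainties as follows:

    class CertaintyFactors {
        // Parallel combination of two positive certainty factors:
        // cf = cf1 + cf2 * (1 - cf1), e.g. combine(0.6, 0.5) == 0.8.
        static double combine(double cf1, double cf2) {
            return cf1 + cf2 * (1 - cf1);
        }
    }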

Which technique should be used to represent discovered knowledge depends on the goals of the discovery process. If discovered knowledge is for the users only, then a natural language representation of rules or some graphically rich form is most suitable. Alternatively, discovered knowledge may be used in the knowledge base of another intelligent application, such as an expert system in the same domain. In that case, discovered knowledge should be translated into the form used by that other application. Finally, discovered knowledge may be used along with the previously used domain knowledge to guide the next cycle of the KDD process. That requires representing discovered patterns in the same way the other domain knowledge is represented in the KDD system.

2.6. Distributed Intelligent Systems

Distributed AI systems are concerned with the interactions of groups of intelligent agents that cooperate in solving complex problems. Distributed problem solving, as a subfield of AI, deals with strategies by which the decomposition and coordination of computation in a distributed system are matched to the structural demands of the task domain. Distributed intelligent systems model a number of information processing phenomena that occur in the natural world. Such phenomena are a source of a number of useful metaphors for distributed processing and distributed problem solving.

Recent developments in distributed systems in general, and particularly the Internet, have further contributed to the importance of distributed intelligent systems. The increase of information technology capabilities due to the development of the Internet and the World Wide Web has made it possible to develop more powerful and often widely dispersed intelligent systems. Although these new systems often merely implement in a new way some well-established AI techniques, such as planning, search, and problem solving, they definitely have their own identity. Technologies such as intelligent agents, knowledge servers, virtual organizations, and knowledge management (to name but a few) are tightly coupled with the Internet, and have opened new fields for application and practical use of AI during the last decade.

Since intelligent agents and knowledge management are covered in more detail in other dedicated sections of this paper, this section covers knowledge servers and virtual organizations more extensively.

A knowledge server is an entity in a computer network (most often on the Internet) that provides high-level, high-quality, knowledge-based services such as expert advice, heuristic search and adaptive information retrieval, automatic data interpretation, portals for knowledge sharing and reuse, intelligent collaboration, and so on. For example, an expert system on the Internet that performs its tasks remotely can be thought of as a knowledge server. Users access the system from a distant location and get expert problem solving over the Internet. As another example, viewing intelligent agents as servers with appropriate knowledge is sometimes also convenient. Recall that all intelligent agents possess some knowledge and some problem-solving capabilities. Deploying agents to do some work on behalf of their users, communicating with other agents in a network environment, can be interpreted as knowledge-based assistance (service) to remote users.

Communicating with a knowledge server from a remote computer necessarily requires an appropriate software front-end. Nowadays such front-ends are most often Java-based interfaces. Figure 14 illustrates how a knowledge server and its user-interface front-end application operate in an essentially client-server manner. Note, however, that dividing a previously developed, integrated intelligent system into a client and a server in order to operate on a network (the Internet) can be difficult. The most important reason is the need to minimize communication between the client and the server while ensuring acceptable response time. As Figure 14 shows, the solution is usually to let the client perform computationally expensive user-interface operations, and to let the knowledge server both store the knowledge base and perform the problem-solving task locally. A mobile agent can be deployed to provide appropriate interaction between the client and the server. Ensuring multiplatform support in such applications is a must, so the Java virtual machine usually comes to the rescue.


Figure 14. Client-server architecture of knowledge servers: the end user works with the user-interface front end on the client, which communicates over a network connection with the knowledge server containing the problem solver and the knowledge base
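
The client side of this split can be sketched as follows. The host, port, and line-based query protocol are invented for illustration; a real front end would use whatever protocol its knowledge server defines:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // The front end does the user-interface work locally and ships only
    // compact queries to the knowledge server, which stores the knowledge
    // base and solves the problem remotely.
    class KnowledgeServerClient {
        public static void main(String[] args) throws IOException {
            try (Socket s = new Socket("knowledge-server.example.org", 7001);
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("(advice :domain loans :case client-42)");
                System.out.println("Expert advice: " + in.readLine());
            }
        }
    }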

If multiple knowledge servers are to cooperate in solving a complex problem, the architecture is more complex and calls for knowledge sharing between multiple servers. This is just another problem in the field of intelligent systems that stresses the need for their ontological foundation.

Practical problems of using knowledge servers are those of network congestion, unreliable network connection, and server availability. Installing mirror sites for knowledge servers can increase server robustness and availability.

In a virtual organization, complementary resources existing in a number of cooperating companies are left in place, on their servers, but are integrated to support business processes or a particular product effort [36]. The resources get selectively assigned to the virtual organization if their "owner" does not use them. Moreover, the resources in a virtual organization get quickly created, assembled, and integrated within a business domain. Virtual organizations use the Internet selectively in order to create or assemble productive resources quickly, frequently, and concurrently. Such productive resources include research, manufacturing, design, business, learning and training, etc. AI has been successfully used in a number of virtual organizations, including virtual laboratories, virtual office systems, concurrent engineering projects, virtual classrooms, and virtual environments for training.

Virtual organizations use the diffusion of computer networks and the globalization of specialized knowledge to become independent of geographical constraints [30]. They have restructured around communication networks, building on organizational principles that emphasize flexibility, knowledge accumulation and deployment, and distributed teamwork. Applying intelligent agents in such a setting enables collaboration between the users and resource agents in a virtual organization, provides knowledge exchange about the resources, and facilitates user interaction with these resources. In fact, in virtual organizations intelligent agents, knowledge-based systems, and AI-based planning become essential, since humans have limited ability to keep track of what is going on in the broad range of virtual organization activities, given the tight time constraints and limited resources required and used by virtual organizations [36]. Worse still, there are frequent interruptions of their work; for example, white-collar employees receive a communication (electronic, paper, or oral) every five minutes. Using agents and other AI technologies in virtual organizations mitigates the limitations and constraints of humans and makes it possible to monitor and control substantial resources without the time constraints inherent in human organizations.

2.7. Knowledge Management

Knowledge management is the process of converting knowledge from the sources available to an organization and connecting people with that knowledge [35]. It involves the identification and analysis of available and required knowledge assets and knowledge-asset-related processes, and the subsequent planning and control of actions to develop both the assets and the processes so as to fulfill organizational objectives [1]. At the Artificial Intelligence Applications Institute of the University of Edinburgh, knowledge assets are defined as the knowledge regarding markets, products, processes, technologies, and organizations that a business owns or needs to own and which enables its business processes to generate profits, add value, etc. Knowledge management involves not only these knowledge assets but also the processes that act upon the assets, such as developing knowledge, preserving knowledge, using knowledge, assessing knowledge, applying knowledge, updating knowledge, sharing knowledge, transforming knowledge, and knowledge transfer. Knowledge management facilitates the creation, access, and reuse of knowledge, typically using advanced technology, such as the World Wide Web, Lotus Notes, the Internet, and intranets. Knowledge-management systems contain numerous knowledge bases, with both numeric and qualitative data and knowledge (e.g., searchable Web pages). Important AI developments, such as intelligent agents, knowledge discovery in databases, and ontologies, are also important parts of knowledge-management systems.

Formal management of knowledge assets and the corresponding processes is necessary for several reasons. In the fast-changing business world, knowledge needs to evolve and be assimilated equally fast [1]. On the other hand, due to competitive pressures, the number of people who hold this knowledge is decreasing. Knowledge takes time to experience and acquire, and employees have less and less time for this. When employees retire or leave the company, as well as when the company undergoes a change in strategic direction, some knowledge gets lost. Within the company, all the important knowledge must be represented using a common vocabulary if the knowledge is to be properly understood. In order to be able to share and reuse knowledge among differing applications in the company and for various types of users, it is necessary to share the company's existing knowledge sources, and to identify, plan, capture, acquire, model, validate, verify, and maintain future knowledge assets. In everyday practice, the company's employees need access to the right knowledge, at the right time, in the right location.

Essentially, all this stresses the need to model business processes, resources, capabilities, roles and authority, communication between agents in the company, and control over the processes.

Knowledge modeling and knowledge engineering practices help achieve these objectives [38]. Figure 15 illustrates which IT/AI technologies are the major knowledge management enablers.

Figure 15. Key IT/AI enablers for formal knowledge management. (The figure shows knowledge management at the center, surrounded by its enablers: intelligent agents, groupware, data mining, document retrieval, intranets, knowledge-based systems, ontologies, decision support, databases, browsers, pointers to people, and XML.)

Numerous KM applications have been developed for enhancing customer-relationship management, research and discovery, and business processes, as well as group collaboration for design and other purposes [34]. Their typical architecture is layered, starting from information and knowledge sources at the bottom (word processing, electronic document management, databases, email, Web, people), with an appropriate infrastructure layer (email, file servers, Internet/intranet services) on top of the base one.

The adjacent upper layer is the knowledge repository, which provides content management. Above it is the knowledge map, which defines the corporate taxonomy, and on top of that is the knowledge management services layer, containing two kinds of services: discovery services and collaboration services. These lie just beneath the knowledge portal layer, which provides the applications' interface to the knowledge management system. The application layer at the top specifies the company's applications that use the knowledge management system; these may include customer-relationship management, competitive intelligence, best-practice systems, product development, and other applications.
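
As a rough illustration, the layer stack just described can be written down as a simple data structure. The following Python sketch was written for this survey; the list-of-tuples representation and the component groupings are assumptions, while the layer names come from the architecture above:

    # A minimal sketch of the layered KM architecture, ordered bottom-up.
    # Component names are examples from the text; the representation itself
    # is an illustrative assumption.
    KM_STACK = [
        ("information and knowledge sources",
         ["documents", "databases", "email", "Web", "people"]),
        ("infrastructure",
         ["email", "file servers", "Internet/intranet services"]),
        ("knowledge repository", ["content management"]),
        ("knowledge map", ["corporate taxonomy"]),
        ("KM services", ["discovery services", "collaboration services"]),
        ("knowledge portal", ["applications' interface to the KM system"]),
        ("applications",
         ["customer-relationship management", "competitive intelligence",
          "best-practice systems", "product development"]),
    ]

    for level, (layer, components) in enumerate(KM_STACK, start=1):
        print(f"{level}. {layer}: {', '.join(components)}")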

Obviously, knowledge management is not a single technology, but rather a collection of indexing, classifying, and information-retrieval technologies, coupled with methodologies designed to achieve the results desired by the user.

The conventional approach to knowledge management keeps data in a database, in which case the database schema provides the context for the data. However, this approach suffers from poor scalability. A convenient alternative is XML (eXtensible Markup Language): XML provides the context for turning a data set into information, and knowledge management provides the tools for managing this information [13].

Superficially, XML is a markup language like HTML. A deeper look shows that XML is actually a family of languages that provide more semantic management of information than HTML [3]. It has been designed and published by the World Wide Web Consortium as a standard that provides the means to describe data; unlike in HTML, tags are not predefined in XML [48]. XML is self-describing: it uses a Document Type Definition (DTD) to formally describe the data. XML is a universal language for data on the Web that lets developers deliver content from a wide variety of applications to the desktop [52]. XML promises to standardize the way information is searched for, exchanged, adaptively presented, and personalized. Content authors are freed from defining the style of an XML document, as the content is separated from the presentation of the document. The Extensible Stylesheet Language (XSL) can display the same information in many ways, e.g., as a table, a bulleted list, a pie chart, or even in a different language.
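
As a small illustration of XML's self-describing nature, the sketch below defines a DTD for a hypothetical two-element vocabulary (the element names are invented for this example) and validates a conforming document against it. It uses the third-party lxml library for DTD validation, since Python's standard library parses XML but does not validate against a DTD:

    # A minimal sketch: the DTD formally describes the data, and the tags
    # give each piece of text its context. The <customer> vocabulary is
    # hypothetical, made up for this illustration.
    from io import StringIO
    from lxml import etree  # pip install lxml

    dtd = etree.DTD(StringIO("""
    <!ELEMENT customer (name, segment)>
    <!ELEMENT name (#PCDATA)>
    <!ELEMENT segment (#PCDATA)>
    """))

    doc = etree.fromstring(
        "<customer><name>ACME Ltd.</name>"
        "<segment>manufacturing</segment></customer>"
    )

    print(dtd.validate(doc))        # True: the document conforms to the DTD
    print(doc.findtext("segment"))  # the tag supplies context: 'manufacturing'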

In the context of knowledge management, XML is best thought of as a set of rules for defining data structures. Key elements in a document can be categorized according to meaning, rather than simply according to how they are presented in a particular browser [52]. Instead of selecting a document by the metatags listed in its header, a search engine can scan through the entire document for the XML tags that identify individual pieces of text and image. Defining objects that tag or identify details about data is increasingly popular with companies trying to leverage both the structured data in relational databases and the unstructured data found on the Internet and in other sources. Knowledge management is trying to bring all this data together, and XML categorizes it [13]. Data modelers produce an XML DTD, which represents the domain data model. Programmers then provide the knowledge management tools with reusable XML parser code so that the community members can easily interact with the model. The knowledge management tools are then used to pass information to one another or to move it between knowledge repositories. The KM tools keep data and context together by tagging the exported data with XML syntax or by interpreting the imported tagged data according to the already created domain DTD. Knowledge repositories store the data either as marked-up flat files or in databases with schemas that are consistent with the domain DTD. The data is never separated from its context, and the context always accurately represents the domain data model.
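
The export/import round trip just described can be sketched with the Python standard library alone. The element names below stand in for a real domain DTD and are purely illustrative:

    # A minimal sketch of a KM tool exporting data together with its context
    # by tagging it in XML, and another tool recovering both from the tags.
    import xml.etree.ElementTree as ET

    def export_asset(asset: dict) -> bytes:
        """Tag exported data so it carries its domain context with it."""
        root = ET.Element("knowledge_asset")
        for tag, value in asset.items():
            ET.SubElement(root, tag).text = value
        return ET.tostring(root)

    def import_asset(payload: bytes) -> dict:
        """Interpret imported tagged data: the tags restore the context."""
        root = ET.fromstring(payload)
        return {child.tag: child.text for child in root}

    payload = export_asset({"topic": "best practices", "owner": "sales"})
    print(import_asset(payload))  # {'topic': 'best practices', 'owner': 'sales'}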

A fundamental building block of a knowledge management infrastructure and architecture is the knowledge portal. Generally, portals are large doors or gateways to many other places, indicating that the portal itself is not the final destination. A Web portal is a Web site, usually with little content of its own, providing links to many other sites that can either be accessed directly by clicking on a designated part of a browser screen, or found by following an organized sequence of related categories. A knowledge portal provides two distinct interfaces: a knowledge producer interface, supporting the knowledge mapping needs of knowledge workers in their job of gathering, analyzing, adding value to, and sharing information and knowledge among peers, and a knowledge consumer interface that facilitates the communication of the knowledge workers' product and its dissemination through the enterprise to the right people, at the right time, to improve their decision-making. A knowledge portal provides all the facilities of an information catalog plus collaborative facilities, expertise management tools, and a knowledge catalog (access to the knowledge repository). The knowledge catalog is a metadata store that supports multiple ways of organizing and gathering content according to the different taxonomies used in the enterprise practice communities, including an enterprise-wide taxonomy when one is defined.
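
A minimal sketch of these two portal interfaces might look as follows; the class and method names are hypothetical illustrations, not a standard portal API:

    # The producer interface serves knowledge workers who gather, analyze,
    # and share knowledge; the consumer interface disseminates their products.
    # All names here are illustrative assumptions.
    from abc import ABC, abstractmethod

    class KnowledgeProducerInterface(ABC):
        @abstractmethod
        def publish(self, item: str, taxonomy_path: str) -> None:
            """File a knowledge item under a node of the corporate taxonomy."""

    class KnowledgeConsumerInterface(ABC):
        @abstractmethod
        def find(self, query: str) -> list[str]:
            """Locate knowledge items relevant to a decision at hand."""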

2.8. User Modeling

A user model is an explicit representation of properties of a particular user, which allows the system to adapt diverse aspects of its performance to individual users. Adapting the definition from [58], user modeling can be thought of as the construction of a qualitative representation, called a user model, that accounts for user behavior in terms of a system’s background knowledge. These three - the user model, the user behavior, and the background knowledge - are the essential elements of user modeling.

Techniques for user modeling have been developed and evaluated by researchers in a number of fields, including artificial intelligence, education, psychology, linguistics, and human-computer interaction.

As a matter of fact, the field of user modeling has resulted in significant amounts of theoretical work, as well as practical experience, in developing applications that "care" about their users in traditional areas of human-computer interaction and tutoring systems [55]. It also has a significant impact on recent developments in areas like information filtering, e-commerce, adaptive presentation techniques, and interface agents [7].

In user modeling, user behavior is the user's observable response to a stimulus from the application that maintains the user model. The user behavior is typically an action (e.g., completing a form on the Web) or, more commonly, the result of that action (e.g., the completed form). For example, while a user is "browsing" through an adaptive hyperdocument, all user actions are registered [6]. Based on these observations, the user modeling system maintains a model of the user's knowledge about each domain model concept. In the case of adaptive hyperdocument systems, for each domain model concept the user model keeps track of how much the user knows about this concept and whether the user has read something about it. Some user modeling systems can construct a user model from a single piece of behavior, while others require multiple behaviors to accomplish their task.
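
A minimal sketch of such a per-concept user model follows; the field names and the fixed-increment update rule are arbitrary assumptions made for illustration, not taken from any particular system:

    # For each domain concept the system tracks an estimated knowledge level
    # and whether the user has read something about it.
    from dataclasses import dataclass, field

    @dataclass
    class ConceptState:
        knowledge: float = 0.0   # estimated degree of knowledge, 0.0..1.0
        has_read: bool = False   # has the user read about this concept?

    @dataclass
    class UserModel:
        concepts: dict[str, ConceptState] = field(default_factory=dict)

        def observe_reading(self, concept: str, gain: float = 0.2) -> None:
            """Register the behavior 'user read a page about concept'."""
            state = self.concepts.setdefault(concept, ConceptState())
            state.has_read = True
            state.knowledge = min(1.0, state.knowledge + gain)

    model = UserModel()
    model.observe_reading("XML")
    print(model.concepts["XML"])  # ConceptState(knowledge=0.2, has_read=True)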

The background knowledge comprises the correct facts, procedures, concepts, etc. of the domain, as well as the misconceptions held and other errors made by a population of users in the same domain [58]. Other background knowledge may include historical knowledge about a particular user (e.g., past qualitative models and quantitative measures of performance, user preferences and idiosyncrasies, etc.), and stereotypical knowledge about user populations in the domain.


Being the output of the user modeling process, the user model contains a partial, primarily qualitative representation of the user's knowledge (beliefs) about a particular topic, skill, procedure, domain, or process. The user model describes objects and processes in terms of spatial, temporal, or causal relations. In tutoring systems, the user modeling process detects mismatches between actual and desired behaviors and knowledge, in terms of incorrect or inconsistent knowledge (i.e., misconceptions) and missing or incomplete knowledge. Intelligent analysis of the user's actions has to decide whether the actions are correct or not, find out what exactly is wrong or incomplete, and possibly identify which missing or incorrect knowledge may be responsible for the error (the last functionality is referred to as knowledge diagnosis) [7]. Intelligent analyzers can provide the user with extensive error feedback and update the user model. A more convenient approach is interactive intelligent help support, which provides intelligent advice to the user at each step of his/her interaction with the application. The level of help can vary from signaling a wrong step, to giving a hint, to executing the next step for the user. In any such case, the user modeling system watches the actions of the user, understands them, and uses this understanding to provide help and to update the user model.
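
As a toy illustration of knowledge diagnosis, the sketch below checks a user's answer against background knowledge that contains both the correct facts and a small library of known misconceptions; the domain content is made up for the example:

    # Compare observed behavior with background knowledge holding correct
    # facts and known misconceptions, to decide what may explain an error.
    CORRECT = {"2+3": "5"}
    MISCONCEPTIONS = {("2+3", "6"): "confuses addition with multiplication"}

    def diagnose(problem: str, answer: str) -> str:
        """Decide whether an action is correct and what may explain an error."""
        if CORRECT.get(problem) == answer:
            return "correct"
        misconception = MISCONCEPTIONS.get((problem, answer))
        if misconception is not None:
            return "incorrect; likely misconception: " + misconception
        return "incorrect; cause not identified (missing or incomplete knowledge)"

    print(diagnose("2+3", "6"))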

The field of user modeling has been very popular and constantly growing in recent years. There are a number of interesting research issues, such as:

- construction of user models: knowledge, beliefs, and misconceptions, preferences, goals and plans, cognitive styles, user modeling agents and brokers;

- exploitation of user models to achieve adaptive information filtering and retrieval, tailored information presentation, transfer of task performance from user to system, selection of instructional actions, and interface adaptation;

- integrating active intelligent agents into user modeling systems, in order to let an active intelligent agent be a guide or assistant to each individual user, taking into account the user's current progress in the application's domain or skills, the user's specific goal/task, the user's needs to communicate with other users, and the perceived mental model of the user in charting a personalized use of the system;

- applying user modeling agents in collaborative environments and work scenarios, i.e., integrating the ideas of interactive learning environments, collaborative systems, distributed systems, and open systems;

- cooperative user models, i.e., constructing the user models for a system/application in collaboration with each user;

- using statistical techniques for predicting the user's ability and updating the user model accordingly, based on the number of hints the user has required up to a certain point in interacting with the system (see the sketch after this list);

- inference techniques for user modeling, including neural networks, numerical uncertainty management, epistemic logic and other logic-based formalisms, and stereotype or task hierarchies;

- creating and maintaining appropriate user models for Web-based adaptive systems;

- integration of individual user models in collaborative work environments in order to create multiple user models or group models;

- applications of user modeling techniques in various areas, like adaptive learning and on-line help environments, e-commerce, interface agents, explanation of system actions, adaptive hypermedia and multimodal interaction, and support of collaboration and of users with special needs;

- practical issues of user modeling, like privacy, consistency, evaluation, and standardization.
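
Regarding the statistical-techniques item above, a minimal sketch of predicting ability from the number of hints required might look as follows; the constants and the odds-based update rule are arbitrary, hypothetical choices, whereas real systems would use calibrated statistical models:

    # Each hint requested multiplies the odds of mastery by a penalty factor,
    # lowering the estimated ability. Constants are arbitrary.
    def estimated_ability(hints_used: int,
                          prior: float = 0.7,
                          penalty: float = 0.8) -> float:
        """Estimated probability (0..1) that the user has mastered the skill."""
        odds = (prior / (1.0 - prior)) * (penalty ** hints_used)
        return odds / (1.0 + odds)

    for hints in range(4):
        print(hints, round(estimated_ability(hints), 3))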

3. Projects, Systems, and Applications

There are a number of practical AI systems, applications and development projects that employ the knowledge modeling techniques surveyed in the first part of this paper. The sections of this second part present some concrete examples of such projects, systems, and applications.

3.1. The I3 Program

I3 stands for Intelligent Integration of Information. It is a long-term, DARPA-sponsored research and development program, concerned with the development of large-scale, distributed intelligent applications [37]. I3 draws on several specific science areas, such as knowledge-base management, object-oriented software development, and meta-language descriptions. It builds upon earlier DARPA work on knowledge representation and communication standards.
