Knowledge-based systems


A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. Knowledge-based systems were the focus of early artificial intelligence researchers in the 1980s. The term can refer to a broad range of systems. However, all knowledge-based systems have two defining components: an attempt to represent knowledge explicitly, called a knowledge base, and a reasoning system that allows them to derive new knowledge, known as an inference engine.


Components

The knowledge base contains domain-specific facts and rules [1] about a problem domain (rather than knowledge implicitly embedded in procedural code, as in a conventional computer program). In addition, the knowledge may be structured by means of a subsumption ontology, frames, conceptual graphs, or logical assertions. [2]

The inference engine uses general-purpose reasoning methods to infer new knowledge and to solve problems in the problem domain. Most commonly, it employs forward chaining or backward chaining. Other approaches include the use of automated theorem proving, logic programming, blackboard systems, and term rewriting systems such as Constraint Handling Rules (CHR). These more formal approaches are covered in detail in the Wikipedia article on knowledge representation and reasoning.
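
The split between a declarative knowledge base and a general-purpose inference engine can be illustrated with a minimal forward-chaining sketch in Python; the facts, rules, and names below are invented for illustration and do not correspond to any particular system:

```python
# Minimal forward-chaining sketch: the knowledge base is pure data,
# and the engine is generic and knows nothing about the domain.
# All facts and rules here are illustrative examples.

facts = {"fever", "cough"}

# Each rule: (set of antecedent facts, fact to conclude)
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection"}, "recommend_chest_exam"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents are all known,
    until no rule adds a new fact (a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'fever', 'cough', 'respiratory_infection', 'recommend_chest_exam'}
```

The same knowledge base could be handed to a backward-chaining engine instead; only the reasoning strategy changes, not the domain knowledge.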

Aspects and development of early systems

Knowledge-based vs. expert systems

The term "knowledge-based system" was often used interchangeably with "expert system", possibly because almost all of the earliest knowledge-based systems were designed for expert tasks. However, these terms tell us about different aspects of a system:

Today, virtually all expert systems are knowledge-based, whereas knowledge-based system architecture is used in a wide range of types of system designed for a variety of tasks.

Rule-based systems

The first knowledge-based systems were primarily rule-based expert systems. These represented facts about the world as simple assertions in a flat database and used domain-specific rules to reason about, and add to, these assertions. One of the most famous of these early systems was Mycin, a program for medical diagnosis.

Representing knowledge explicitly via rules had several advantages:

  1. Acquisition and maintenance. Using rules meant that domain experts could often define and maintain the rules themselves rather than relying on a programmer.
  2. Explanation. Representing knowledge explicitly allowed systems to reason about how they came to a conclusion and to use this information to explain results to users: for example, a system could follow the chain of inferences that led to a diagnosis and use those steps to justify the diagnosis (see the sketch after this list).
  3. Reasoning. Decoupling the knowledge from the processing of that knowledge enabled general-purpose inference engines to be developed. These systems could reach conclusions that followed from a data set that the initial developers may not even have been aware of. [3]
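
The backward-chaining sketch below illustrates the first two points: the rules are declarative data that an expert could edit directly, and the engine records the chain of rules it used so it can explain a conclusion. The rules and symptom names are invented for illustration and are not taken from Mycin.

```python
# Backward-chaining sketch with an explanation trace.
# Rules and facts are illustrative, not from any real expert system.

facts = {"fever", "stiff_neck"}

# goal -> list of alternative antecedent sets that establish it
rules = {
    "meningitis_suspected": [{"fever", "stiff_neck"}],
    "order_lumbar_puncture": [{"meningitis_suspected"}],
}

def prove(goal, facts, rules, trace):
    """Try to establish `goal`, recording each rule used in `trace`."""
    if goal in facts:
        return True
    for antecedents in rules.get(goal, []):
        if all(prove(a, facts, rules, trace) for a in antecedents):
            trace.append(f"{sorted(antecedents)} -> {goal}")
            return True
    return False

trace = []
if prove("order_lumbar_puncture", facts, rules, trace):
    print("Conclusion reached because:")
    for step in trace:
        print("  ", step)
```

Running the sketch prints the two rule firings that led to the conclusion, which is the essence of the explanation facilities offered by early rule-based systems.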

Meta-reasoning

Later architectures for knowledge-based reasoning, such as the BB1 blackboard architecture (a blackboard system), [4] allowed the reasoning process itself to be affected by new inferences, providing meta-level reasoning. BB1 allowed the problem-solving process itself to be monitored. Different kinds of problem-solving (e.g., top-down, bottom-up, and opportunistic problem-solving) could be selectively mixed based on the current state of problem solving. Essentially, the problem-solver was being used both to solve a domain-level problem and to manage its own control problem, which could depend on the former.
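
A rough sketch of the idea in Python follows; the knowledge sources and the control policy are invented for illustration and greatly simplify what a system like BB1 actually does:

```python
# Toy blackboard with a control loop that chooses the next knowledge
# source based on the current state of the blackboard itself.
# Knowledge sources and the control policy are illustrative only.

blackboard = {"raw_data": [3, 1, 2], "partial": None, "solution": None}

def sort_data(bb):          # a "bottom-up" knowledge source
    if bb["partial"] is None:
        bb["partial"] = sorted(bb["raw_data"])

def summarize(bb):          # a "top-down" knowledge source
    if bb["partial"] is not None and bb["solution"] is None:
        bb["solution"] = {"min": bb["partial"][0], "max": bb["partial"][-1]}

def control(bb):
    """Meta-level step: pick a knowledge source whose preconditions
    match the blackboard state; return None when nothing applies."""
    if bb["partial"] is None:
        return sort_data
    if bb["solution"] is None:
        return summarize
    return None

while (ks := control(blackboard)) is not None:
    ks(blackboard)

print(blackboard["solution"])   # {'min': 1, 'max': 3}
```

The point of the sketch is that the `control` function reasons about the problem-solving state itself, so the choice of what to do next is itself a product of inference rather than a fixed procedure.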

Other examples of knowledge-based system architectures supporting meta-level reasoning are MRS [5] and SOAR.

Widening of application

In the 1980s and 1990s, in addition to expert systems, other applications of knowledge-based systems included real-time process control, [6] intelligent tutoring systems, [7] and problem-solvers for specific domains such as protein structure analysis, [8] construction-site layout, [9] and computer system fault diagnosis. [10]

Advances driven by enhanced architecture

As knowledge-based systems became more complex, the techniques used to represent the knowledge base became more sophisticated and included logic, term-rewriting systems, conceptual graphs, and frames.

Frames, for example, are a way of representing world knowledge using techniques analogous to object-oriented programming, specifically classes and subclasses, hierarchies and relations between classes, and the behavior of objects. With the knowledge base more structured, reasoning could occur not only by independent rules and logical inference, but also through interactions within the knowledge base itself. For example, procedures stored as daemons attached to objects could fire and could replicate the chaining behavior of rules. [11]
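
The analogy to object-oriented programming can be sketched in Python as follows; the frame, slot names, and daemon are invented for illustration:

```python
# Frame-like sketch: slots hold values, and a daemon (an "if-added"
# procedure) fires when a slot is filled, mimicking rule chaining.
# The frame, slot names, and daemon are illustrative only.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # subsumption: a parent frame
        self.slots = {}
        self.daemons = {}             # slot -> procedure run on update

    def get(self, slot):
        """Look up a slot, inheriting from the parent frame if absent."""
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

    def set(self, slot, value):
        self.slots[slot] = value
        if slot in self.daemons:      # fire the if-added daemon
            self.daemons[slot](self, value)

vehicle = Frame("vehicle")
vehicle.slots["wheels"] = 4

car = Frame("car", parent=vehicle)
car.daemons["speed"] = lambda f, v: print(f"{f.name} now moving at {v}")

print(car.get("wheels"))   # 4, inherited from the vehicle frame
car.set("speed", 60)       # daemon fires: "car now moving at 60"
```

Here slot lookup gives class-hierarchy inheritance, while the daemon plays the role a rule would play in chaining new conclusions off a changed fact.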

Advances in automated reasoning

Another advancement in the 1990s was the development of special-purpose automated reasoning systems called classifiers. Rather than statically declaring the subsumption relations in a knowledge base, a classifier allows the developer to simply declare facts about the world and lets the classifier deduce the relations. In this way, a classifier can also play the role of an inference engine. [12]
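
A greatly simplified sketch of this idea follows; real classifiers work over description-logic definitions, whereas this example just deduces class–subclass relations from declared property sets, and all names are invented:

```python
# Toy "classifier": classes are defined by the set of properties their
# members must have; class A subsumes class B if A's required
# properties are a subset of B's. Definitions are illustrative only.

definitions = {
    "animal":  {"alive"},
    "bird":    {"alive", "has_feathers"},
    "penguin": {"alive", "has_feathers", "swims"},
}

def classify(defs):
    """Deduce every subsumption (superclass, subclass) pair rather than
    requiring the developer to declare the hierarchy by hand."""
    pairs = []
    for general, g_props in defs.items():
        for specific, s_props in defs.items():
            if general != specific and g_props <= s_props:
                pairs.append((general, specific))
    return pairs

for general, specific in classify(definitions):
    print(f"{specific} is a kind of {general}")
# bird is a kind of animal; penguin is a kind of animal;
# penguin is a kind of bird
```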

A more recent advancement of knowledge-based systems was to adopt these technologies, especially a kind of logic called description logic, for the development of systems that use the internet. The internet often has to deal with complex, unstructured data that cannot be relied on to fit a specific data model. The technology of knowledge-based systems, and especially the ability to classify objects on demand, is ideal for such systems. The model for these kinds of knowledge-based internet systems is known as the Semantic Web. [13]
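
As a rough illustration of how such classification works over web-style data, the sketch below computes a transitive closure over subClassOf-style triples in plain Python; the triples are invented and no real RDF library is used:

```python
# Sketch of Semantic-Web-style inference: data is a set of
# (subject, predicate, object) triples, and a simple rule derives
# types along subClassOf links. Triples are illustrative only.

triples = {
    ("Dog",    "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex",    "type",       "Dog"),
}

def infer_types(triples):
    """Apply 'if x type C and C subClassOf D then x type D' to a fixed point."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(derived):
            if p == "type":
                for c, q, d in list(derived):
                    if q == "subClassOf" and c == o and (s, "type", d) not in derived:
                        derived.add((s, "type", d))
                        changed = True
    return derived

for s, p, o in sorted(infer_types(triples)):
    if p == "type":
        print(s, "type", o)   # rex is typed as Animal, Dog, and Mammal
```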

See also

Related Research Articles

Cyc

Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted. This is contrasted with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and be less "brittle" when confronted with novel situations.

Expert system

In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. Expert systems were first created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of AI software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge, in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning.

Symbolic artificial intelligence

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.

Forward chaining is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.

Logic in computer science

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas.

A blackboard system is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.

In artificial intelligence, model-based reasoning refers to an inference method used in expert systems based on a model of the physical world. With this approach, the main focus of application development is developing the model. Then at run time, an "engine" combines this model knowledge with observed data to derive conclusions such as a diagnosis or a prediction.

A "production system" is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world. Those rules, termed productions, are a basic representation found useful in automated planning, expert systems and action selection.

Semantic integration is the process of interrelating information from diverse sources, for example calendars and to-do lists, email archives, presence information, documents of all sorts, contacts, search results, and advertising and marketing relevance derived from them. In this regard, semantics focuses on the organization of and action upon information by acting as an intermediary between heterogeneous data sources, which may conflict not only by structure but also context or value.

A deductive language is a computer programming language in which the program is a collection of predicates ('facts') and rules that connect them. Such a language is used to create knowledge-based systems or expert systems which can deduce answers to problem sets by applying the rules to the facts they have been given. An example of a deductive language is Prolog, or its database-query cousin, Datalog.

Norms can be considered from different perspectives in artificial intelligence to create computers and computer software that are capable of intelligent behaviour.

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.

A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems, and probabilistic logic networks.

In computer science, a rule-based system is a computer system in which domain-specific knowledge is represented in the form of rules and general-purpose reasoning is used to solve problems in the domain.

In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.

A deductive classifier is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology, for example, the names of classes, sub-classes, properties, and restrictions on allowable values. The classifier determines whether the various declarations are logically consistent and, if not, will highlight the specific inconsistent declarations and the inconsistencies among them. If the declarations are consistent, the classifier can then assert additional information based on the input; for example, it can add information about existing classes or create additional classes. This differs from traditional inference engines that trigger off of IF-THEN conditions in rules. Classifiers are also similar to theorem provers in that their input and output are expressed in first-order logic. Classifiers originated with KL-ONE frame languages. They are increasingly significant now that they form part of the enabling technology of the Semantic Web. Modern classifiers leverage the Web Ontology Language. The models they analyze and generate are called ontologies.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

References

  1. Smith, Reid (May 8, 1985). "Knowledge-Based Systems Concepts, Techniques, Examples" (PDF). reidgsmith.com. Schlumberger-Doll Research. Retrieved 9 November 2013.
  2. Sowa, John F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations (1st ed.). Pacific Grove: Brooks / Cole. ISBN   978-0-534-94965-5.
  3. Hayes-Roth, Frederick; Donald Waterman; Douglas Lenat (1983). Building Expert Systems. Addison-Wesley. ISBN   0-201-10686-8.
  4. Hayes-Roth, Barbara (1984). BB1: an Architecture for Blackboard Systems that Control, Explain, and Learn about Their Own Behavior. Department of Computer Science, Stanford University.
  5. Genesereth, Michael R. "1983 - An Overview of Meta-Level Architecture". AAAI-83 Proceedings: 6.
  6. Larsson, Jan Eric; Hayes-Roth, Barbara (1998). "Guardian: An Intelligent Autonomous Agent for Medical Monitoring and Diagnosis". IEEE Intelligent Systems. 13 (1). Retrieved 2012-08-11.
  7. Clancey, William (1987). Knowledge-Based Tutoring: The GUIDON Program. Cambridge, Massachusetts: The MIT Press.
  8. Hayes-Roth, Barbara; Buchanan, Bruce G.; Lichtarge, Olivier; Hewitt, Mike; Altman, Russ B.; Brinkley, James F.; Cornelius, Craig; Duncan, Bruce S.; Jardetzky, Oleg (1986). PROTEAN: Deriving Protein Structure from Constraints. AAAI. pp. 904–909. Retrieved 2012-08-11.
  9. Robert Engelmore; et al., eds. (1988). Blackboard Systems. Addison-Wesley Pub (Sd).
  10. Bennett, James S. (1981). DART: An Expert System for Computer Fault Diagnosis. IJCAI.
  11. Mettrey, William (1987). "An Assessment of Tools for Building Large Knowledge-Based Systems". AI Magazine. 8 (4). Archived from the original on 2013-11-10. Retrieved 2013-11-10.
  12. MacGregor, Robert (June 1991). "Using a description classifier to enhance knowledge representation". IEEE Expert. 6 (3): 41–46. doi:10.1109/64.87683. S2CID   29575443.
  13. Berners-Lee, Tim; James Hendler; Ora Lassila (May 17, 2001). "The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific American. 284: 34–43. doi:10.1038/scientificamerican0501-34. Archived from the original on April 24, 2013.

Further reading