Conceptual dependency theory

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.

Roger Schank, then at Stanford University, introduced the model in 1969, in the early days of artificial intelligence.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

Schank developed the model to represent knowledge for natural language input into computers. Partly influenced by the work of Sydney Lamb, he aimed to make the meaning representation independent of the particular words used in the input, so that two sentences identical in meaning would have a single representation. The system was also intended to draw logical inferences.[2]

The model uses the following basic representational tokens:[3]

  • real-world objects, each with attributes
  • real-world actions, each with attributes
  • times
  • locations

A set of conceptual transitions then acts on this representation: for example, an ATRANS represents a transfer of an abstract relationship such as "give" or "take", a PTRANS represents a change of location such as "move" or "go", and an MTRANS represents a mental act such as "tell".

A sentence such as "John gave a book to Mary" is then represented as an ATRANS acting on two real-world objects, John and Mary.
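To make the representation concrete, here is a minimal sketch in Python of how the tokens and the ATRANS example above might be encoded as a slot-filler structure. The class and field names (Conceptualization, actor, obj, donor, recipient) are illustrative assumptions, not Schank's original notation.

```python
from dataclasses import dataclass

# Representational token: a real-world object with some attributes.
@dataclass(frozen=True)
class PhysicalObject:
    name: str
    attributes: tuple = ()

# A conceptualization built around a primitive act such as ATRANS or PTRANS.
@dataclass(frozen=True)
class Conceptualization:
    act: str                          # primitive act, e.g. "ATRANS", "PTRANS", "MTRANS"
    actor: PhysicalObject             # who performs the act
    obj: PhysicalObject               # what the act operates on
    donor: PhysicalObject = None      # source of a transfer, if any
    recipient: PhysicalObject = None  # destination of a transfer, if any

john = PhysicalObject("John")
mary = PhysicalObject("Mary")
book = PhysicalObject("book")

# "John gave a book to Mary": an ATRANS, a transfer of possession from John to Mary.
gave = Conceptualization(act="ATRANS", actor=john, obj=book, donor=john, recipient=mary)

# A paraphrase such as "Mary was given a book by John" maps to the same structure,
# so sentences identical in meaning share a single representation.
paraphrase = Conceptualization(act="ATRANS", actor=john, obj=book, donor=john, recipient=mary)

assert gave == paraphrase
```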

DESCRIPTION                                        ACTION    EXAMPLE
Transfer of abstract relationship                  ATRANS    give
Transfer of the physical location of the object    PTRANS    go
Application of physical force to an object         PROPEL    push
Grasping of an object by an actor                  GRASP     clutch
Movement of a body part by its owner               MOVE      kick
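The table lends itself to a simple lookup from example verbs to primitive acts. The sketch below is an illustrative assumption, not part of Schank's system; an actual conceptual dependency parser would also use the surrounding sentence structure, not the verb alone, to choose a primitive.

```python
# Illustrative mapping from example verbs to the primitive acts in the table above.
VERB_TO_PRIMITIVE = {
    "give": "ATRANS",    # transfer of an abstract relationship (e.g. possession)
    "take": "ATRANS",
    "go": "PTRANS",      # transfer of physical location
    "move": "PTRANS",
    "push": "PROPEL",    # application of physical force to an object
    "clutch": "GRASP",   # grasping of an object by an actor
    "kick": "MOVE",      # movement of a body part by its owner
}

def primitive_for(verb):
    """Return the primitive act for a verb, or None if the verb is not listed."""
    return VERB_TO_PRIMITIVE.get(verb.lower())

print(primitive_for("give"))  # ATRANS
print(primitive_for("kick"))  # MOVE
```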

Related Research Articles

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

Natural language processing (NLP) is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.

A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.

Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.

In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

An image schema is a recurring structure within our cognitive processes which establishes patterns of understanding and reasoning. As an understudy to embodied cognition, image schemas are formed from our bodily interactions, from linguistic experience, and from historical context. The term was introduced in Mark Johnson's book The Body in the Mind and in case study 2 of George Lakoff's Women, Fire and Dangerous Things, and is further discussed by Todd Oakley in The Oxford Handbook of Cognitive Linguistics, by Rudolf Arnheim in Visual Thinking, and in the collection From Perception to Meaning: Image Schemas in Cognitive Linguistics, edited by Beate Hampe and Joseph E. Grady.

The language of thought hypothesis (LOTH), sometimes known as thought ordered mental expression (TOME), is a view in linguistics, philosophy of mind and cognitive science, forwarded by American philosopher Jerry Fodor. It describes the nature of thought as possessing "language-like" or compositional structure. On this view, simple concepts combine in systematic ways to build thoughts. In its most basic form, the theory states that thought, like language, has syntax.

Roger Carl Schank was an American artificial intelligence theorist, cognitive psychologist, learning scientist, educational reformer, and entrepreneur.

Object Process Methodology (OPM) is a conceptual modeling language and methodology for capturing knowledge and designing systems, specified as ISO/PAS 19450. Based on a minimal universal ontology of stateful objects and processes that transform them, OPM can be used to formally specify the function, structure, and behavior of artificial and natural systems in a large variety of domains.

A conceptual model is a representation of a system. It consists of concepts used to help people know, understand, or simulate the subject the model represents. In contrast, physical models are physical objects, such as a toy model that may be assembled and made to work like the object it represents.

Sydney MacDonald Lamb is an American linguist and professor at Rice University, whose stratificational grammar is a significant alternative theory to Chomsky's transformational grammar. He has specialized in Neurocognitive Linguistics and a stratificational approach to language understanding.

A modeling perspective in information systems is a particular way to represent pre-selected aspects of a system. Any perspective has a different focus, conceptualization, dedication and visualization of what the model is representing.

In philosophy of mind, the computational theory of mind (CTM), also known as computationalism, is a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition. The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher, and cognitive scientist Jerry Fodor in the 1960s, 1970s, and 1980s. Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology. In the 2000s and 2010s the view has resurfaced in analytic philosophy.

Grammar induction is the process in machine learning of learning a formal grammar from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.

Frames are an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". They were proposed by Marvin Minsky in his 1974 article "A Framework for Representing Knowledge". Frames are the primary data structure used in artificial intelligence frame languages; they are stored as ontologies of sets.

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.

Script theory is a psychological theory which posits that human behaviour largely falls into patterns called "scripts" because they function analogously to the way a written script does, by providing a program for action. Silvan Tomkins created script theory as a further development of his affect theory, which regards human beings' emotional responses to stimuli as falling into categories called "affects". He noticed that the purely biological response of affect may be followed by awareness and by cognitive action on that affect, so that more than affect theory was needed to produce a complete explanation of what he called "human being theory".

The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation, the history of speech recognition, and the history of artificial intelligence.

The following outline is provided as an overview of and topical guide to natural-language processing:

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

References

  1. Schank, Roger (1969). "A conceptual dependency parser for natural language". Proceedings of the 1969 Conference on Computational Linguistics, Sång-Säby, Sweden, pp. 1-3.
  2. Cardiff University on Conceptual dependency theory.
  3. Simon, Thomas W.; Scholes, Robert J. (1982). Language, Mind, and Brain. ISBN 0-89859-153-8. p. 105.