CTI staff have conducted both fundamental and applied research
in the fields of uncertainty representation, inference, and
belief revision. In the course of this work, they have collaborated
with leading researchers in the area, such as Zadeh, Shafer,
Schum, Lindley, and Shastri.
CTI has investigated both Bayesian
and non-Bayesian representations of uncertainty. One objective
of this work has been to represent not simply "betting
odds" (as exemplified in classical Bayesian probabilities),
but also the amount of knowledge underlying
a set of beliefs. CTI staff explored the theoretical underpinnings
of uncertainty in a variety of reviews
and analyses. CTI's recent work in this area has involved
developing an artificial
intelligence system that reasons about qualitatively
different patterns of uncertainty.
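The distinction between betting odds and the amount of knowledge behind them can be illustrated with a minimal Dempster-Shafer-style sketch (an assumed illustration in Python, not a description of CTI's system; the frame and mass values are invented). A basic mass assignment over subsets of a frame of discernment separates evidence-backed belief from uncommitted mass, something a single Bayesian probability cannot express:

```python
# Minimal Dempster-Shafer-style belief functions over a two-element
# frame {rain, dry}. Mass on the whole frame represents ignorance.

def belief(masses, hypothesis):
    """Sum of mass committed to subsets wholly inside the hypothesis."""
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(masses, hypothesis):
    """Sum of mass on subsets consistent with the hypothesis."""
    return sum(m for s, m in masses.items() if s & hypothesis)

# 60% of the evidential mass supports rain; 40% is uncommitted.
masses = {frozenset({"rain"}): 0.6,
          frozenset({"rain", "dry"}): 0.4}

bel = belief(masses, frozenset({"rain"}))       # 0.6
pl = plausibility(masses, frozenset({"rain"}))  # 1.0
```

The gap between plausibility and belief (here 0.4) measures how little is known, independently of the odds themselves; a Bayesian point probability of 0.6 would look the same whether backed by extensive data or by near-total ignorance.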
Inference is the expansion of one's set of beliefs that occurs
when new beliefs are derived from a set of pre-existing beliefs
by means of "rules" (e.g., of logic or probability).
CTI has investigated, constructed, and tested a variety of
mechanisms for inference, from assumption-based
truth maintenance to rapid
parallel reasoning in a connectionist system.
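The notion of inference as belief-set expansion can be sketched as a toy forward-chaining loop (illustrative only; the rules and facts here are invented, and none of CTI's systems is being reproduced):

```python
# Forward chaining: repeatedly apply rules of the form
# (set_of_premises, conclusion) until the belief set stops growing.

def expand(beliefs, rules):
    """Return the closure of `beliefs` under `rules`."""
    beliefs = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                changed = True
    return beliefs

rules = [({"bird"}, "has_feathers"),
         ({"bird", "healthy"}, "can_fly")]

derived = expand({"bird", "healthy"}, rules)
# derived now also contains has_feathers and can_fly
```

Note that this process only ever adds beliefs; it has no way to retract one, which is exactly the gap belief revision addresses.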
Rules of inference, however, are not the same as rules
of reasoning. Belief revision involves strategies
for deciding what inferences to attempt and when. Belief revision
strategies are particularly important when new information
conflicts with pre-existing beliefs. In this case, the belief
set must contract and not simply expand, and a choice must
be made among alternative ways to revise pre-existing beliefs
to make them consistent with new information or goals. Belief
revision has been one of CTI's prime areas of research.
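The contraction step described above can be sketched as follows (an assumed, highly simplified illustration, not CTI's method; the beliefs and entrenchment scores are invented). When new information contradicts a held belief, one of the two must be retracted, and an entrenchment ordering decides which:

```python
# Beliefs are string literals; "~p" is the negation of "p".

def negate(b):
    return b[1:] if b.startswith("~") else "~" + b

def revise(beliefs, new_belief, entrenchment):
    """Add new_belief, first contracting away its negation if the
    negation is less entrenched; otherwise reject the new belief."""
    beliefs = set(beliefs)
    conflict = negate(new_belief)
    if conflict in beliefs:
        if entrenchment(new_belief) >= entrenchment(conflict):
            beliefs.discard(conflict)  # contract: retract old belief
        else:
            return beliefs             # new information is rejected
    beliefs.add(new_belief)
    return beliefs

# New, strongly supported evidence overturns a weakly held belief.
entrench = {"door_locked": 1, "~door_locked": 3}.get
result = revise({"door_locked", "lights_on"}, "~door_locked",
                lambda b: entrench(b, 0))
# door_locked is retracted before ~door_locked is added
```

The interesting questions, of course, lie in where the entrenchment ordering comes from and how conflicts among more than two beliefs are resolved; this sketch only shows why contraction must precede expansion.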
CTI's work in this area began with a generic inference engine,
called the Non-Monotonic Probabilist,
which combines aspects of a formal numerical uncertainty calculus
with assumption-based reasoning. The
Non-Monotonic Probabilist was applied in the development
of systems for several different domains:
- an expert system for image understanding (ETL),
- the Self-Reconciling Evidential Database, an information management system for national intelligence analysts, and
- an in-flight pilot decision aid for route replanning (WPAFB).
CTI's more recent work has involved connectionist implementation
of a layered
reasoning architecture, in which a reflective (metacognitive)
subsystem monitors and regulates the activity of a rapid reflexive
(recognitional) subsystem. The reflective subsystem learns
effective strategies for belief revision from experience.
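The control relationship between the two layers can be sketched schematically (an assumed illustration of the general reflective/reflexive pattern, not CTI's connectionist implementation; all names and the conflict test are invented). The reflexive layer proposes a fast, habitual conclusion, and the reflective layer accepts it only when it conflicts with no held belief, escalating to explicit revision otherwise:

```python
def reflexive(cue, associations):
    """Fast pattern lookup: cue -> habitual conclusion."""
    return associations.get(cue)

def reflective(proposal, beliefs, conflicts_with):
    """Monitor the reflexive proposal; intervene on conflict."""
    if proposal is None:
        return beliefs, "no_response"
    if any(conflicts_with(proposal, b) for b in beliefs):
        return beliefs, "escalate_to_revision"
    return beliefs | {proposal}, "accepted"

# The reflexive layer jumps from smoke to fire, but the reflective
# layer notices the conflict with a held belief and intervenes.
proposal = reflexive("smoke", {"smoke": "fire"})
beliefs, status = reflective(proposal, {"no_fire"},
                             lambda p, b: b == "no_" + p)
```

In a learning version of this architecture, the escalation policy itself would be adjusted from experience rather than hard-coded as here.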
CTI's work on uncertainty, inference, and belief revision
has led to insights into human reasoning in real-world domains.
In particular, it sheds light on some so-called biases
in decision making that may involve appropriate reasoning
strategies rather than faulty rules of
inference. This work on biases, in turn, was the basis
for the development of Personalized
and Prescriptive Decision Aiding.
This work has also led to research on inferential retrieval,
in which reasoning about mental models of a research domain
supports more efficient retrieval of relevant documents.