Shift of Bias As Non-Monotonic Reasoning (1990)
by Benjamin N. Grosof and Stuart J. Russell
Abstract:
We show how to express many kinds of inductive leaps and shifts of
bias as deductions in a non-monotonic logic of prioritized
defaults, based on circumscription (a schematic formulation is
sketched below). This continues our effort to view learning a
concept from examples as an inference process based on a declarative
representation of biases, developed in Russell & Grosof (1987, 1989).
In particular, we demonstrate that "version space" bias can be
encoded formally in such a way that it will be weakened when
contradicted by observations (see the code sketch below).
Implementing inference in the non-monotonic logic then enables the
principled, automatic modification of the description space employed
in a concept learning program, which Bundy et al. (1985) called "the
most urgent problem facing automatic learning". We also show how to
formulate two kinds of "preference" bias with prioritized defaults:
maximal specificity and maximal generality. This leads us to a
perspective on inductive biases as preferred beliefs (about the
external environment). Several open questions remain, including how
to implement the required non-monotonic theorem-proving efficiently,
and how to provide an epistemological basis for default axioms and
for the priorities among them.
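
For orientation, the following is a textbook McCarthy-style
circumscription schema of the general kind the paper builds on, not
the paper's own axioms; the bias axiom after it is likewise an
illustrative guess at how a version-space bias could be made
defeasible, with C, D_1 ... D_n, and ab as invented symbols.

% Ordinary circumscription of P in theory T, with predicates Z allowed
% to vary; prioritized circumscription minimizes several such P's in
% order of priority.
\[
  \mathrm{Circ}[T; P; Z] \;\equiv\; T(P,Z) \;\wedge\;
  \neg\exists p\,z\; \bigl( T(p,z) \wedge p < P \bigr),
\]
where $p < P$ abbreviates
$\forall x\,(p(x) \to P(x)) \;\wedge\; \neg\forall x\,(P(x) \to p(x))$.
% An illustrative defeasible version-space bias: unless abnormal, the
% target concept C coincides with one of the descriptions D_1 ... D_n.
% Circumscribing ab makes the bias a default that is retracted when
% the observations contradict every D_i.
\[
  \neg\mathit{ab} \;\to\; \bigvee_{i=1}^{n} \forall x\,
  \bigl( C(x) \leftrightarrow D_i(x) \bigr)
\]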
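The bias-weakening behaviour can also be sketched procedurally. The
Python program below is an illustrative reconstruction, not the
paper's implementation; all names (conjunctive_space, covers, learn,
and the toy attributes) are invented here. It keeps a version space
of conjunctive hypotheses and, when the observations contradict every
hypothesis in that space, retracts the conjunctive-only default and
falls back to a weaker space of two-way disjunctions.

from itertools import product

# Toy attribute vocabulary (illustrative).
ATTRS = [("size", ("small", "large")),
         ("color", ("red", "blue"))]

def conjunctive_space():
    """All conjunctive hypotheses: each attribute is a fixed value or '?'."""
    names = [a for a, _ in ATTRS]
    choices = [values + ("?",) for _, values in ATTRS]
    return [dict(zip(names, combo)) for combo in product(*choices)]

def covers(h, x):
    """A conjunction covers an instance if every non-wildcard value matches."""
    return all(v == "?" or x[a] == v for a, v in h.items())

def consistent(h, examples):
    return all(covers(h, x) == label for x, label in examples)

def learn(examples, space):
    """Return the version space, weakening the bias if it is contradicted."""
    version_space = [h for h in space if consistent(h, examples)]
    if version_space:
        return version_space, space
    # Bias contradicted by the data: retract the conjunctive-only default
    # and move to a weaker description space (one weakening step).
    weaker = [(h1, h2) for h1 in space for h2 in space]
    def d_covers(d, x):
        return covers(d[0], x) or covers(d[1], x)
    weaker_vs = [d for d in weaker
                 if all(d_covers(d, x) == label for x, label in examples)]
    return weaker_vs, weaker

examples = [({"size": "small", "color": "red"}, True),
            ({"size": "large", "color": "blue"}, True),  # forces a disjunction
            ({"size": "large", "color": "red"}, False)]

vs, space = learn(examples, conjunctive_space())
print(len(vs), "consistent hypotheses after (possible) bias weakening")

In the prioritized-defaults reading, the conjunctive-only space plays
the role of the higher-priority default and the disjunctive space is
the weaker fallback; the maximal-specificity and maximal-generality
preference biases would then correspond to preferring the most
specific or most general hypotheses among those that survive.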