arXiv:0807.4417v2 [cs.AI] 15 Jan 2009
On Introspection, Metacognitive Control and
Augmented Data Mining Live Cycles
Daniel Sonntag
German Research Center for Artificial Intelligence
66123 Saarbrücken, Germany
sonntag@dfki.de
Abstract. We discuss metacognitive modelling as an enhancement to cognitive modelling and computing. Metacognitive control mechanisms should enable AI systems to self-reflect, reason about their actions, and adapt to new situations. In this respect, we propose implementation details of a knowledge taxonomy and an augmented data mining life cycle which supports a live integration of obtained models.
Keywords: Metacognitive Modelling, Data Mining
1 Introduction
Cognitive computing is the development of computer techniques to emulate human perception, intelligence, and problem solving. Cognitive models are equipped with artificial sensors and actuators which are integrated and embedded into physical systems or ambient intelligence environments to act in the physical world. The goal is to have cognitive capabilities and to perform cognitive control (e.g., see [1]). To overcome problems in shared control (of, e.g., navigating robots [2]), direct communication (in natural language dialogue) between a human participant and a technical control architecture can be employed. This could be used for mutual disambiguation of multiple sensory modalities in a learning environment. As one of the major topics of sensory-based control mechanisms, automatic perception learning by introspection and relevance feedback could help in this disambiguation task. In order to pursue the idea of cognitive systems able to self-reflect, reason about their actions, and adapt to new situations, metacognitive strategies can be employed.
In this paper, we present the core idea of a metacognitive control model of machine learning with respect to problem-solving capabilities, exemplified by improving autonomous reaction behaviour.
We start by clarifying the term metacognition. Metacognition is cognition about cognition. It can, in principle, enable artificial intelligence systems to monitor and control themselves, choose goals, assess progress, and adopt new strategies for achieving goals.1 [4] associates metacognitive components with the ability of a subject (or an intelligent agent in general) to orchestrate and monitor knowledge of the problem solving process; [5] argues that metacognitive abilities correlate with standard measures of intelligence; [6] talks about systems that know what they are doing.

1 For example, students preparing for an exam judge the relative difficulty of the learning material and use this to choose study strategies. The resulting reasoning task is a second-order reasoning process about their own learning abilities, called meta-reasoning or, more generally, metacognition.
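The view of metacognition as monitoring and control of one's own cognition can be made concrete as a two-level loop: an object-level strategy solves tasks, while a meta-level monitor watches its success rate and switches strategy when performance degrades. The following is a toy sketch, not the paper's implementation; all class names, the threshold, and the window size are illustrative assumptions.

```python
class Strategy:
    """Object-level problem solver; solver maps a task to a success flag."""
    def __init__(self, name, solver):
        self.name = name
        self.solver = solver

    def solve(self, task):
        return self.solver(task)


class MetacognitiveMonitor:
    """Meta-level controller: tracks the recent success rate of the active
    strategy and adopts the next one when the rate drops below a threshold."""
    def __init__(self, strategies, threshold=0.6, window=20):
        self.strategies = strategies
        self.current = 0
        self.threshold = threshold
        self.window = window
        self.history = []

    def step(self, task):
        success = self.strategies[self.current].solve(task)
        self.history.append(success)
        if len(self.history) >= self.window:
            rate = sum(self.history[-self.window:]) / self.window
            if rate < self.threshold:
                # Metacognitive control action: switch strategy, reset record.
                self.current = (self.current + 1) % len(self.strategies)
                self.history.clear()
        return success


weak = Strategy("weak", lambda t: t % 3 == 0)   # succeeds on ~1/3 of tasks
strong = Strategy("strong", lambda t: True)     # always succeeds
monitor = MetacognitiveMonitor([weak, strong])
for t in range(100):
    monitor.step(t)
print(monitor.strategies[monitor.current].name)  # -> strong
```

The meta-level never inspects how a strategy works, only how well it performs; this separation is what distinguishes the monitor from an ordinary control rule baked into the object level.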
Here, we adopt the growing interest in metacognitive strategies2 for AI systems to build a metacognitive model for adaptable AI systems, which involves computational models of self-representation and self-awareness. Ontologies represent the knowledge groundwork for the self-representation of a system's information state to be included into a metacognitive model.3 For example, McCarthy defines the term introspection as a machine having a belief about its own mental state rather than a belief about propositions concerning the world.
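McCarthy's distinction can be read as a data-structure sketch: the agent keeps beliefs about world propositions separately from beliefs about its own information state, and introspection populates the latter by examining the former. All names and fields below are illustrative assumptions, not taken from the paper.

```python
class IntrospectiveAgent:
    def __init__(self):
        self.world_beliefs = {}  # beliefs about propositions concerning the world
        self.self_beliefs = {}   # beliefs about the agent's own mental state

    def believe(self, proposition, value):
        self.world_beliefs[proposition] = value

    def introspect(self):
        """Form beliefs about the own information state rather than the world."""
        self.self_beliefs["known_propositions"] = sorted(self.world_beliefs)
        self.self_beliefs["belief_count"] = len(self.world_beliefs)
        self.self_beliefs["world_model_empty"] = not self.world_beliefs
        return self.self_beliefs


agent = IntrospectiveAgent()
agent.believe("ball_is_in_play", True)
state = agent.introspect()
print(state["belief_count"], state["world_model_empty"])  # -> 1 False
```

Note that introspection here requires the world beliefs to be explicitly represented, in line with the declarative-model requirement of footnote 3.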
Following this explanation of metacognition, we hypothesise that researchers in adaptable AI systems should investigate metacognition because it can help us:
1. address the difficulty of writing down control management rules. Rules may not be obvious, tangible, or identifiable, or they may present an engineering overhead.
2. provide self-improvement through adaptation and customisation.
3. offer designs for never-ending learning.
4. integrate a variety of previously isolated findings: dialogue architectures,
finite state strategies, information states, (un)supervised learning, stacked
generalisation, reinforcement learning, interactive learning, and embedded
data mining.
Despite its complexity, metacognition suggests an empirically tractable model creation and verification process.
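Item 4 above lists stacked generalisation among the techniques to integrate. A minimal sketch of that idea, with toy rule-based learners standing in for real base models (all rules and data points below are invented for illustration): level-0 models make predictions, and a level-1 meta-model is trained on their joint predictions rather than on the raw inputs.

```python
# Toy stacked generalisation over 2-D points with binary labels.

def base_x(point):   # level-0 model: predicts the class from the x coordinate
    return 1 if point[0] > 0 else 0

def base_y(point):   # level-0 model: predicts the class from the y coordinate
    return 1 if point[1] > 0 else 0

def train_meta(data):
    """Level-1 learner: for each combination of base predictions, remember
    the majority true label observed in the training data."""
    votes = {}
    for point, label in data:
        key = (base_x(point), base_y(point))
        votes.setdefault(key, []).append(label)
    return {k: round(sum(v) / len(v)) for k, v in votes.items()}

def predict(meta, point):
    key = (base_x(point), base_y(point))
    return meta.get(key, base_x(point))  # fall back to one base model

train = [((1, 1), 1), ((2, 3), 1), ((-1, -2), 0), ((-3, 1), 0), ((-2, 2), 0)]
meta = train_meta(train)
print(predict(meta, (-1, 3)))  # -> 0: the meta-model overrules base_y here
```

The meta-model corrects systematic disagreements between the base models; with real learners the level-1 stage would be trained on held-out predictions to avoid overfitting.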
2 Model, Introspective View and Control
We use the term model in the sense given by [7]:
To an observer B, an object A• is a model of an object A to the extent
that B can use A• to answer questions that interest him about A.
A can be the world or a specific sub-domain such as the football domain.
To answer questions about the football domain, an A• has to be constructed.
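This definition can be read operationally: A• is a model of the football domain A exactly insofar as observer B can use it to answer questions about A. A minimal sketch, where the stored facts and question tuples are invented for illustration (A_star stands in for A•):

```python
# A_star: a constructed model of the football domain A, queried by observer B.
A_star = {
    ("position", "Ballack"): "midfielder",
    ("plays_for", "Ballack"): "Germany",
}

def answer(model, question):
    """B uses the model A_star, not the world A, to answer a question about A."""
    return model.get(question, "unknown")

print(answer(A_star, ("position", "Ballack")))  # -> midfielder
print(answer(A_star, ("coach", "Germany")))     # -> unknown: beyond the model's extent
```

Questions the model cannot answer mark the extent to which A• falls short of A, which is precisely the "to the extent that" clause in [7]'s definition.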
2 IBM Autonomic Computing Initiative, http://www.research.ibm.com/autonomic/, and, e.g., DARPA Information Processing Technology Office on Cognitive Systems, http://www.darpa.mil/ipto/thrust_areas/thrust_cs.asp.
3 [8] outlines that for intelligent behaviour, a declarative knowledge model must be created first. Examination of, e.g., one's own beliefs would then be possible when the beliefs are explicitly represented. McCarthy sees introspectio
…(Full text truncated)…