Abstract

For over sixty years, the artificial intelligence and cognitive systems communities have represented problems to be solved as a combination of an initial state and a goal state, along with some background domain knowledge. In this paper, I challenge this representation because it does not adequately capture the nature of a problem. Instead, a problem is a state of the world that limits choice in terms of potential goals or available actions. To capture this view, a representation should include a characterization of the context that exists when a problem arises and an explanation that causally links the part of the context contributing to the problem with a goal whose achievement constitutes a solution. The challenge to the research community is not only to represent such features but to design and implement agents that can infer them autonomously.