Address by the 2017 recipient of the Herbert A. Simon Prize
for Advances in Cognitive Systems
Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. To address this gap, Bottou (2011) suggested that an ability to reuse and compose trainable learning modules could provide the basis of a new form of inference calculus, and recently some ideas of this sort have been explored (differentiable neural computers, neural theorem provers, etc.). I will focus on the Memory-Attention-Composition networks (MACnets) developed in my group. The MACnet design provides a strong prior for explicitly iterative reasoning, enabling it to learn explainable, structured inference and to generalize well from modest amounts of data. The model builds on the great success of existing recurrent cells such as LSTMs: a MACnet is a sequence of recurrent Memory, Attention, and Composition (MAC) cells. However, its design imposes structural constraints on the operation of each cell and on the interactions between them, incorporating explicit control and soft attention mechanisms. We demonstrate the model's strength and robustness on the challenging CLEVR data set for visual reasoning (Johnson et al. 2016), achieving a new state-of-the-art 98.9% accuracy and halving the error rate of the previous best model. More importantly, we show that the new model is more data efficient, achieving good results even from a modest amount of training data.
This talk reports joint work with Drew Hudson.
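To make the iterative control-attend-compose structure described above concrete, here is a minimal NumPy sketch of one MAC-style step. It is a heavy simplification, not the published model: the projection matrices, the 50/50 memory blend, and all dimensions are illustrative assumptions, and the real architecture uses learned per-step projections, gating, and richer interaction terms.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MACCellSketch:
    """Simplified Memory-Attention-Composition step (illustrative only)."""
    def __init__(self, d, rng):
        self.Wc = rng.standard_normal((d, d)) * 0.1  # control projection (hypothetical)
        self.Wm = rng.standard_normal((d, d)) * 0.1  # memory projection (hypothetical)

    def step(self, control, memory, question_words, knowledge):
        # Control unit: soft attention over question words gives a new control state.
        cw = softmax(question_words @ (self.Wc @ control))
        control = cw @ question_words
        # Read unit: attend over the knowledge base, guided by control and memory.
        interactions = knowledge * (self.Wm @ memory)
        kw = softmax(interactions @ control)
        retrieved = kw @ knowledge
        # Write unit: compose the retrieved information into the memory state.
        memory = 0.5 * memory + 0.5 * retrieved
        return control, memory

rng = np.random.default_rng(0)
d = 8
cell = MACCellSketch(d, rng)
control, memory = np.zeros(d), np.zeros(d)
question = rng.standard_normal((5, d))   # 5 question-word vectors
kb = rng.standard_normal((12, d))        # 12 knowledge-base entries
for _ in range(4):                       # a fixed number of reasoning steps
    control, memory = cell.step(control, memory, question, kb)
```

The point of the sketch is the structural constraint the abstract describes: each step can only update memory through soft attention over the question (control) and over the knowledge base (read), which is what makes the reasoning explicitly iterative and inspectable.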
Speech-based and text-based intelligent assistants provide many benefits. Through natural language interaction, they can help people in a number of ways: they can answer questions, solve problems, complete tasks, address concerns, offer reassurance, and so on. In this talk, I examine customer texts from the Relational Strategies in Customer Service (RSiCS) Data Set.* I focus on challenging cases in which the customer's exact concern is unclear. I argue that creativity is required to address these cases. In particular, I argue that cognitive capabilities for lateral thinking and tact are required to understand and clarify the customer's concern and compose a helpful reply.
Beaver, I., Freeman, C., & Mueen, A. (2017). An annotated corpus of relational strategies in customer service. https://arxiv.org/abs/1708.05449
The field of AI was originally motivated by the objective of automating tasks performed by humans. While advances in machine learning have enabled impressive capabilities such as self-driving vehicles, more cognitive tasks such as planning and design have resisted full automation because of the vast amounts of knowledge and commonsense reasoning they require. This talk describes a line of research aimed at developing AI systems designed to augment rather than replace human capabilities, leveraging automated planning, machine learning, and natural language understanding technologies. The talk will also describe successful applications of the research in deployed systems. Looking to the future, it will close with a discussion of several open challenges for future work on AI that augments human skills.
In recent years, deep learning systems have accelerated progress on a number of pattern recognition tasks. In this talk, we look at tasks that go beyond pattern recognition: tasks that require hierarchical modeling, reasoning, and planning. We examine which techniques of learning, attention, and representation are effective for these tasks, and how to bridge learned and hand-engineered solutions, as well as symbolic and subsymbolic systems.
Problem solving, or goal-directed sequential activity, is now typically understood within the context of the wider cognitive architecture, including the use of domain-specific knowledge and heuristics in the service of goals. Key features of human cognition include both the inevitability of fixation on initial ideas and the ability to introduce variation while generating novel ideas. In this talk, I present findings from engineers and designers that identify two sources of creative invention in design: exploring the presented problem and intentionally generating multiple, diverse alternatives. The reported studies identify systematic patterns that capture abstract similarities among designs that are useful in idea generation and may serve as candidate representations for domain knowledge used in inventive problem solving. Encapsulating this metaknowledge about design poses a representational challenge for cognitive systems.