WORKSHOP #1: Can Deep Neural Networks solve the problems of Artificial General Intelligence? (Chair: Leslie Smith)

Administrative details:
July 17, 2016, 10:30-12:30

Location: The New School, 63 Fifth Avenue, Lower Manhattan, New York.


Leslie Smith. Introduction: setting the scene (8 minutes) (video)
Title: Deep neural networks: the only show in town?

Short talks: 18 minutes each.

Pei Wang. Title: AGI via DL? (slides | video)
Abstract: Though deep learning has made remarkable progress in many domains, it has fundamental limitations when evaluated from the viewpoint of AGI:
(1) Though deep learning techniques take various forms and can be applied in many domains, they are not general-purpose, but limited to particular types of problems.
(2) Deep learning treats "learning" as a stand-alone function approximation, while in AGI systems learning should interweave with many other cognitive processes.
(3) AGI systems typically need to work in realistic environments where the system’s knowledge and resources are usually insufficient, whereas deep learning has largely ignored these constraints.
Therefore, deep learning cannot lead us to AGI, though it provides AGI with ideas, lessons, and tools.

Cosmo Harrigan. Title: Deep Learning for AGI: Survey of Recent Developments (slides | video)
Abstract: The relevance of deep learning to Artificial General Intelligence research is described, in terms of the expanding scope of deep learning model designs and the increasing combination of deep learning with other methods to form hybrid architectures. Deep learning is a rapidly expanding research area, and various groups have recently proposed novel extensions to earlier deep learning models, including: generative models; the ability to interface with external memory and other external resources; Neural Turing Machines, which learn programs; deep reinforcement learning; neuroevolution; intrinsic motivation and unsupervised learning; and more complex network models.
The presentation is organized with a view towards the integration of additional abilities into deep learning architectures, including: planning; reasoning and logic; data efficient learning and one-shot learning; program induction; additional learning algorithms other than backpropagation; more sophisticated techniques for unsupervised learning and reinforcement learning; and structured prediction. We can view deep learning research as making significant contributions relevant to AGI, but also note that future progress in the field will likely depend on integrating threads of research from cognitive science, machine learning, universal artificial intelligence and symbolic artificial intelligence, resulting in systems that significantly extend the boundaries of what might be considered “deep learning” today.

Brandon Rohrer. Title: Deep neural networks can’t make AGI (video)
Abstract: Deep neural networks are excellent at finding patterns, but achieving human-level performance on all measures of intelligence requires more than finding patterns.

There are at least four gaps between DNNs and AGI.
1. DNNs can’t generalize, except on structured two-dimensional data.
2. DNNs require many more exposures than humans to learn.
3. DNNs can’t learn flexible state-action mappings.
4. DNNs can’t learn action sequences.

To bridge these gaps and achieve AGI, DNNs will need to be paired with other tools, such as model-based reinforcement learning and planners.

Panel Session (Pei, Cosmo, Brandon, Ben, Leslie) (1 hour) (video)

Come and make this a lively session! Bring your views, and expect to have them challenged.


WORKSHOP #2: Environments and Evaluation for AGI 

Organizer: Ben Goertzel

Workshop Keynote: Julian Togelius, “Videogames as a Platform for AGI R&D” (video)


  • Kris Thorisson, “Task Theory and a Task Performance Evaluation Toolkit” (video)
  • Ben Goertzel, “Adapting Child IQ Tests for Today’s Robots” (video)

A substantial portion of the workshop will consist of moderated discussion among the audience members as well as the speakers.  This will be a highly participatory workshop.


As the pursuit of AGI advances, an increasing number of R&D groups, architectures, and theoretical paradigms are entering the fray, which intensifies the importance of having a solid set of standard, broadly accepted environments for experimenting with multiple different AGI systems, and a flexible set of metrics for gauging the performance of different AGI systems.

This workshop will explore existing and envisioned environments for experimentation with in-progress AGI systems, and metrics for measuring the performance of AGI systems in these environments. Topics might include:

* What kinds of environments might be most useful for qualitatively or quantitatively experimenting with various AGI systems and/or for enabling different AGI systems, created by different groups, to interact together?
* What kinds of tests or competitions might be useful for evaluating early-stage AGI systems (as distinct from narrow-AI systems)?
* What are the pitfalls involved with comparison and quantitative evaluation of early-stage AGI systems, and what can be done to avoid these pitfalls?
* What might be a good software architecture for incorporating environments and metrics created by different groups into a common framework?