Foundations and Theory of AGI
- Mathematically rigorous AGI models.
- Mathematical formalization of concepts important in AGI, such as reasoning, generalization, learning, and intelligence.
- Proofs of properties of AGI models and concepts.
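As one concrete illustration of the kind of formalization intended, Legg and Hutter's universal intelligence measure defines the intelligence of an agent pi as its reward-weighted performance across all computable environments:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where E is the set of computable environments, K(mu) is the Kolmogorov complexity of environment mu, and V_mu^pi is the expected cumulative reward of agent pi in mu. This is only one example of such a formalization; proofs about models of this kind (e.g. AIXI) are exactly the sort of results this topic area seeks.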
The Role of Embodiment in AGI
What is the role of embodiment in general intelligence? Is it, for instance,
- a necessary aspect,
- an optional but useful aspect,
- a feature of humanity that need not be included in a non-human-like AGI, or
- a feature of humanity that need not be included even in an AGI intended to emulate human intelligence closely?
Different AGI projects may address embodiment in different ways, e.g.
- avoiding it altogether
- postponing it initially and incorporating it after a certain level of intelligence has been reached without it
- using physical robotics
- using embodiment in a simulation world such as AGISim
Different theoretical perspectives lead to different choices in this regard. The relationship between theory and practice regarding the AGI/embodiment issue is intimate and subtle.
Key Enabling Applications for AGI
Ever since its inception, the field of artificial intelligence has been driven by a small number of ambitious but practical long-term goals, from a world champion chess machine to soccer-playing robot dogs, that have set the long-term research agenda and defined the aspirations of the field. Those goals often have a pervasive impact on the field, biasing its evolution by favoring some techniques over others. In recent history, those challenges, e.g. the DARPA Grand Challenge, have tended to favor special-purpose over general solutions to intelligence. If the field of AGI is to enjoy a renaissance, we need to identify and promote long-term challenges that will fuel its development.
Key questions include:
- What are the characteristics of applications that favor general over special-purpose solutions?
- Should we, in the short to medium term, strategically avoid known AI-hard problems that are prone to special-purpose solutions, or pursue them as a key differentiator?
- Should we embrace breadth over depth? If so, how do we practically demonstrate the advantages of breadth? If not, how do we avoid falling into the pitfalls of special-purpose AI?
Tool Use Among AGIs
Human beings depend on tools for jobs beyond the capabilities of their innate bodies and minds. In fact, it has been argued that humans are intrinsically tool-using intelligences; that our minds are fundamentally defined by (among other features) their patterns of tool use.
There is every reason to expect AGIs to have a similar, but even more intense, relationship with tools. As a single example, consider the uses to an AGI of linguistic tools such as online dictionaries, and mathematical tools such as computer algebra systems. In general, from an AGI perspective, a "tool" can be any hardware or software that is outside the AGI system.
Issues to be studied in this regard include (but are not limited to):
- how to represent a tool within the system
- how to connect a tool with the system
- how to determine the preconditions and consequences of using a novel tool
- how to compare candidate tools for a given job
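The representation and comparison issues above can be sketched concretely. The following is a minimal, hypothetical illustration (the class, field names, and cost scheme are assumptions, not a proposal from the source): a tool is described by the preconditions it requires, the effects it produces, and an estimated cost, so that candidate tools for a given job can be filtered and ranked.

```python
from dataclasses import dataclass

@dataclass
class Tool:
    """A hypothetical internal representation of an external tool:
    preconditions that must hold before use, and effects of using it."""
    name: str
    preconditions: frozenset   # facts that must hold before invocation
    effects: frozenset         # facts made true by invocation
    cost: float = 1.0          # e.g. estimated time or resource usage

def applicable(tool, state):
    """A tool can be used only when its preconditions hold in the state."""
    return tool.preconditions <= state

def best_tool(tools, state, goal):
    """Among applicable tools whose effects cover the goal, prefer lowest cost."""
    candidates = [t for t in tools
                  if applicable(t, state) and goal <= t.effects]
    return min(candidates, key=lambda t: t.cost) if candidates else None

# Example: choosing between two lookup tools for a "find a definition" job
dictionary = Tool("online_dictionary",
                  frozenset({"word"}), frozenset({"definition"}), cost=1.0)
cas = Tool("computer_algebra",
           frozenset({"equation"}), frozenset({"solution"}), cost=5.0)
state = frozenset({"word"})
chosen = best_tool([dictionary, cas], state, frozenset({"definition"}))
print(chosen.name)  # → online_dictionary
```

This STRIPS-like precondition/effect encoding is only one design choice; learning such descriptions for a novel tool, rather than hand-coding them, is precisely the open problem listed above.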
Management of Complex Goal Structures
An AGI typically needs to deal with concurrent goals, which may come from different sources and whose mutual relations are unknown in advance.
Issues to be studied here include (but are not limited to):
- how to decide what to do when there are multiple goals
- what to do if there are incompatible goals
- how to handle resource competition among goals
- whether a derived goal can become an adversary or competitor of its parent goal
- how to deal with goals that can never be fully achieved
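One simple way to make the resource-competition issue concrete is a priority-based allocation policy. The sketch below is an illustrative assumption, not a recommended mechanism: each goal carries a priority and a demand on a shared resource budget, and goals are served greedily in priority order, so lower-priority goals may starve, which is itself one of the problems to be studied.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """A hypothetical goal with a priority and a resource demand."""
    name: str
    priority: float   # relative importance, possibly revised over time
    demand: float     # share of a limited resource (e.g. processing time)

def allocate(goals, budget=1.0):
    """One simple policy: serve goals in priority order until the
    resource budget is exhausted; remaining goals must wait."""
    served, remaining = [], budget
    for g in sorted(goals, key=lambda g: -g.priority):
        if g.demand <= remaining:
            served.append(g.name)
            remaining -= g.demand
    return served

goals = [Goal("answer_user", 0.9, 0.5),
         Goal("self_improve", 0.4, 0.7),
         Goal("housekeeping", 0.2, 0.3)]
print(allocate(goals))  # → ['answer_user', 'housekeeping']
```

Note that the greedy policy skips "self_improve" (too expensive for the remaining budget) yet serves the lower-priority "housekeeping" goal; whether such behavior is acceptable depends on how goal incompatibility and derived-goal competition are modeled.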
Lifelong and Multi-Strategy Learning
Most current work in machine learning involves "learning algorithms" that are explicitly invoked on given input to produce desired output, according to a well-defined overall process.
In the human mind, however, "learning" is often a lifelong process interwoven with other activities, and does not follow a single strategy. AGI may require the same sort of open-ended, broad-scope approach to learning -- a fact that gives rise to the following issues:
- how to divide a learning process into atomic steps, which can be combined flexibly
- how to trigger and terminate learning from within other processes
- how to combine learning with other processes
- how to handle anytime and real-time learning
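The notion of dividing learning into atomic steps that support anytime operation can be sketched as follows. This is a deliberately minimal illustration under assumed design choices: each call to step() is one atomic learning step (an incremental mean update), and a current best estimate is available whenever learning is interrupted by other activities.

```python
import random

class AnytimeLearner:
    """Sketch of anytime learning: learning is broken into atomic steps,
    and the current best estimate is available whenever learning is
    interrupted by other processes."""
    def __init__(self):
        self.n = 0
        self.estimate = 0.0

    def step(self, observation):
        # one atomic learning step: incremental running-mean update
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n

random.seed(0)
learner = AnytimeLearner()
for _ in range(1000):   # learning interleaved with other activity
    learner.step(random.gauss(5.0, 1.0))
print(learner.estimate)  # close to the true mean of 5.0
```

Because every step leaves the estimate in a usable state, the same loop works under real-time constraints: learning can be triggered, paused, and resumed at step granularity without a separate "training phase".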
When an AGI encounters a novel problem instance without an algorithm for the corresponding problem type, it should, like a human in similar situations, try to solve the problem in a case-by-case manner, by exploring special features of the current instance. In the end, the system may successfully solve this case, though it still lacks an algorithm with which to solve all problems of the same type in the same manner. The issues to be studied include (but are not limited to):
- how to make a system solve such a problem in a case-by-case manner
- how to relate this working mode to algorithmic problem-solving
AGI-based Natural Language Processing
Traditionally, natural language processing (NLP) has treated language capability in isolation. In an AGI system, however, there is the opportunity to integrate NLP with other capabilities, raising issues that include (but are not limited to):
- how to apply general-purpose learning mechanisms to NLP
- how to use inference for disambiguation
- how to use commonsense and domain knowledge in NLP
- how to connect NLP to sensorimotor mechanisms
- how to use speech acts to achieve goals
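The disambiguation and commonsense-knowledge issues above can be illustrated with a minimal Lesk-style sketch (the sense inventory and knowledge sets below are illustrative assumptions): stored knowledge about each sense of an ambiguous word is matched against the surrounding context, and the sense with the greatest overlap wins.

```python
# Hypothetical commonsense knowledge associated with each sense of "bank".
SENSES = {
    "bank": {
        "financial_institution": {"money", "deposit", "loan", "account"},
        "river_side": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, context_words):
    """Pick the sense whose associated knowledge overlaps most with context."""
    scores = {sense: len(knowledge & set(context_words))
              for sense, knowledge in SENSES[word].items()}
    return max(scores, key=scores.get)

context = ["she", "opened", "an", "account", "to", "deposit", "money"]
print(disambiguate("bank", context))  # → financial_institution
```

In an AGI setting, the interesting questions are exactly those the overlap heuristic dodges: the knowledge sets would be learned by general-purpose mechanisms, and the scoring would be genuine inference rather than set intersection.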
Connecting Sensorimotor and Concept-level Cognition
AGI faces the requirement of connecting sensorimotor and concept-level cognition -- a topic which has been addressed in several subfields of AI, though with arguable adequacy. Is there still important new headway to be made in this area? When an AGI system is taken as a whole, what roles do sensorimotor and concept-level cognition play? For what purposes should they be connected? In what sort of way?
Coherence of Integrative/Hybrid AGI Systems
One natural approach to AGI is to combine existing results in various subfields of AI to get an integrative/hybrid system.
One problem faced by this approach is the coherence of the components.
Issues arising here include (but are not limited to):
- how to define "coherence" in a heterogeneous system
- how to handle the interface between different components
- how to decide the level of integration between different techniques
- how to handle internal inconsistency
- how to handle internal competition
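One classic integration pattern for heterogeneous components is a blackboard architecture; the sketch below uses it only as an illustration (the component names and confidence scheme are assumptions). Components post conclusions with confidences to a shared store, giving one crude answer to the inconsistency and competition questions: conflicting conclusions compete, and the more confident one wins.

```python
class Blackboard:
    """Minimal blackboard-style sketch of component integration:
    heterogeneous components post conclusions to a shared store."""
    def __init__(self):
        self.beliefs = {}   # proposition -> (value, confidence, source)

    def post(self, proposition, value, confidence, source):
        """Conflicting posts compete; the more confident conclusion wins."""
        current = self.beliefs.get(proposition)
        if current is None or confidence > current[1]:
            self.beliefs[proposition] = (value, confidence, source)

bb = Blackboard()
bb.post("object_is_cup", True, 0.6, source="vision_module")
bb.post("object_is_cup", False, 0.9, source="reasoning_module")  # conflict
print(bb.beliefs["object_is_cup"])  # → (False, 0.9, 'reasoning_module')
```

A shared store of this kind also makes "coherence" operationally checkable, e.g. as the absence of contradictory high-confidence beliefs; richer definitions, and richer interfaces than a flat proposition table, are among the open issues listed above.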
Evaluation and Comparison of AGI Projects
Since existing AGI projects are based on very different assumptions, how can we meaningfully compare them?
How can we objectively evaluate the progress of AGI research?