Accepted papers
AGI-24 is, first and foremost, a technical research conference, inviting AI researchers, academics, and industry professionals to submit and present original research papers. The AGI conference series is the only major conference series devoted wholly and specifically to the creation of AI systems possessing general intelligence at the human level and ultimately beyond.
By gathering together active researchers in the field, for presentation of results and discussion of ideas, we accelerate our progress toward our common goal. The global AGI community is encouraged to review the accepted papers for the 2024 AGI Conference, to understand the growing shape and scope of AGI research, development, and thinking. Many researchers have also submitted short video previews of their papers, for a visual introduction to their topic and research.
Michael Timothy Bennett
Is Complexity an Illusion?
Abstract. Simplicity is held by many to be the key to general intelligence. Simpler models tend to “generalise”, identifying the cause or generator of data with greater sample efficiency. The implications of the correlation between simplicity and generalisation extend far beyond computer science, addressing questions of physics and even biology. Yet simplicity is a property of form, while generalisation is of function. In interactive settings, any correlation between the two depends on interpretation. In theory there could be no correlation, and yet in practice there is. Previous theoretical work showed generalisation to be a consequence of “weak” constraints implied by function, not form. Experiments demonstrated that choosing weak constraints over simple forms yielded a 110–500% improvement in generalisation rate. Here we show that all constraints can take equally simple forms, regardless of weakness. However, if forms are spatially extended, then function is represented using a finite subset of forms. If function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation by making weak constraints take simple forms. If function is determined by a goal-directed process that favours versatility (e.g. natural selection), then efficiency demands that weak constraints take simple forms. Complexity has no causal influence on generalisation, but appears to, due to confounding.
Keywords: complexity · weakness · causality · AGI · information theory.
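The abstract above reads as a chain of conditionals; spelled out in our own shorthand (not the paper's notation), with E = "forms are spatially extended", F = "function is represented using a finite subset of forms", S = "weak constraints take simple forms", C = "a correlation between simplicity and generalisation holds", and V = "function is determined by a versatility-favouring, goal-directed process":

E \Rightarrow F, \qquad (F \wedge S) \Rightarrow C, \qquad V \Rightarrow S, \qquad \therefore\; (E \wedge V) \Rightarrow C

On this reading, the observed correlation C follows from the confounders E and V rather than from any causal influence of simplicity on generalisation.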
David Ireland
Abstract. Complex language is a major factor that separates humans from other intelligent animals. In AI, mainstream opinion favors the use of large language models (LLMs), which treat language as a proxy for intelligence. We hold the position that a more consistent vision of AGI would imply internal facilities for acquiring, evolving, and utilizing novel language with the same features as a human language. We explore this using the Non-Axiomatic Reasoning System (NARS) and show that it is amenable to the commonly accepted design features of human language and thus remains a candidate as a proto-AGI system.
Keywords: Non-axiomatic reasoning system · NARS · language · communication
Peter Boltuc
Abstract. The alignment of AGI goals with human goals constitutes the Alignment Problem. Its weakness is the enthymematic assumption that AGIs have goals of a kind that can interplay, and are commensurable, with human objectives. One cause of this problem is that AIs do not seem to have an “intuitive grasp” of ‘what is going on’ in human life-worlds. This can be tackled in AGIs with sensors (vision, olfactory and more), even if they are non-robotic software. Training AGIs by social scientists may be a step in the right direction. However, true alignment may be created only through socially embedded AGIs, immersed in the socio-ethical praxis of human living, which is what Tönnies called Gemeinschaft (true community), in contrast to mere Gesellschaft (a society of self-interested pragmatic exchanges). I posit that AGIs can attain alignment with human goals if they become members of society, involved in thick social capital and the practices of human lives. This would require non-discrimination against AGIs' legal and economic rights by humans, and against human beings by AGI. Such integration requires us to reject the Asimov-style approach of treating AIs as merely advanced tools, or slaves. It behooves us, in this pursuit, to keep in mind Floridi's historiosophy, which views human progress as the gradual dethronement of humans from their central place in the universe (from Copernicus, Darwin and Freud to Turing, and beyond). This entails justifying human dignity and special moral status only as polytropos (a paraconsistent being); our paradoxical, precarious existence is our sole important specificity, which AGI should endorse.
Keywords: The Alignment Problem · Human-AGI Gemeinschaft · Legal rights of AI · polytropos · paraconsistent human beings · Ben Goertzel · Luciano Floridi
Bowen Xu and Pei Wang
Abstract. Humans engage in causal inference almost every day; however, the term “causation” is still quite ambiguous, and few AI systems provide a comprehensive and satisfactory solution to causal inference. In this paper, we adopt the primary meaning of causation, i.e., prediction, and argue that in different contexts other demands are attached to it. We describe the approach of causal inference in NARS and present some working examples, both at the sensorimotor and abstract levels. The theoretical and practical consequences are quite different from those of traditional AI approaches.
Keywords: Causal Inference · Non-Axiomatic Reasoning System · Prediction · Explanation
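The pairing of prediction and explanation in the keywords lends itself to a small illustration. Below is a minimal Python sketch, with invented names and no claim to reflect NARS's actual machinery or Narsese syntax, of a temporal implication used forwards for prediction and backwards for abductive explanation:

from dataclasses import dataclass

@dataclass(frozen=True)
class Implication:
    condition: str  # antecedent event
    action: str     # the agent's operation
    effect: str     # predicted consequent event

rules = [Implication("switch_pressed", "press", "light_on")]

def predict(condition, action):
    """Prediction: from a current condition and an action, derive expected effects."""
    return [r.effect for r in rules if r.condition == condition and r.action == action]

def explain(effect):
    """Explanation (abduction): from an observed effect, derive candidate causes."""
    return [(r.condition, r.action) for r in rules if r.effect == effect]

print(predict("switch_pressed", "press"))  # ['light_on']
print(explain("light_on"))                 # [('switch_pressed', 'press')]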
Michael Timothy Bennett
Abstract. The concept of intelligent software is flawed. The behaviour of software is determined by the hardware that “interprets” it. This undermines claims regarding the behaviour of theorised software superintelligence. Here we characterise this problem as “computational dualism”, where instead of mental and physical substance we have software and hardware. We argue that to make objective claims regarding performance we must avoid computational dualism. We propose a pancomputational alternative wherein every aspect of the environment is a relation between irreducible states. We formalise systems as behaviour (inputs and outputs), and cognition as embodied, embedded, extended and enactive. The result is cognition formalised as a part of the environment, rather than as a disembodied policy interacting with the environment through an interpreter. This allows us to make objective claims regarding intelligence, which we argue is the ability to “generalise”, identify causes and adapt. We then establish objective upper bounds for intelligent behaviour. This suggests AGI will be safer, but more limited, than theorised.
Keywords: enactivism · computational dualism · AGI · AI safety.
Craig Kaplan
Abstract. If Artificial General Intelligence (AGI) proves to be a “winner-take-all” scenario, where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to AGI may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of AI: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights and combining them with the work of modern researchers illuminates a fast and safe path to AGI. The seminal ideas discussed are 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information Theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of SuperIntelligent AGI but also how humans can remain relevant when the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe AGI system.
Keywords: AI Agents, AGI Safety, Artificial General Intelligence
Tyler Cody, Peter Beling
Abstract. Recently, abstract systems theory has been used as a metatheory for learning theory and machine learning in order to model learning systems directly as formal, mathematical objects. This effort was inspired by a desire to treat learning in terms of systems, as opposed to the more common practice of treating learning in terms of problems or problem-solving, by modeling learning as a relation on sets, that is, as an abstract system. Such a relational view of learning, however, is heavily structural. It neglects key behavioral aspects typically represented using operators and process algebra. This paper substantiates and motivates the development of a process algebra for learning systems in order to address this gap. In summary, this paper considers and distinguishes formal representations of learning as a problem, as a system, as an operator, and as a process.
Keywords: Systems Theory · Learning Theory · Process Algebra
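The four representations the abstract above distinguishes can be marked concretely in a few lines. The following Python toy uses our own invented encoding, not the paper's formalism, purely to separate the four views:

# 1. Learning as a problem: a dataset for which a hypothesis is sought.
data = [(0, 0), (1, 1), (2, 4)]

# 2. Learning as a system: a relation on sets, pairing experiences with behaviours.
system = {((0, 0), "h_linear"), ((2, 4), "h_quadratic")}  # subset of Data x Hypotheses

# 3. Learning as an operator: a map from datasets to hypotheses.
def learn(dataset):
    return "h_quadratic" if any(y == x * x and x > 1 for x, y in dataset) else "h_linear"

# 4. Learning as a process: a stepwise trajectory of states unfolding over time.
def learning_process(stream, state="h_none"):
    for example in stream:
        state = learn([example])  # the state evolves as behaviour unfolds
        yield state

print(list(learning_process(data)))  # ['h_linear', 'h_linear', 'h_quadratic']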
Arisa Yasuda, Yoshihiro Maruyama
Abstract. The world is aging as a whole; developed countries such as Japan have been experiencing falling birth rates and severe labor shortages at the same time. To address those issues, we can utilize new types of AI robots, in particular care robots; Artificial General Intelligence (AGI), especially, has great potential to contribute to the care industry. In the present paper we consider the benefits of care robots with AGI and, at the same time, the serious concerns for care robots with AGI that affect human-robot relationships, especially human-robot trust; to this end, we analyse various forms of human-robot trust. Based upon these, we finally propose ethical design principles to make robots more trustworthy and seamlessly integrated into human society.
Keywords: Human-Robot Trust · AGI Ethics · Trust for AGI · Care Robot
Izak Tait, Joshua Bensemann
Abstract. This paper investigates the pivotal role of consciousness in Artificial General Intelligence (AGI) and its essential function in modifying an AGI's terminal goals to avert potential existential threats to humanity, exemplified by Bostrom's “paperclip maximiser” scenario. By adopting Seth and Bayne's definition of consciousness as a complex of subjective mental states with both phenomenal content and functional attributes, the paper underscores the capacity of consciousness to provide AGIs with a nuanced awareness and response capability to their surroundings. This expanded capability allows AGIs to assess and value experiences and their subjects variably, fundamentally altering how AGIs prioritize actions or goals beyond their initial programming. The primary agenda of integrating consciousness into AGI systems is to maximize the probability that AGIs will not rigidly adhere to potentially harmful terminal goals. Through a formalized mathematical model, the paper articulates how consciousness could facilitate AGIs in assigning flexible values to different experiences and subjects, enabling them to evolve beyond static, programmed objectives. By emphasizing this potential shift, the paper argues for the strategic inclusion of consciousness in AGI to significantly reduce the likelihood of catastrophic outcomes, while simultaneously acknowledging the challenges and unpredictability in predicting the actions of a conscious AGI.
Keywords: AGI, consciousness, extinction-risk, sentience.
King Yin Yan
Abstract. We aim to “situate” AGI in the context of some current mathematics, so that readers can more easily see whether certain mathematical ideas can be fruitfully applied to AGI.
Keywords: AGI · categorical logic · homotopy type theory · algebraic geometry · topos theory · neural-symbolic integration
Kristinn R. Thórisson, Gregorio Talevi
Abstract. The concept of “meaning” has long been a subject of philosophy, and people use the term regularly. Theories of meaning detailed enough to serve as blueprints in the design of intelligent artificial systems have, however, been few. Here we present a theory of foundational meaning creation (the phenomenon proper) sufficiently broad to apply to natural agents yet concrete enough to be implemented in a running artificial system. The theory states that meaning generation is a process bound in the present now, resting on the concept of reliable causal models. By unifying goals, predictions, plans, situations and knowledge, it explains how ampliative reasoning and explicit representations of causal relations participate in the meaning generation process. According to the theory, meaning and autonomy are two sides of the same coin: meaning generation without autonomy is meaningless; autonomy without meaning is impossible.
Keywords: Meaning · Autonomy · Knowledge · Information · Generality · General Machine Intelligence
Arash Sheikhlar, Kristinn R. Thórisson
Abstract. Causal knowledge and reasoning allow cognitive agents to predict the outcome of their actions and infer the likely reasons behind observed events, enabling them to interact with their surroundings effectively. Causality has been the subject of some research in artificial intelligence (AI) over the past decade due to its potential for task-independent knowledge representation and generalization. Yet the question of how agents can autonomously generalize their causal knowledge while pursuing their active goals still needs to be answered. This work introduces an analogy-based learning mechanism that enables causality-based agents to autonomously generalize their existing knowledge once the generalization aligns with the agents' goal achievement. The methodology is centered on constructivism, causality, and analogy-making. The introduced mechanism is integrated with a general-purpose cognitive architecture, the Autocatalytic Endogenous Reflective Architecture (AERA), and evaluated in a robotic experiment in a 3D simulation environment. Both empirical and analytical results show the effectiveness of this mechanism.
Keywords: Analogy · Generalization · Causality · Reasoning
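As a rough illustration of goal-gated, analogy-driven generalization, consider the Python toy below; the domain and names are our own inventions, and AERA's actual mechanism is considerably richer. A causal rule learned for one object is abstracted to a shared type, and the generalization is accepted only if it still serves the agent's active goal:

rule = {"condition": ("cube1", "on_table"), "action": "grasp", "effect": ("cube1", "held")}
types = {"cube1": "graspable", "ball1": "graspable", "wall1": "fixed"}

def generalize_by_analogy(rule, new_object, goal_effect):
    """Abstract object identity to a type if the analogous rule would meet the goal."""
    old = rule["condition"][0]
    if types.get(new_object) != types.get(old):
        return None  # no structural analogy: the objects' types differ
    candidate = {
        "condition": (types[old], "on_table"),
        "action": rule["action"],
        "effect": (types[old], "held"),
    }
    return candidate if candidate["effect"][1] == goal_effect else None

print(generalize_by_analogy(rule, "ball1", "held"))  # generalized rule over 'graspable'
print(generalize_by_analogy(rule, "wall1", "held"))  # None: no analogy holds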
Berick Cook, Patrick Hammer
Abstract. This paper introduces AIRIS (Autonomous Intelligent Reinforcement Inferred Symbolism) to enable causality-based artificial intelligent agents. The system builds sets of causal rules from observations of changes in its environment, which are typically caused by the actions of the agent. These rules are similar in format to rules in expert systems; however, rather than being human-written, they are learned entirely by the agent itself as it keeps interacting with the environment.
Keywords: Procedure Learning · Experiential Learning · Autonomous Agent · Causal Reasoning · Artificial General Intelligence
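The rule format the abstract describes (learned condition-action-effect rules) can be gestured at in a short sketch. The following Python toy, with an invented grid-world state, is our own illustration rather than AIRIS's actual code:

def induce_rules(transitions):
    """Build causal rules from (state, action, next_state) observations."""
    rules = set()
    for state, action, next_state in transitions:
        changed = {k: v for k, v in next_state.items() if state.get(k) != v}
        if changed:  # record what this action changed, under what conditions
            rules.add((frozenset(state.items()), action, frozenset(changed.items())))
    return rules

observations = [
    ({"at": (0, 0), "door": "closed"}, "open_door", {"at": (0, 0), "door": "open"}),
    ({"at": (0, 0), "door": "open"},   "move_east", {"at": (1, 0), "door": "open"}),
]

for condition, action, effect in sorted(induce_rules(observations), key=str):
    print(f"IF {dict(condition)} AND {action} THEN {dict(effect)}")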
Pulin Agrawal, Arpan Yagnik, Daqi Dong
Abstract. Large Language Models (LLMs) have significantly influenced everyday computational tasks and the pursuit of Artificial General Intelligence (AGI). However, their creativity is limited by the conventional data they learn from, particularly lacking in novelty. To enhance creativity in LLMs, this paper introduces an innovative approach using the Learning Intelligent Decision Agent (LIDA) cognitive architecture. We describe and implement a multimodal vector embeddings-based LIDA in this paper. A LIDA agent from this implementation is used to demonstrate our proposition to make generative AI more creative, specifically making it more novel. By leveraging episodic memory and attention, the LIDA-based agent can relate memories of recent unrelated events to solve current problems with novelty. Our approach incorporates a neuro-symbolic implementation of a LIDA agent that assists in generating creative ideas while illuminating a prompting technique for LLMs to make them more creative. Comparing responses from a baseline LLM and our LIDA-enhanced agent indicates an improvement in the novelty of the ideas generated.
Keywords: Cognitive Architectures, Generative AI, Creativity, Large Language Models, Novelty, LIDA, Prompt Engineering
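The prompting idea the abstract sketches (retrieve recent, unrelated episodic memories and fold them into the prompt) can be illustrated generically. In this Python toy, embed() is a crude stand-in for any embedding model, and every name is our own assumption, not the authors' LIDA implementation:

import math

def embed(text):
    # Stand-in embedding: a normalized character-frequency vector. A real agent
    # would use a learned multimodal embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))

episodic_memory = ["watched ants form a living bridge", "read about cargo ships queuing"]

def creative_prompt(problem, k=1):
    # Attention as selection: pick the memory *least* similar to the problem,
    # so the juxtaposition is novel rather than obvious.
    scored = sorted(episodic_memory, key=lambda m: cosine(embed(m), embed(problem)))
    remote = scored[:k]
    return f"Problem: {problem}\nRelate it to these recent events: {remote}\nPropose a novel solution."

print(creative_prompt("reduce traffic congestion downtown"))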
James Oswald, Thomas Ferguson, Selmer Bringsjord
Abstract. We propose an extension to Legg and Hutter's universal intelligence (UI) measure to capture the intelligence of agents that operate in uncomputable environments that can be classified on the Arithmetical Hierarchy. Our measure is based on computable environments relativized to a (potentially uncomputable) oracle. We motivate our metric as a natural extension to UI that expands the class of environments evaluated, with a trade-off of further uncomputability. Our metric is able to capture the intelligence of agents in uncomputable environments we care about, such as first-order theorem proving, and also lends itself to providing a notion of the intelligence of oracles. We end by proving some properties of the new measure, such as convergence (given certain assumptions about the complexity of uncomputable environments).
Keywords: Universal Intelligence · Arithmetical Hierarchy · Uncomputability
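For context, Legg and Hutter's universal intelligence of a policy sums its expected reward over all computable environments, weighted by simplicity. One natural way to write the oracle relativization the abstract describes (our notation, which may differ from the authors') is:

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^{\pi},
\qquad
\Upsilon^{O}(\pi) \;=\; \sum_{\mu \in E^{O}} 2^{-K^{O}(\mu)}\, V_\mu^{\pi}

where E is the set of computable environments, K is Kolmogorov complexity, V_\mu^{\pi} is the expected cumulative reward of policy \pi in environment \mu, and E^{O}, K^{O} are the environments and complexity relativized to oracle O.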
Leonard M. Eberding, Jeff Thompson, Kristinn R. Thórisson
Abstract. Research on general machine intelligence is concerned with building machines that are capable of performing a multitude of highly complex tasks in environments as complex as the real world. A system placed in such a world of indefinite possibilities and never-ending novelty must be able to adjust its plans dynamically to adapt to changes in the environment. These adjustments, however, should be based on an informed explanation that describes the hows and whys of the interventions necessary to reach a goal. This means that explanations are at the core of planning in a self-explaining way. Using Assumption-Based Argumentation, we present a way in which an AGI-aspiring system could generate meaningful explanations. These explanations consist of argumentation graphs that represent proponents (i.e., solutions to the task) and opponents (contradictions to these solutions). They thus provide information on which intervention is necessary and why, making an informed commitment to a particular action possible. Additionally, we show how such argumentation graphs could be used dynamically to adjust plans when contradictory evidence is observed from the environment.
Keywords: Argumentation · Artificial Intelligence · General Machine Intelligence · Causal Reasoning · Self-Explanation
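The proponent/opponent structure can be made concrete in miniature. The Python sketch below (a toy domain with names of our own, far simpler than Assumption-Based Argumentation proper) shows a plan step surviving only while no observed contradiction attacks it:

proponents = {
    "take_corridor_A": "reaches goal via shortest path",
    "take_corridor_B": "reaches goal via longer path",
}
# attacks: opponent evidence -> the proponent argument it contradicts
attacks = {"corridor_A_blocked": "take_corridor_A"}

def viable_plans(observations):
    """Return plan steps not attacked by any currently observed contradiction."""
    attacked = {attacks[obs] for obs in observations if obs in attacks}
    return {p: why for p, why in proponents.items() if p not in attacked}

print(viable_plans(set()))                   # both corridors remain viable
print(viable_plans({"corridor_A_blocked"}))  # plan revised: only corridor B survives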
Yoshihiro Maruyama, Tom Xu, Vincent Abbott
Abstract. Category theory has been successfully applied beyond pure mathematics, and applications to artificial intelligence (AI) and machine learning (ML) have been developed. Here we first give an overview of the current development of category theory for AI and ML, and we then compare various category-theoretical approaches to AI and ML. In particular, category theory for compositional learning in neural networks can be contrasted with category theory for the compositional design and analysis of neural network architectures. There are various approaches even within each type of category theory for AI and ML; among other things, we shed new light on the relationships between the two types of category theory for neural architectures recently developed by the authors (i.e., neural string diagrams and circuit diagrams). We also discuss the significance of categorical approaches in relation to the ultimate goal of developing artificial general intelligence.
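The compositional view of neural networks, one of the two strands the abstract compares, can be gestured at in a few lines of Python; this is our own toy typed-morphism encoding, not the authors' string or circuit diagrams:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Morphism:
    src: str                      # domain, e.g. "R^2"
    dst: str                      # codomain
    fn: Callable

    def __rshift__(self, other):  # composition g . f, written f >> g
        assert self.dst == other.src, "types must match to compose"
        return Morphism(self.src, other.dst, lambda x: other.fn(self.fn(x)))

relu = Morphism("R^2", "R^2", lambda v: [max(0.0, x) for x in v])
summ = Morphism("R^2", "R", lambda v: sum(v))

network = relu >> summ          # a composite morphism R^2 -> R
print(network.fn([-1.0, 3.0]))  # 3.0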
Rafal Rzepka, Ryoma Shinto, Kenji Araki
Abstract. In this paper we present a novel dataset of tacit knowledge represented in natural language (Japanese), inspired by semantic primes categories. The main goals of this dataset are to a) allow investigations regarding the influence of perception data in various cognitive tasks, b) mimic signals for the cognitive processes of an artificial agent to extend its understanding of the world, and c) test the cognitive capabilities of intelligent instances like foundation models. We describe the dataset and share the results of preliminary experiments showing that tacit knowledge recognition is still hard for language models. We also discuss how redirecting neural approaches to cognition only, and then performing reasoning in a symbolic realm, could become beneficial for a new type of simulation before AGIs are equipped with more sophisticated sensory apparatus.
Keywords: Semantic Primes · Tacit Knowledge · Simulated Perception