The 17th Annual AGI Conference

Accepted posters

AGI-24 is, first and foremost, a technical research conference, inviting AI researchers, academics, and industry professionals to submit and present original research papers. The AGI conference series is the only major conference series devoted wholly and specifically to the creation of AI systems possessing general intelligence at the human level and ultimately beyond.

By bringing together active researchers in the field to present results and discuss ideas, we accelerate progress toward our common goal. The global AGI community is encouraged to review the accepted posters for the 2024 AGI Conference to understand the growing shape and scope of AGI research, development, and thinking. Many researchers have also submitted short video previews of their posters, providing a visual introduction to their topic and research.

Author(s)
Paper Title, Abstract, Download Link
Alfredo Ibias Martínez, Guillem Ramirez-Miranda, Enric Guinovart and Eduard Alarcon

Abstract. The Artificial Intelligence field is flooded with optimisation methods. In this paper, we shift the focus to developing modelling methods with the aim of getting closer to Artificial General Intelligence. To do so, we propose a novel way to interpret reality as an information source, which is then translated into a computational framework able to capture and represent such information. This framework can build elements of classical cognitive architectures, like Long-Term Memory and Working Memory, starting from a simple primitive that only processes Spatial Distributed Representations. Moreover, it achieves this level of verticality in a seamlessly scalable, hierarchical way.

Keywords: Cognitive Architectures · Hierarchical Abstractions · Primitive-based Models

Poster: View here

Howard Schneider

Abstract. The Causal Cognitive Architecture is a brain-inspired cognitive architecture in which millions of neocortical minicolumns are modeled as millions of navigation maps, capable of holding spatial features and small procedures. The Causal Cognitive Architecture 7 (CCA7), possessing the same properties as its predecessor (fully grounded, continuous lifetime learning, associative reasoning, full causal reasoning, analogical reasoning, and near-full compositional language comprehension), also possesses superhuman planning abilities and, on a conceptual level, can serve as a proxy for superhuman artificial general intelligence (AGI). A simulation of this architecture, and subsets of it, are used to model pre-mammalian and non-primate mammalian-level artificial intelligence (AI), human-level artificial intelligence (HLAI), superhuman AGI, and links to a large language model (LLM) as a proxy for an alien-like (i.e., non-biologically-based) AGI. The models were tested on a compositionality problem, with the best scores: superhuman >= HLAI > LLM > pre-mammalian/mammalian (p<0.001). Testing on a traveling salesperson problem gave: superhuman > LLM > HLAI > pre-mammalian/mammalian (p<0.001). These results indicate the need to consider intrinsic compositional and planning abilities in the development of AGI systems.

Keywords: Artificial General Intelligence (AGI) · Superintelligence · Cognitive Architecture · Compositionality · Planning

Poster: View here

Maxim Tarasov

Abstract. Starting with the premise that rule-based systems still have their place in the AI toolkit, we explore different ways of implementing such systems. We find that some of the criticisms of rule-based systems, namely their large number of rules and high maintenance cost, can be addressed by using declarative-style programming and a shift to a higher level of abstraction from rules to meta-rules, which operate on the structure of data rather than its contents. We use OpenNARS 4 as an example implementation to demonstrate the advantages of this approach, but the results are generally applicable to other rule-based systems.

Keywords: Non-Axiomatic Logic · AGI · Relational Programming

Poster: View here
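As an editorial illustration of the meta-rule idea described in the abstract above (not the authors' OpenNARS 4 implementation), one structural meta-rule can replace many content-specific rules. The sketch below is hypothetical: instead of writing a separate transitivity rule for each relation, a single meta-rule applies to any relation declared transitive, operating on the shape of facts rather than their contents.

```python
def meta_transitivity(facts, transitive_relations):
    """Derive new facts by applying one structural (meta-)rule to every
    relation declared transitive, instead of one hand-written rule per
    relation. Facts are (relation, subject, object) triples."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Scan all pairs of facts sharing a relation and a middle term.
        for (rel1, a, b) in list(derived):
            for (rel2, b2, c) in list(derived):
                if rel1 == rel2 and rel1 in transitive_relations and b == b2:
                    new_fact = (rel1, a, c)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

# Example: one meta-rule closes the "ancestor" chain without any
# rule mentioning "ancestor" specifically.
facts = {("ancestor", "alice", "bob"), ("ancestor", "bob", "carol")}
closure = meta_transitivity(facts, {"ancestor"})
```

The same function would close any other relation tagged transitive, which is the maintenance-cost argument the abstract makes: the rule count no longer grows with the vocabulary of relations.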

Austin Davis and Gita Sukthankar

Abstract. Mechanistic interpretability (MI) studies aim to identify the specific neural pathways that underlie decision-making in neural networks. Here we analyze both the horizontal and vertical information flows of a chess-playing transformer. This paper introduces a new taxonomy of chessboard attention patterns that synchronize to guide move selection. Our findings show that the early layers of the chess transformer correctly identify moves that are highly ranked by the final layer. Experiments conducted on human chess players laid the foundation for much of our current understanding of human problem-solving, cognition, and visual memory. We believe that the study of chess language transformers may be an equally fruitful research area for AGI systems.

Keywords: Chess Cognition · Mechanistic Interpretability · Transformers

Poster: View here

Cedric Mesnage

Abstract. We propose a novel architecture for building an Artificial General Intelligence (AGI) in a virtual environment. To experiment with curiosity, we use as a reward in a reinforcement learning (RL) algorithm the cosine similarity between recent thoughts and past thoughts, expressed as sentences produced by a large language model (LLM). Using the Bellman equation, the agent can decide to act as a standard agent by moving, jumping, performing a task, observing, or thinking. Observing and thinking is the process of modifying its inner dialogue by giving a representation of the environment to an LLM and reflecting on its past thoughts, which consequently changes its predicted Q values and decision making. We have developed an experimental intelligent agent that interacts with the open-source Minetest video game as a virtual environment.

Keywords: AGI architecture · RL · Virtual Environment · LLM

Poster: View here
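The curiosity reward described in the abstract above can be sketched minimally, assuming thoughts have already been embedded as vectors. The function names and the novelty formulation (one minus the highest similarity to any past thought) are our own illustrative assumptions; the abstract does not fix the sign or scaling of the reward.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def curiosity_reward(recent_thought, past_thoughts):
    """Reward novelty: a thought close to earlier ones earns little reward,
    pushing the RL agent toward states that change its inner dialogue."""
    if not past_thoughts:
        return 1.0  # everything is novel at the start
    best_match = max(cosine_similarity(recent_thought, p) for p in past_thoughts)
    return 1.0 - best_match
```

In the full agent, the vectors would come from embeddings of the LLM-generated inner-dialogue sentences, and this reward would feed the Bellman update of the predicted Q values.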

Arisa Yasuda and Yoshihiro Maruyama

Abstract. We propose the concept of the Artificial General Universe (AGU), which is the ultimate form of the metaverse just as Artificial General Intelligence (AGI) is the ultimate form of AI. Here we discuss its fundamental, social, and ethical issues, including the intertwined relationships between AGI and AGU. For one thing, AGU is a universe in which the potential of AGI can be maximised, because the AGU frees agents from real-world constraints, ultimately the conditions of the real physical universe and its fundamental laws (e.g., the laws of motion; it is even possible for an AGI to design a suitable AGU for itself to maximise its performance). For another, AGU itself is operated by a form of AGI, and yet at the same time AGU is a universe in which AI and AGI agents can be accommodated alongside digitalised human agents (or avatars). AGU may thus be regarded as encompassing a vast collection of AIs and AGIs. AI and AGI ethics issues, then, are amplified at a much larger scale in AGU, or even in a premature, incomplete metaverse. Put differently, while AI/AGI Ethics is concerned with issues caused by a single AI/AGI, Metaverse Ethics and AGU Ethics are concerned with issues caused and amplified by Collective AI/AGI. In particular, common issues such as surveillance capitalism, real-virtual border issues, and governance and accountability issues would be severely amplified in the metaverse and the AGU. The acceleration of surveillance capitalism and other social and ethical issues would become even more difficult to stop once the informationalisation of the entire universe becomes reality. It would thus be a pivotal challenge of our time to implement appropriate measures against these issues and to prevent their dystopian acceleration from impairing the well-being of the human race.

Keywords: Metaverse · Artificial General Universe · Artificial General Intelligence · AI Ethics · AGI Ethics · Metaverse Ethics · AGU Ethics · Surveillance Capitalism · Governance and Accountability · Border between the Real and the Virtual · Ethical, Legal and Social Implications

Poster: View here

Zarathustra Goertzel

Abstract. This position paper conjectures that for beneficial AGI, it is necessary and sufficient for AGI systems to care about people and to employ goals whose success is collaboratively determined by the others involved in the situation. Moreover, I posit that any goal whose success can be determined without the consensual feedback of those concerned is likely to lead to the manifestation of dark factor traits. Integrating care reduces the risk that an AGI will be incentivized to seek harmful shortcuts to obtaining satisfactory feedback. Employing collaborative goals reduces the risk that an AGI will optimize for superficial features of success and proxy goals. Together, these ideas propose a fundamental shift away from the traditional control-centric “AI Safety” strategies. This paradigm not only promotes more beneficial outcomes but also enables AGIs to learn from and adapt to complex moral landscapes, thus continuously improving their capacity to contribute positively to the wellbeing of humans and other sentient beings.

Keywords: AI Ethics · Collaborative AI · Value Alignment

Poster: View here