Generative Systems and Extended Mind as Transformation Similarity Models Connecting Two Cultures

Abstract: The H.Om.E project aims to build cross-cultural bridges between new computational technologies and traditional cultures. To create meaningful connections between the two cultures, this paper collects theoretical models from cognitive science that explain generative AI and perception through analogies of similarity, and applies these models to cultural heritage and new computational technologies in order to speculate on a larger, more complex perceptual system, even if doing so amounts to a poetic act of panpsychism. The paper innovatively regards both human and non-human residents as networks generated by flowing lines and treats cultural heritage as thinking agents within the framework of Extended Mind Theory (EMT). It proposes using generative AI tools, such as Variational Autoencoders (VAEs), to extract compressed representations from cultural heritage and nature, comparing this process to the way cognitive science analogizes generative AI to perception. The discussion includes the possibility of using these compressed representations to support this network of thinking objects. At the interdisciplinary philosophical level, the paper also discusses the possibility of using generative tools to obtain abstract representations that serve as higher-level concepts encompassing the lower-level technical vocabularies of the experts from various fields within the research team. This approach may help alleviate the two-cultures problem between the sciences and the humanities.

Keywords: explanatory modeling, theory and models, consciousness, textiles, philosophy of artificial intelligence, extended mind

1. Introduction

One of the goals of the H.Om.E project is to connect the similarities between cultural heritage sites around the world. The collaboration with cultural astronomy in the I_C project has provided an insight: similarities between cultures, in traditional textiles, symbols, languages, scripts, and astronomical systems, can be read as evidence of a larger, more complex system. This method of drawing analogies between two systems on the basis of their similarities is inspired by the relationship between cognitive psychology and artificial intelligence systems, two sides of the same coin (Müller 2023), which offer a bidirectional depiction of perception in both biological and machine contexts. By collecting the perspectives of various authors, this paper aims to support a worldview of a network composed of cultural objects endowed with cognition.

Technology has become the physical embodiment of the mind. The mental states of humans a hundred thousand years ago might be very different from those of humans today, but the physical brain of a hundred thousand years ago is probably not much different from the brain of today. At some point, humans created symbol systems (such as writing and images) that allowed perceptions and ideas to be transmitted and reinterpreted. In a sense, structured coding, whether in textiles or in large language models (LLMs), embodies our thoughts and ideas: it lets us express and place our internal concepts in the external world, where they become powerful tools for retrieving information and resolving uncertainty. Not only humans but other animals use tools and behaviours to understand and respond to the environment, which is an extended cognitive process.

Hypothesizing a material-based cognitive Om network using the concept of the extended mind may look like an unexplanatory and limited panpsychist hypothesis. It nevertheless provides a form of transformation for traditional cultural objects and contemporary computational technologies, offering an alternative, from the perspective of reflective ethnography, for connecting contemporary computational technologies with the field of ancient cultural preservation. At the design level, the encoding and decoding concepts shared by information-generating consciousness systems and art-generating systems provide a higher level of abstraction, grounded in the science of consciousness, that can encompass the conceptual ambiguities between traditional and contemporary technologies. This offers a functional foundation for the visualization of textile data and the generative architectural design work in this study. In other words, the participation of generative systems can provide a form of common language between the two cultures.

2. The explanatory models for abstraction transformation

The abstract correspondence between computational models and the human mind is a philosophical issue, and clarifying the relationship between them is a necessary task. Reviewing the modeling history of these explanatory systems will aid the future metaphysical work of this research. Any discussion of neural network systems having psychological attributes quickly encounters resistance: these non-biological systems lack bodies and, in most details, are very different from our brains. This section collects relevant explanatory work in cognitive neuroscience, noting that these explanatory theories are mostly based on connectionist systems. For reasons of space we skip the historical debate between connectionism and symbolism, keeping in mind that although connectionism is currently mainstream, it too is an idealized model. Many researchers emphasize the limitations of explanatory systems but also suggest that we underestimate the importance of the similarities between perception and deep convolutional neural networks (DCNNs). They argue that these explanatory systems help in the design of deep learning tools and in our understanding of cognitive functions such as attention, imagination, planning (prediction), memory, curiosity, and creativity (Buckner 2023).

Even if the similarity between computational models and brain neurons rests on a particular explanatory theory, it is largely a matter of the philosophy of science; science itself might even be considered an idealized model. When we consider the principles of modeling further, we need to turn to the philosophy of science, here meaning the study of scientific modeling methods and, in particular, the legitimacy of idealization and abstraction in the modeling process. Miracchi points out that, in most cases, the agent models that computer scientists designate for the phenomena they discuss are simple stipulations without empirical or philosophical argument; the stipulated definitions are often vulnerable to obvious counterexamples and resist empirically based revision. The philosophy of science literature explores how models represent reality, how models are used in scientific research to derive meaningful conclusions (figure 1), and how the similarity and relevance between models and target systems are determined. AI computational models are like abstract paintings that explain the human perceptual system. Even those who believe that human cognitive abilities and the processing of computational models are completely different might recognize the significant implications of deep learning for the human mind. All psychological behaviors are essentially conscious or normative, and the interaction with the world in the perceptual process is continuous and dynamic; even if these characteristics cannot be realized in machines, it cannot be denied that deep learning models share some commonalities with the human mind (Miracchi 2019).

Figure 1. A logical empiricist picture of a scientific theory. Reproduced from Feigl (1970).

I would like first to review the concept of isomorphism proposed by Hofstadter, which can help us better understand the similarities between systems and the classical sources of these explanatory accounts. Isomorphism refers to a correspondence between the parts of two complex systems in which each part of one system has a counterpart in the other, and these parts play similar functional roles within their respective systems. The strict definition of isomorphism comes from mathematics, however, and in practical applications we usually deal with coarser or partial isomorphisms. A famous example is how Hubel and Wiesel's research on the cat's visual system inspired the design of DCNNs. In simple organisms such as worms, neural cells show a high degree of isomorphism between individuals, but in more complex organisms this mapping relationship quickly diminishes. Piccinini provides an explanation of how his so-called "neural computation" model accounts for mental abilities (Piccinini 2015, 2020a). The key to his view is that artificial neural network models may identify various aspects of biological neural mechanisms that, if arranged correctly, produce certain aspects of human cognition. "Aspect" here is a technical term: an aspect of a system is a property it possesses that is neither identical to nor entirely different from its underlying components but is jointly determined by the arrangement of those components. Stinson argues that the relevant relationship is not mere behavioral mimicry but shared membership in an abstract mechanical structure, which she calls "generic mechanisms." In other words, we need to identify an abstract mechanical structure, defined by types of structural properties such as component types and their connections, that can be shared between the brain and DCNNs and thereby exhibit a certain range of similar behaviors (figure 2). The tempting inference is that because certain aspects of DCNNs behave in a given way, the corresponding aspects of human minds or brains must behave in the same way. This inference cannot be justified merely because the model mimics certain behaviors of the target system, since the two systems may achieve those results in different ways (Stinson 2018).

Figure 2. Stinson’s “generic mechanism” viewpoint suggests that if computational models (such as deep neural networks) instantiate the same type of abstract mechanical structure, they can help explain the phenomena realized by target systems (such as the brain). Because they are instances of the same universal mechanism, inferences about computational models can be extended to target systems based on the similarity of their abstract mechanisms. Theorists should, of course, be cautious that these inferences from computational models to target systems are derived from their shared abstract mechanical structure rather than their different specific implementations (Stinson 2020).

Miracchi's approach requires theorists to develop three distinct but interrelated background models to determine how artificial neural network models bear on human thought. We need an "agent model" that provides a theory of the relevant psychological phenomena (which may include irreducible appeals to consciousness, normativity, or other intentional properties); a "basis model" that describes the characteristics of the artificial system in computational, mechanical, and/or behavioral terms; and a "generative" model that explains how changes in the basis characteristics affect or determine features of the agent (Table 1).

Table 1. The Three Types of Models (with Their Respective Roles) Recommended by Miracchi's (2019) Generative Difference-Maker Approach to Explanation

Using AI generative models to describe processes of mind or memory often involves balancing abstraction and concreteness. On one hand, abstract models need to be simple enough to be understandable and workable; on the other, they must capture important details and dynamic processes of the actual systems. Cao and Yamins' "3M++" account (2021) suggests developing a model of the target system and a model of the computational system, and then specifying "transformation similarity" mappings that correlate fine-grained features of the computational model with those of the target model, so that the impact of abstract dynamical changes in one system can be used to predict the impact of corresponding changes in the other (Figure 3). This resembles Hofstadter's high-level abstract concepts in the brain, though he describes them in terms of symbols and subsystems (Hofstadter 1979).

Figure 3. Cao and Yamins' (2021) "3M++" perspective on the explanatory relationship between computational models and underlying systems. Similar to Miracchi's view, their approach requires a third theory, which specifies "transformation similarity." This allows the state transitions that generate behaviors in a computational model to be mapped onto the state transitions that generate behaviors in the target system. They point out that low-level implementation details can differ between one instance of a target system that generates biological behavior (e.g., a human) and another instance (e.g., another person, or a member of a different species such as a monkey); the lowest level of organization (e.g., specific brain activities or neural state transitions) must be mapped using the same transformation-similarity method. (Cao and Yamins 2021)
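As a concrete, if toy, illustration of what such a transformation-similarity mapping might look like in code, the following sketch fits a linear map from the internal states of one system to those of another and checks whether the map generalizes to held-out stimuli. It is our own minimal construction, not an implementation from Cao and Yamins' papers, and all array names and sizes are hypothetical.

```python
# Minimal sketch of a "transformation similarity" mapping: fit a linear map
# from one system's internal states to another's, then test generalization.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recordings: 200 stimuli, 64 "units" in a computational model
# and 32 "units" in a target system (e.g. recorded neural responses).
model_states = rng.normal(size=(200, 64))
shared = model_states[:, :32]  # pretend some abstract structure is shared
target_states = shared @ rng.normal(size=(32, 32)) + 0.1 * rng.normal(size=(200, 32))

# Split stimuli into a fitting set and a held-out set.
fit, held_out = slice(0, 150), slice(150, 200)

# Least-squares linear map from model states to target states.
mapping, *_ = np.linalg.lstsq(model_states[fit], target_states[fit], rcond=None)

# If the two systems share abstract structure, the map should generalize:
# predictions on unseen stimuli should correlate with the actual target states.
pred = model_states[held_out] @ mapping
r = np.corrcoef(pred.ravel(), target_states[held_out].ravel())[0, 1]
print(f"held-out correlation between mapped model states and target states: {r:.2f}")
```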

Here are some key points about explanatory modeling:

Metaphysical Status of Models

Model systems and the target systems they help us understand belong to the same type. Models can be viewed as abstract entities that can be studied mathematically, and whose abstract properties need to be mapped onto the physical properties of the target. With proper design, models can reflect part of the reality of the target system, but models and simulations do not equate to reality (Godfrey-Smith, 2009).

Similarity

The philosophy of science literature discusses the extent to which computational models need to resemble target systems to produce relevant and generalizable results. This is particularly important for understanding connectionist models, as these models attempt to simulate the workings of neural networks. The relationship between models and targets should go beyond simple mimicry and seek more complex correspondences.

Idealization

Idealization involves simplifying or altering details in the modeling process to make the model easier to work with or better suited to the research purpose. For example, connectionist models often ignore irrelevant neural details in order to focus on the key similarities between model and target, which makes connectionism itself an idealized model.

Abstraction

Abstraction helps address modeling issues when computational models do not have the same level of detail as the target systems. Generative systems help explain how humans form concepts and engage in abstract thinking: by simulating the hierarchical structure of concepts in the brain, these systems can demonstrate how humans abstract away from concrete experiences and apply the results to new contexts, which matters for understanding language comprehension and creative thinking. These explanatory theories connecting human cognitive abilities with computational models provide a reference for our modeling and are worth further in-depth study. DCNNs offer mechanistically useful explanations for cognitive neuroscience even if they do not provide "real" explanations; these physical systems can effectively learn to perform tasks using specific types of input data. "How-possibly" models may also be indirectly useful in various other roles in the context of discovery, even if they offer only a very partial explanation of how the abstract systems operate.

3. EMT for material culture

Understanding the thought processes in the brain involves two key challenges: explaining how low-level neural firing leads to high-level symbolic activation and communication, and developing a theory that explains high-level symbolic activation independently of low-level neural events. If we can achieve the latter, then intelligence can be implemented on hardware different from the brain, indicating that intelligence is a software property that can be extracted from its physical substrate. This would mean that phenomena like consciousness and intelligence operate at a high level and, while dependent on low-level processes, can be separated from them. Conversely, if the patterns of symbolic activation require specific neural hardware, intelligence remains confined to the human brain and explanation becomes more challenging (Hofstadter, 1979).

Having recognized the importance of modeling explanatory theories, I would like to introduce the application of the Extended Mind Theory (EMT) to material culture research and its functionalist application beyond philosophy, particularly its relevance to artworks, wearable devices, and media. Clark and Chalmers' Extended Mind Theory posits that cognitive processes can exist outside the brain, encompassing external devices and environments, such as notebooks or digital tools, which act as components of our cognitive systems. This view challenges the traditional notion that the mind exists solely within the brain, suggesting that our mental activities can be distributed across external resources. EMT asserts that mental states and cognitive functions sometimes depend on organized processes and mechanisms that span the boundaries of brain, body, and world. External resources, located outside the skull, can become part of the cognitive apparatus under certain conditions (referred to as the glue-and-trust conditions): they count as part of our mind only if they are portable, easily accessible, and automatically trusted (reliable). The Parity Principle (PP), an important principle of EMT, is well illustrated by Clark and Chalmers' story of Otto's notebook and Inga's memory. The information in Otto's notebook, which guides his actions, plays the same causal role as Inga's biological memory does in guiding hers. PP therefore invites us to assess whether a state can be considered a belief partly on the basis of the causal role it performs:

"If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process" (Clark & Chalmers, 1998, p. 2).

Research on cognitive ecologies and material culture also galvanized EMT research (Knappett, 2011; Malafouris, 2004). Studies of so-called cognitive artefacts, drawing on the concepts of "exograms" (Donald, 1991) and "meshworks of things" (Ingold, 2010), demonstrated that objects can have a cognitive life of their own. This inspired EMT proponents to develop the idea that cognition primarily exists as an enactive relationship between and among people and things. The central notion of this line of research is material engagement: a synergistic process involving the material forms that people create and build, through which human cognitive and social life is mediated and often constituted. One example of material engagement is culturally specific technologies, which can be said to have constituted part of human cognitive architectures since at least the Upper Paleolithic period (Donald, 1991). Memory research in cognitive psychology and EMT, for instance, has led to a new understanding of recall, now often described as a constructive and creative reasoning process rather than mere reproduction. Some memory theorists have begun to view the representational carriers in memory, and the processes of memory themselves, as effectively spanning brain, body, and world. More precisely, proponents of extended memory believe that the stable storage of information can, in many cases, only be achieved within a social context and through the integration of biological and external materials (e.g., symbols, technologies, and cultural artefacts) (Sutton, 2006).

Critics argue that this emerging framework may seem too loose and all-encompassing. Adams and Aizawa (2001: 62) complain that the potential scientific subject matter of distributed cognition would form "an unscientific hodgepodge." They point out, for instance, that Merlin Donald's pioneering discussion of the history of human use of external symbol systems, "exograms," provides a rich description that includes the development of "body decorations, grave decorations, sculptures, megaliths, hieroglyphs, cuneiform, maps, charts, and musical notation" (Adams and Aizawa 2001: 58, citing Donald 1991). Supporters respond that "the study of distributed cognition is essentially a study of the diversity and subtlety of coordination." Distributed cognition theory attempts to answer the key question of how the elements and components of distributed systems (people, tools, forms, devices, maps, and less obvious resources) can be coordinated well enough for the system to adapt and accomplish its tasks.

Ingold responds to these contemporary discussions, which range from anthropology and archaeology to art history and material culture studies, from an ontological perspective, using the "parliament of lines" to argue that objects are not constant solids but continuously flowing lines. The distinction between the lines of flow of the meshwork and the lines of connection of the network is critical, yet this distinction is often blurred in what is known as actor-network theory. That theory's roots lie not in environmental thinking but in the sociological study of science and technology, and much of its appeal in that field comes from its claim that agency is not concentrated in human hands but is distributed among all the elements that are connected or mutually implicated in a field of action.
The term "actor-network" first entered the Anglophone literature, however, as a translation of the French "acteur réseau." As one of its leading proponents, Bruno Latour, later noted, the translation gave it a significance that was never intended. In everyday language, thanks to innovations in information and communication technology, the defining attribute of a network is connectivity (Latour 1999: 15). Yet "réseau" can refer to both networks and meshworks: the texture of fabric, the mesh of lace, the network of the nervous system, or a spider's web. The lines of a spider's web, for example, do not connect points or join things as the lines of a communication network do. They are more like materials exuded from the spider's body, laid down as it moves about. In this sense they are extensions of the spider's existence, trailing into the environment; they are the paths along which it lives, perceives, and acts in the world. The original intention of "acteur réseau" was that it be composed of such generative lines. With this support and formal generalization, we can hypothesize that both ancient textiles and current DCNN technology can be seen as extensions of the human mind, suggesting that the causal relationships of these minds can persist, as abstract structures, in other complex systems.

4. The meanings of generative systems as abstract transformation models

One of the goals of this research is to discuss how to use generative AI tools, such as Variational Autoencoders (VAEs), to analyze cultural heritage items such as ancient languages, astronomical data, and textiles. The compressed features extracted from these items can be considered a foundation for supporting Tim Ingold's concept of the meshwork of things. On Ingold's view, the compressed features of a VAE can be regarded as a form of abstract external memory (exogram) that externalizes cognitive processes and integrates them into the material world, forming a network of things with cognitive capabilities. These theories help ground the realization of artificial artefacts. At the 2017 "Tribe Against Machine" event, artists created the first prototype of ethnic costume replicas by combining electronic materials, conductive threads, and traditional clothing elements. The next prototype design aims to use core rope memory technology to create wearable art devices that electronically preserve heritage data. The further step proposed in this paper is to use weaving techniques to create a woven multiplier matrix composed of memristors (Figure 4).

Figure 4. "Fuzhi" was the core concept of Tribe Against Machine and represents the transformation and innovation of traditional culture through contemporary technology, re-shaping it into a tangible medium. It is a conceptual project aimed at producing a "journal" that speaks for traditional culture in the form of smart clothing. The left image shows a replica of Atayal attire made by camp artists using electronic textile technology. The middle image details the structure of core rope memory, and the right image explains how memristors serve as core components of multiplier matrices in AI chips and their physical products.

Viewing the relationships between the real world and objects outside the skull as intracranial cognitive functions allows us to apply high-level abstractions from cognitive neuroscience to the low-level neuronal framework in interdisciplinary communication. Such a framework can map the methodologies of cognitive neuroscience onto the history of human material culture, enabling us to reinterpret the arrangement of things from the perspective of objects with cognitive capabilities. Following the earlier discussion of explanatory theories, we can create high-level abstractions that encompass object properties which are expressed in different languages but are essentially the same. This high-level abstraction could pertain to the definition of the mind, and the generative abstract-transformation method might provide a concrete way to achieve interdisciplinary communication similar to a language of thought (figure 5).
Figure 5. This paper proposes using abstract transformation models to connect different fields, transferring cognitive structures from cognitive neuroscience to other networks of objects. This provides a high-level and low-level understanding structure that may facilitate conceptual translation and communication across interdisciplinary domains.
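To make the proposed workflow more tangible, the following minimal sketch compares heritage items by the latent codes a VAE encoder would assign them. It is illustrative only: the latent vectors below are random stand-ins, and in practice each would be produced by an encoder trained on digitized textiles, scripts, or astronomical records; the item names are invented.

```python
# Compare heritage items by cosine similarity of their (stand-in) latent codes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16-dimensional latent codes for four digitized items.
latents = {
    "textile_A": rng.normal(size=16),
    "textile_B": rng.normal(size=16),
    "script_A": rng.normal(size=16),
    "star_chart_A": rng.normal(size=16),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise similarity in latent space as one crude way of surfacing
# cross-cultural correspondences between items.
names = list(latents)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} ~ {b}: {cosine(latents[a], latents[b]):+.2f}")
```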

Modeling memory in cognitive neuroscience using generative systems

In cognitive neuropsychology, the use of abstract models faces many challenges. These abstract models are often implemented through deep convolutional neural networks (DCNNs), which attempt to simulate the operational mechanisms of the human mind, particularly high-level cognitive functions such as memory and imagination. A recent trend in psychology, neuroscience, and the philosophy of memory is to blur the distinction between imagination and memory, and consequently the distinction between memory's primary function of pointing to the past and its function of pointing to the future (De Brigard 2014; McClelland 1995; Robins 2019; Schacter and Addis 2007). In other words, this research direction suggests that even normal recollection involves a degree of creative reconstruction, which seems oriented towards future decision-making rather than towards accuracy about the past (Buckner 2024). Just as the cat's visual system is related to DCNNs, VAEs are also related to biological mechanisms. Lashley's cortical-lesion experiments showed that it was impossible to find a specific area of the rat brain corresponding to the memory of maze paths. The neurosurgeon Wilder Penfield, on the other hand, demonstrated the exact opposite: that specific memories are localized to particular areas. One possible explanation for these contradictory results is that memories are encoded locally but redundantly, across different cortical regions, as a defensive strategy developed during evolution against cortical losses that may occur in fights or in experiments conducted by neurophysiologists. Another explanation is that memory can be reconstructed from dynamic processes distributed across the entire brain but can be triggered locally. This picture of memory encoding can be modeled well by Variational Autoencoders (VAEs), which learn a "smoother" latent-space representation that covers a wider range of meaningful features and noise parameters (Grover, Dhar, and Ermon 2018). From a connectionist perspective, VAEs nicely illustrate how an agent might learn from environmental stimuli and store information in the form of connections between neurons, and the structure of the latent space also helps explain the generative properties of memory and imagination (Figures 6 and 7).

Figure 6. Depiction of the Variational Autoencoder (VAE) Architecture. In a Variational Autoencoder, the discriminative part of the network runs first, followed by the generative part. Unlike most Generative Adversarial Networks (GANs), these two networks share a common representation of the latent space. Additionally, the representation of latent space factors is decomposed into means and variances, allowing for more control over the sampling of the latent space than GANs. The latent factors in a VAE can be learned or manually defined by the programmer.
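The following PyTorch sketch makes the architecture in Figure 6 concrete: an encoder that outputs means and log-variances over latent factors, a reparameterized sampling step, and a decoder that maps latent seeds back to data space. Layer sizes, the latent dimension, and the dummy batch are illustrative assumptions, not parameters of any system used in this study.

```python
# A minimal VAE sketch in PyTorch (untrained; dimensions are illustrative).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=8):
        super().__init__()
        # Encoder ("discriminative" part): compresses the input and outputs
        # the means and log-variances of the latent factors.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mean = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder ("generative" part): maps a latent sample back to data space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mean, logvar):
        # Sampling the latent space through means and variances gives more
        # control over the latent factors than a GAN's implicit distribution.
        std = torch.exp(0.5 * logvar)
        return mean + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        z = self.reparameterize(mean, logvar)
        return self.decoder(z), mean, logvar

vae = TinyVAE()
x = torch.rand(4, 784)                      # dummy batch standing in for real data
reconstruction, mean, logvar = vae(x)
# Generation: decode seeds drawn directly from the latent space.
samples = vae.decoder(torch.randn(4, 8))
print(reconstruction.shape, samples.shape)  # torch.Size([4, 784]) torch.Size([4, 784])
```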

Generative information as consciousness

The implications of the brain using generative models to produce representations are rarely considered. In cognitive psychology, generative systems are also used as an important concept to explain consciousness, and many authors believe that internally generated information is related to consciousness. Andy Clark's predictive-mind theory emphasizes that the brain is a system that constantly generates predictions and compares them with actual perceptions; prediction errors are used to update the brain's internal model, allowing the brain to better adapt to its environment. VAEs consist of two parts: an encoder, which compresses data into a latent representation, and a decoder, which decompresses that representation back towards its original form. Ryota Kanai and colleagues relate this to information generation in the brain. There, information generation corresponds to top-down predictions in the predictive coding framework, in which high-level representations generate predictions and pass them to lower-level representations; when predictions do not match actual sensory input, prediction errors feed back to the high-level representations and update them. This bidirectional interaction is similar to the encoder and decoder of a VAE (Figure 7). Although it remains controversial whether the brain truly uses a predictive coding architecture, this provides an interesting abstract transformation model that may correspond to how the biological brain operates. Additionally, the Kolmogorov theory of consciousness proposes that consciousness is produced by a compressed model of the world: the conscious system is considered an efficient compressor that tries to represent sensory input in the simplest possible way. When compressing a JPEG image, for example, repetitive patterns do not need to be recorded for every pixel; by assuming that they will continue to repeat, only the changes need attention. In the brain, these changes are treated as errors. This is considered a highly efficient way of perceiving and processing.

Figure 7. Comparison of Information Generation in (a) Autoencoders and (b) Predictive Coding. (a) In an autoencoder, the encoder part (shown in red) compresses sensory information into compact representations in the latent space. This representation is then decoded into sensory data format. The decoder (shown in blue) can use seeds selected from the latent space to generate counterfactual information. Variables z1 and z2 represent latent variables. (b) In the predictive coding hypothesis of the biological brain, bottom-up error signals (shown in red) correspond to data compression or encoding in the autoencoder, while top-down predictions (shown in blue) correspond to information generation. Note that in predictive coding, Ryota Kanai and colleagues hypothesize that top-down predictions generate conscious experiences.
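A toy numerical sketch of the loop in Figure 7b may help: a top-down generative matrix predicts the input, the bottom-up error is fed back, and the higher-level representation is updated to reduce that error. This is our own loose rendering in the spirit of Rao and Ballard (1999), not their model; the dimensions, weights, and learning rate are arbitrary.

```python
# Toy predictive-coding loop: top-down prediction, bottom-up error, update.
import numpy as np

rng = np.random.default_rng(0)

n_input, n_latent = 20, 5
W = rng.normal(scale=0.3, size=(n_input, n_latent))  # generative (top-down) weights
x = rng.normal(size=n_input)                          # sensory input
r = np.zeros(n_latent)                                # higher-level representation

learning_rate = 0.05
for step in range(200):
    prediction = W @ r                   # top-down prediction (blue in Figure 7b)
    error = x - prediction               # bottom-up prediction error (red in Figure 7b)
    r += learning_rate * (W.T @ error)   # update the representation to reduce the error

print("remaining error:", float(np.mean((x - W @ r) ** 2)))
```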

Generative systems in art

Philip Galanter has developed a comprehensive series of theories on generative art. He believes that artists have always learned from nature to create exciting new works. Art, however, is not just the creation of objects; it is also part of a broader arc of human intellectual progress situated within a larger cultural context. He combines the concepts of generative art with evolutionary art, which supports the current study's aim to integrate cognitive neuroscience systems with generative systems.

“Generative art refers to any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art.”

He also points out that the key element of generative art is the use of an external system to which the artist cedes some or all control. He emphasizes, first, that the term generative art refers only to how the art is made; it says nothing about why it is made that way or about its content. Second, generative art is not tied to any particular technology; it may be high-tech or not. Third, the system should have clear rules and structure and should operate with some independence from outside intervention. These criteria do not exclude entirely hand-made artworks; they only mean that control over certain aspects of the creation is handed over to an external system, so that some decisions are not made intuitively by the artist. The symmetry systems of ancient textiles, for example, are generative in this sense, because the placement of the patterns is not determined entirely by the craftsman but by manual symmetry algorithms (a small illustrative sketch appears below).

The growing divide between the sciences and the humanities was first widely recognized by C. P. Snow in his 1959 Rede Lecture, "The Two Cultures": literary intellectuals thought scientists were too optimistic and ignored the human condition, while scientists thought literary intellectuals lacked foresight, were anti-intellectual, and were concerned only with the present. As the humanities adopted more postmodern attitudes, they became relatively pessimistic, tending towards radical relativism. Postmodernism holds that concepts such as progress, reason, and empiricism, emphasized by the Enlightenment and the scientific project, are products of specific historical and cultural contexts and have no universal applicability or absolute truth; these concepts and methods, like the myths of other cultures, are just one way humans understand and interpret the world rather than the truth. Philip Galanter's "complexism" is a new trend and worldview that emerged after postmodernism. Complexism projects the worldview and attitudes of complexity science onto the problems of art and the humanities, providing a higher-level synthesis that encompasses modern and postmodern concerns, attitudes, and activities. Galanter believes that complexism must go beyond postmodern misunderstandings of scientific research and recognize that even physical systems are full of uncertainty: from classical to modern physics, the deterministic universe has been replaced by an uncertain, statistical one. Mathematics, too, has limits of provability: Gödel's incompleteness theorems show that any sufficiently powerful consistent axiomatic system contains truths that cannot be proved within it, Turing's halting problem gives the algorithmic analogue, and Chaitin's results in algorithmic information theory point to countless further unprovable truths within axiomatic systems. These discoveries did not hinder the progress of science or mathematics; rather, they accompanied the major victories of twentieth-century science and mathematics, which achieved unprecedented progress within the bounds of uncertainty and incompleteness. Properly integrated into the cultural context, these powerful ideas provide rich creative material for complexist artists. They also imply that we might not be able to fully understand consciousness through existing mathematical or computational theories, since there may be truths or phenomena within consciousness that transcend these theoretical frameworks, which lends significance to the innovative research approaches of this study.

Figure 8. Philip Galanter’s concept of complexism provides a quantitative standard based on effective complexity to bridge worldviews and generative systems.
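Returning to the earlier point about textile symmetry as generative art, the short sketch below lets a small rule system, rather than the maker's moment-to-moment intuition, decide where a motif appears: the motif is mirrored and then tiled into a band. The motif and rules here are invented purely for illustration.

```python
# Rule-based symmetry band: reflection plus translation applied to a motif.
import numpy as np

motif = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
])

def band(motif, repeats=4):
    """Mirror the motif horizontally, then tile the mirrored unit into a band."""
    unit = np.concatenate([motif, np.fliplr(motif)], axis=1)   # reflection symmetry
    return np.concatenate([unit] * repeats, axis=1)            # translation symmetry

pattern = band(motif)
for row in pattern:
    print("".join("█" if cell else "·" for cell in row))
```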

5. Conclusion

Combining the abstract transformation models of cognitive neuroscience with theories such as the Extended Mind and the network of things is an innovative research approach. This opinion paper reviews explanatory modeling methods from various authors in order to reveal and support these connective leaps, drawing support from metaphysics, and aims to find strategies for future explanatory modeling in the H.Om.E project. The discussions in this paper show how traditional culture can be connected with new computational technologies within a broader vision. This approach has its limitations, of course; I do not believe it can explain all cultural phenomena, especially when insights are generated solely from data without traditional fieldwork. Nevertheless, it serves as a meaningful method for interdisciplinary communication, particularly in bridging generative design with traditional archaeology and anthropology. Moreover, the analysis of generative systems reveals their connection to the concept of time: generative models such as Variational Autoencoders (VAEs) demonstrate how compression and generation can be used to simulate the bidirectional relationship between memory and consciousness, giving abstract philosophical expressions a physical form. In summary, the H.Om.E project should continue to explore these interdisciplinary explanatory theories and incorporate these new perspectives into its modeling strategies. At the same time, it is crucial to maintain respect for and understanding of cultural entities in the application of technology, avoiding the cultural alienation caused by excessive abstraction and dehumanization. In this way we can establish a more comprehensive and profound framework for exploring the role and position of technological objects in cultural cognitive systems, and provide methodological support for future artistic creation.

References

“Algorithms and Automation: Governance over Rituals, Machines, and Prototypes, from Sundial to Blockchain.” n.d. Routledge & CRC Press. Accessed December 24, 2023. https://www.routledge.com/Algorithms-and-Automation-Governance-over-Rituals-Machines-and-Prototypes/Kera/p/book/9781032038636.

Buckner, Cameron J. 2023. From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. 1st ed. New York: Oxford University Press. https://doi.org/10.1093/oso/9780197653302.001.0001.

Cao, Rosa, and Daniel Yamins. 2021. “Explanatory Models in Neuroscience: Part 1 — Taking Mechanistic Abstraction Seriously.” arXiv. https://doi.org/10.48550/arXiv.2104.01490.

Cao, Rosa, and Daniel Yamins. 2021. “Explanatory Models in Neuroscience: Part 2 — Constraint-Based Intelligibility.” arXiv. https://doi.org/10.48550/arXiv.2104.01489.

Clark, A., and D. Chalmers. 1998. “The Extended Mind.” Analysis 58 (1): 7–19. https://doi.org/10.1093/analys/58.1.7.

Clark, Andy. 2015. “Embodied Prediction.” In Open MIND, edited by Thomas Metzinger and Jennifer M. Windt. Open MIND. Frankfurt am Main: MIND Group. https://doi.org/10.15502/9783958570115.

Chalmers, David J. 1995. “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies 2 (3): 200–219.

Caillon, Antoine, and P. Esling. 2021. “RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis.” arXiv. https://arxiv.org/abs/2111.05011.

Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books. Chapters 10 and 11.

Godfrey-Smith, Peter. 2009. “Models and Fictions in Science.” Philosophical Studies 143 (1): 101–16. https://doi.org/10.1007/s11098-008-9313-2.

Goff, Philip. 2009. “Why Panpsychism Doesn’t Help Us Explain Consciousness.” Dialectica 63 (3): 289–311. https://doi.org/10.1111/j.1746-8361.2009.01196.x.

Galanter, Philip. 2008. “Complexism and the Role of Evolutionary Art.” https://doi.org/10.1007/978-3-540-72877-1_15.

Galanter, Philip. 2008. “What Is Complexism? Generative Art and the Cultures of Science and the Humanities.” https://www.semanticscholar.org/paper/90da494215c460a8f8b7e518b75795f2177d269d.

James, William. 1890. The Principles of Psychology. Vol. 1. New York: Henry Holt and Company.

Kanai, Ryota, Acer Chang, Yen Yu, Ildefons Magrans de Abril, Martin Biehl, and Nicholas Guttenberg. 2019. “Information Generation as a Functional Basis of Consciousness.” Neuroscience of Consciousness 2019 (1): niz016. https://doi.org/10.1093/nc/niz016.

Millière, Raphaël, and Cameron Buckner. 2024. “A Philosophical Introduction to Language Models — Part I: Continuity With Classic Debates.” arXiv. http://arxiv.org/abs/2401.03910.

Miracchi, Lisa. 2019. “A Competence Framework for Artificial Intelligence Research.” Philosophical Psychology 32 (July):589–634. https://doi.org/10.1080/09515089.2019.1607692.

Müller, Vincent C. n.d. “Philosophy of AI: A Structured Overview.”

Rao, Rajesh PN, and Dana H. Ballard. 1999. “Predictive Coding in the Visual Cortex: A Functional Interpretation of Some Extra-Classical Receptive-Field Effects.” Nature Neuroscience 2 (1): 79–87. https://doi.org/10.1038/4580.

Ruffini, Giulio. 2017. “An Algorithmic Information Theory of Consciousness.” Neuroscience of Consciousness 2017 (1): nix019. https://doi.org/10.1093/nc/nix019.

Rosner, Daniela K. 2018. Critical Fabulations: Reworking the Methods and Margins of Design. MIT Press.

Stinson, Catherine. 2020. “From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence.” Philosophy of Science 87 (4): 590–611. https://doi.org/10.1086/709730.

Stinson, Catherine. 2018. “Explanation and Connectionist Models.” In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo, 1st ed., 120–33. Abingdon and New York: Routledge. https://doi.org/10.4324/9781315643670-10.

Sun, Ron. 2006. “The CLARION Cognitive Architecture: Extending Cognitive Modeling to Social Simulation.” Cognition and Multi-Agent Interaction, 79–99.

Tononi, Giulio. 2004. “An Information Integration Theory of Consciousness.” BMC Neuroscience 5:42.