2505.13969v1
# Hypothesis on the Functional Advantages of the Selection-Broadcast Cycle Structure: Global Workspace Theory and Dealing with a Real-Time World
**Authors**: Junya Nakanishi, Jun Baba, Yuichiro Yoshikawa, Hiroko Kamide, and Hiroshi Ishiguro
> *This work was not supported by any organization.*
> ¹Junya Nakanishi, Yuichiro Yoshikawa, and Hiroshi Ishiguro are with the Graduate School of Engineering Science, The University of Osaka, Osaka 560-0043, Japan. ²Jun Baba is with AI Lab, CyberAgent Inc., Tokyo 150-0042, Japan. ³Hiroko Kamide is with the Faculty/School of Law, Kyoto University, Kyoto 606-8501, Japan.
## Abstract
This paper discusses the functional advantages of the Selection-Broadcast Cycle structure proposed by Global Workspace Theory (GWT), inspired by human consciousness, particularly focusing on its applicability to artificial intelligence and robotics in dynamic, real-time scenarios. While previous studies often examined the Selection and Broadcast processes independently, this research emphasizes their combined cyclic structure and the resulting benefits for real-time cognitive systems. Specifically, the paper identifies three primary benefits: Dynamic Thinking Adaptation, Experience-Based Adaptation, and Immediate Real-Time Adaptation. This work highlights GWT's potential as a cognitive architecture suitable for sophisticated decision-making and adaptive performance in unsupervised, dynamic environments. It suggests new directions for the development and implementation of robust, general-purpose AI and robotics systems capable of managing complex, real-world tasks.
## I INTRODUCTION
In recent years, a major research theme in the fields of artificial intelligence (AI), robotics, and cognitive science has been how to implement the advanced intelligence and flexible problem-solving abilities of humans and animals in artificial systems [1, 2]. With technical advances in machine learning (most notably deep learning) and improved robotic hardware, there has been growing interest in "multimodal" and "parallel" architectures that carry out tasks while simultaneously leveraging multiple cognitive functions [3, 4]. However, even when several specialized modules (e.g., vision, language, logical reasoning, motor control) each have excellent capabilities, the methods for exchanging information among, and coordinating control over, modules operating simultaneously and in parallel have not been fully worked out [5].
Against this background, the Global Workspace Theory (GWT), devised by modeling human consciousness, is attracting attention. GWT characterizes "consciousness" from the perspective of information processing structure and proposes a framework in which information that has competed and been integrated among numerous parallel specialized modules is temporarily brought "into consciousness" and then shared system-wide [6]. Since it was first proposed by the psychologist Bernard Baars, GWT has been linked to many empirical findings in neuroscience and cognitive science [7, 8]. More recently, its advantages as an information processing architecture have begun to attract attention in AI research as well. Previous GWT research suggests that the "Selection" process, which integrates information among multiple parallel specialized modules, and the "Broadcast" process, which disseminates the selected information throughout the system, support a wide range of functions, including creative thinking, transfer learning, top-down control, and attention allocation [8, 9, 10]. However, in many of these discussions, "Selection" and "Broadcast" are treated separately, and the effectiveness of the two processes being executed in parallel and intermittently is not fully addressed.
In this paper, we call the process of exchanging information through "Selection" and "Broadcast" the "Selection-Broadcast Cycle", and focus on it. The Selection-Broadcast Cycle concerns information processing with a time dimension: not a single, one-shot computation, but a series of processing steps, such as responding to an environment that changes over time or taking time to search for an answer. Such temporally extended processing is an important research topic in robotics, where real-time processing is required, and in artificial intelligence systems that handle complex tasks requiring long-term learning and adaptation [11, 12]. For instance, during a continuous task that spans a period of time, a robot will inevitably need to change its approach in the course of interactions with humans. Moreover, sensor data are updated moment by moment, and task goals or external conditions may change depending on the situation. There is therefore a need for a real-time processing framework that can dynamically decide, in an online setting, "when and which module to call upon" and swiftly reflect the results in the next step.
Accordingly, this paper focuses on the Selection-Broadcast Cycle structure proposed in GWT and discusses the functional advantages its dynamic, cyclic structure offers from the perspective of applying it to the design of real AI and robotic systems. Specifically, we highlight:

- **Dynamic Thinking Adaptation**: a capacity to dynamically rearrange module execution order, enabling flexible adaptation to unexpected task changes or evolving goals
- **Experience-Based Adaptation**: an acceleration of conscious processing by exploiting past experiences stored in memory modules, facilitating faster predictions and decision-making
- **Immediate Real-Time Adaptation**: a quick intervention route into conscious processing that allows immediate response to real-time changes
Our aim is to theoretically clarify "why such a structure is useful for real-time intelligent systems." By doing so, we hope to offer fresh insights into the design philosophy and implementation guidelines of cognitive architectures based on GWT and contribute to the development of robust, general-purpose AI and robotic systems capable of adapting to complex tasks and unknown environments.
## II LITERATURE REVIEW
### II-A Overview of GWT
<details>
<summary>x1.png Details</summary>

### Visual Description
Partial view of the Global Workspace Theory architecture diagram. Two specialized modules ("Module1" and "Module2"), drawn as purple rounded rectangles, sit inside a dashed gray boundary that delimits the system. Below the boundary, a "Sensor" box and a "Memory" box (black outlines) feed information upward into the system via black arrows, one below each module. Green arrows entering from above point into each module, carrying broadcast information, while purple lines interconnect the two modules and carry their outputs upward; the purple label fragment "Sele" at the top right marks the Selection component, which is cropped out of this view. The color coding matches Figure 2: green for Broadcast, purple for Selection and module interconnection, and black for sensor and memory input.
</details>
Figure 1: Architecture of the Global Workspace Theory
The Global Workspace Theory (GWT) is a cognitive science theory of information processing in consciousness, proposed by the psychologist Bernard Baars [6]. The essence of GWT is a framework in which information competes and is integrated among many specialized modules (e.g., vision, hearing, memory, language) that operate in parallel, and the information that eventually wins is then shared among all modules (Figure 1). The winning information is temporarily retained in a conscious form within a memory area called the "global workspace". Only a limited amount of information can win at a time, and other competing information is considered to be processed unconsciously in the background. In this way, GWT is positioned as a framework to explain the interaction between a serial, limited-capacity conscious process and parallel, large-capacity unconscious processes. This model is supported by numerous experimental findings [7, 8]. For example, in brain imaging studies (e.g., fMRI, PET, EEG), consciously processed stimuli engage extensive regions of the brain, including the frontal and parietal lobes, with recurrent signaling, whereas stimuli that do not reach conscious awareness (i.e., unconscious processing) remain confined to local, transient activity [13, 14]. This is consistent with the mechanism proposed by GWT: once some piece of information wins, it is broadcast globally to the entire system.
On the other hand, GWT mainly addresses the question "What information processing structure do we use?", and does not directly answer "Why did we arrive at this kind of information processing structure?". From a biological and evolutionary perspective, we can address the latter by considering how such a structure might have provided adaptive advantages in terms of survival and reproduction [9]. Previous research has often focused on the part of GWT's information processing structure that competes and integrates information among multiple specialized modules operating in parallel (the Selection process) and on the part that shares the selected information with the entire system (the Broadcast process), and has discussed the advantages and benefits of each.
### II-B Functional Advantages of Selection
In this paper, the process of selecting information from among the information processed in parallel by multiple specialized modules and then integrating it in a global workspace is called the "Selection" process.
#### II-B 1 Diverse Perspectives
By comparing and examining the outputs of multiple specialized modules, it is thought possible to generate a wider variety of solutions and ideas for a given task [15, 16]. For instance, if both a visual module and a language module are operating simultaneously, approaches that capture a problem from a pictorial/imaginative viewpoint can be compared with those that capture it from a linguistic/logical viewpoint. This concept is akin to the notion of "ensemble learning" [17]: by combining multiple models or modules with different specializations, they can complement the diverse aspects that a single model alone would not capture, thereby producing higher predictive accuracy and robustness overall.
Furthermore, the mechanism that integrates multiple parallel modules enables unexpected combinations of knowledge and skills from each module, which is thought to lead to creative thinking [10, 16]. For example, imagine a module responsible for visual thinking, inspired by metaphorical expressions provided by a language processing module, giving rise to a new diagram or prototype, which is then validated by a logical reasoning module. Alternatively, a module specializing in reinforcement learning might combine with a sensorimotor moduleâs proposed action strategy, leading to previously unanticipated solutions or task-execution procedures. The process of generating these incidental or divergent ideas and then evaluating, narrowing down, and integrating them is considered by many to be at the core of creative thinking [18].
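As a minimal, purely illustrative sketch (the module names and salience scores below are invented for this example, not part of GWT itself), the Selection process can be viewed as a winner-take-all competition among parallel specialized modules:

```python
# Hypothetical specialist modules, each proposing a candidate interpretation
# of the same stimulus in parallel. The "salience" values are assumptions
# standing in for whatever competition signal a real system would compute.

def vision_module(stimulus):
    return {"source": "vision", "content": f"image of {stimulus}", "salience": 0.6}

def language_module(stimulus):
    return {"source": "language", "content": f"the word '{stimulus}'", "salience": 0.8}

def select(candidates):
    # Winner-take-all competition: only one candidate enters the workspace.
    return max(candidates, key=lambda c: c["salience"])

modules = [vision_module, language_module]
winner = select(m("apple") for m in modules)  # the winning output is what gets broadcast
```

The losing candidates are simply discarded here; in a fuller model they would continue to be processed unconsciously in the background.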
#### II-B 2 Transfer Learning
When faced with a new task, utilizing the skills already acquired in the specialized modules reduces the need to learn from scratch, and as a result, it is thought that the efficiency and speed of learning will improve [10, 16]. For instance, if there are modules that excel in visual recognition, language processing, or logical reasoning and each is independently trained, then when facing a new domain or a different task, it becomes possible to adapt quickly by making use of the knowledge and representations already accumulated in these modules. This is analogous to "transfer learning" [19] in machine learning. In fact, when adapting a deep neural network learned in one domain (source domain) to another domain (target domain), reusing the lower-level feature extraction parts shortens the early training phase while still delivering high performance.
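The reuse pattern can be sketched in a few lines; the toy "pretrained" extractor and threshold head below are illustrative assumptions standing in for frozen source-domain layers and a small task-specific head, not a real training procedure:

```python
# Toy sketch of transfer learning: a feature extractor "trained" on a source
# domain is frozen and reused, and only a small task-specific head is fit on
# target-domain examples.

def pretrained_extractor(x):
    # Stands in for the frozen lower layers learned on the source domain.
    return (x, x * x)

def fit_head(labeled_examples):
    # Fit only the new head: a threshold on one reused feature.
    pos = [pretrained_extractor(x)[1] for x, y in labeled_examples if y]
    neg = [pretrained_extractor(x)[1] for x, y in labeled_examples if not y]
    threshold = (min(pos) + max(neg)) / 2
    return lambda x: pretrained_extractor(x)[1] > threshold

# Target-domain adaptation touches only the head; the extractor is untouched.
classify = fit_head([(1, False), (2, False), (3, True), (4, True)])
```

Only the head is fit on the new data, which is what shortens the early training phase in the deep-network case described above.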
### II-C Functional Advantages of Broadcast
In this paper, the process of sharing selected information with all specialized modules is called the "Broadcast" process.
#### II-C 1 Shared Attention
It is thought that broadcasting allows each specialized module to concentrate its resources on information that is deemed to be extremely important according to the current goals and environmental conditions, thereby improving the efficiency and accuracy of task execution [16, 20]. For example, consider a robot endowed with multiple sensory modules for vision, hearing, and touch, which is tasked with detecting, identifying, and accurately grasping an object. First, the visual module, operating unconsciously, generates multiple candidates, performing tasks such as location estimation and object classification in parallel. Meanwhile, the hearing module tries to gather hints from environmental sounds or voice commands that could modify actions. The tactile module prepares feedback control for the stage at which the robot actually grasps the object. After the information generated by each module is integrated by the Selection process, if the decision "to combine accurate location estimation from the visual module with minor corrective commands from auditory instructions" wins, that information is shared with all modules via the Broadcast function. As a result, the robot can carry out the plan "move the arm toward the coordinates estimated visually, corrected by auditory information" in coordination across all modules.
This mechanism seems highly relevant to the "Transformer architecture" [21]. Transformers, which demonstrate extremely high performance in various tasks such as natural language processing and image recognition, have a core mechanism known as "self-attention". In self-attention, the inputs (or feature vectors) compute their mutual relevance, enabling the network as a whole to incorporate the necessary contextual information. This mechanism is akin to GWT's account of handling diverse information while spotlighting important items and sharing them throughout the system. Although the Transformer was not designed with the goal of mimicking consciousness, the fact that it achieves such high performance in language processing, image recognition, and more by sharing important information hints at the fundamental usefulness, in an intelligent system, of a strategy that shares the most crucial elements globally.
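To make the analogy concrete, here is a deliberately simplified, single-head scaled dot-product self-attention with no learned weight matrices (a sketch of the mechanism in [21], not a faithful Transformer layer): each element scores its relevance to every input, and its output is a weighted mixture of all inputs, i.e., important items are shared across the whole sequence.

```python
import math

# Toy self-attention: each query spotlights the inputs most relevant to it
# and mixes them into its own output, sharing information across positions.
def self_attention(xs):
    d = len(xs[0])
    outputs = []
    for q in xs:
        # Relevance of this query to every input (scaled dot products).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in xs]
        # Softmax turns scores into attention weights summing to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # Output: weighted mixture of all inputs.
        outputs.append([sum(w * v[j] for w, v in zip(weights, xs)) for j in range(d)])
    return outputs

mixed = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

Each output row is a convex combination of the inputs, with the largest weight on the most relevant one, which is the "spotlight and share" behavior discussed above.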
#### II-C 2 Predictive Coding
Among the specialized modules are those that receive data from sensors (e.g., visual, auditory, tactile). If they receive predictions or metacognition as broadcast information, the performance of a module's output may be enhanced [10, 16]. For example, when the visual module is only processing lower-level features such as raw pixel data and edge information, it will only output tentative recognition results based on local statistics and pattern recognition. However, when higher-level context and objectives, such as "this scene is outdoors and there is a high possibility that there are multiple people in the picture" or "the task is to judge the facial expressions of specific people", are broadcast from the global workspace, the visual module can re-evaluate its output with reference to these predictions and hypotheses. As a result, corrections such as prioritizing the resolution and regions of interest appropriate for the task, or searching more carefully for clues to separate people from the background, can be expected to improve recognition performance and reduce false positives. This aligns closely with the concept of "predictive coding" [22] often discussed in neuroscience and cognitive science. Predictive coding posits that the brain or cognitive system constantly sends top-down predictions from higher (i.e., more advanced) modules to lower (i.e., more basic) modules, while the lower-level modules calculate and return the discrepancy (prediction error) between the actual sensory input and the prediction. If the discrepancy is large, something different from the prediction is likely present in the scene, and the error is returned upstream so that the higher-level module can update or generate new predictions. If the discrepancy is small, the prediction and the actual data largely match, increasing confidence that the scene really is as observed. Through repeated interplay between top-down predictions and bottom-up prediction errors, the entire perception and cognition system dynamically adapts to the environment.
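This top-down/bottom-up loop can be sketched in a few lines; the scalar sensory signal and the update rate below are illustrative assumptions, not a claim about how predictive coding is implemented in the brain:

```python
# Minimal predictive-coding loop: a higher-level module sends a top-down
# prediction, the lower-level module returns the bottom-up prediction error,
# and the prediction is repeatedly revised until it matches the input.

def settle(sensory_input, prediction=0.0, rate=0.5, steps=30):
    for _ in range(steps):
        error = sensory_input - prediction  # bottom-up: discrepancy signal
        prediction += rate * error          # top-down: revise the prediction
    return prediction

estimate = settle(sensory_input=4.0)  # prediction converges toward the input
```

A large error drives a large revision upstream, while a small error leaves the prediction nearly unchanged, matching the two cases described in the text.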
## III HYPOTHESIS
In this paper, in addition to the structural advantages from each of the traditional GWT perspectives (Selection and Broadcast), we newly focus on the advantage of a cycle structure in which information processing occurs through Selection and Broadcast (Selection-Broadcast Cycle). Within this cycle structure, we discuss the dynamic, stepwise information processing in which Selection and Broadcast intertwine in parallel and intermittently.
### III-A Dynamic Thinking Adaptation
The Selection-Broadcast Cycle possesses a structure that can realize serial processing steps of the specialized modules in any order. The serial processing referred to here means processing that is carried out step by step (e.g., a chain of thought [23], inductive and deductive reasoning [24]). In contrast to parallel processing, in which multiple modules operate simultaneously, serial processing is carried out in order, with the information generated or selected by one module passed on as input to the next module. In serial processing, the final answer is derived from the inferences and logical development that take place in the intermediate processing. This process of deriving conclusions in steps allows reliable problem solving and decision making with a small number of inferences and a small amount of logical knowledge for various complex tasks. For example, by simply memorizing the results of addition and multiplication of the digits 0 to 9 and the methodology of longhand arithmetic, you can calculate any addition or multiplication of integers (e.g., 11×2 = 10×2 + 1×2). In this way, by breaking down complex tasks into simpler sub-tasks (i.e., tasks that can be processed using limited memory or simple rules) and dealing with them in stages, it is possible to deal with a wide range of different tasks using relatively little memory capacity.
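The longhand-arithmetic example can be made concrete: with only a memorized table of single-digit products and the stepwise procedure, an arbitrary integer times a single digit reduces to simple sub-tasks, mirroring 11×2 = 10×2 + 1×2 (the function below is an illustration for this paragraph, not part of the proposed architecture):

```python
# "Memorized" single-digit multiplication facts (the 0-9 times table).
MEMORIZED = {(a, b): a * b for a in range(10) for b in range(10)}

def longhand_multiply(n, digit):
    # Serially decompose n x digit into one memorized fact per decimal place.
    total, place = 0, 1
    while n > 0:
        n, d = divmod(n, 10)                    # peel off the lowest digit
        total += MEMORIZED[(d, digit)] * place  # one simple sub-task per step
        place *= 10
    return total

result = longhand_multiply(11, 2)  # 10 x 2 + 1 x 2 = 22
```

A fixed, small memory (100 facts plus the loop) thus covers an unbounded family of tasks, which is the point of the stepwise decomposition above.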
<details>
<summary>x2.png Details</summary>

### Visual Description
Diagram of a GWT-based structure with two modules. An oval node labeled "GW" (the global workspace) sits at the top-center; two rectangular modules, "M1" and "M2", sit side by side below it. A purple path labeled "Selection" rises from the tops of M1 and M2, merges into a single line, and enters the bottom of GW with an upward arrow: the modules' outputs compete, and the selected information enters the workspace. A green path labeled "Broadcast" frames the diagram and terminates with inward arrows at both M1 and M2: the contents of the workspace are shared with all modules. The symmetric placement of M1 and M2 reflects their role as parallel specialized modules, and the two colors mark the two halves of the Selection-Broadcast Cycle.
</details>
Figure 2: Example of GWT-based structure with two modules
<details>
<summary>x3.png Details</summary>

### Visual Description
Diagram of the processing flow in the GWT-based example. An oval "GW" node (the global workspace) sits at the top-center, with rounded rectangles "M1" and "M2" side by side below it. A thick red arrow leaves the bottom of GW and splits, entering the tops of both M1 and M2; a second red arrow loops from the right side of M2 back into itself. A thin light green arrow enters the left side of M1 from a dashed vertical line at the left edge, which marks the system boundary and the point of external input. A thin purple line bridges the top-right corner of M1 and the top-left corner of M2. Red likely highlights the currently active flow from the workspace to the modules (including M2's repeated self-processing), green the external input arriving at M1, and purple the direct link between the two modules.
</details>
Figure 3: Flow of pipeline and GWT process in the GWT-based example
The Selection-Broadcast Cycle process provides a space in which such intermediate inferences and logical developments can be performed freely. Figure 2 shows an example of a simple Selection-Broadcast Cycle structure with two modules (M1, M2). The upper part of Figure 3 shows an example execution procedure over the modules, and the lower part shows the processing flow that realizes this procedure within the Selection-Broadcast Cycle. As shown, the Selection-Broadcast Cycle can execute any procedure over the modules simply by switching the Selection appropriately. Implementing an equally vast serial processing space for intermediate inference and logical development as a fixed pipeline would require a large tree structure composed of many modules; the Selection-Broadcast Cycle instead achieves this with a minimal number of modules through looped information processing.
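The point that the same small set of modules can realize different serial procedures purely by changing the selection schedule can be sketched minimally as follows. The module names, their toy behavior, and the schedule representation are illustrative assumptions, not part of the paper:

```python
# Minimal sketch of a Selection-Broadcast Cycle with two modules (M1, M2).
# Each cycle, Selection picks one module; its output is "broadcast" by
# becoming the new workspace content visible to the next selected module.

def m1(x):
    # Hypothetical specialized module (e.g., perception).
    return f"m1({x})"

def m2(x):
    # Hypothetical specialized module (e.g., planning).
    return f"m2({x})"

def run_cycles(modules, schedule, initial):
    """Execute an arbitrary serial procedure by re-selecting a module
    each cycle and sharing its output through the workspace."""
    workspace = initial
    for name in schedule:                      # Selection
        workspace = modules[name](workspace)   # Broadcast
    return workspace

modules = {"M1": m1, "M2": m2}
# The same two modules realize different serial procedures
# purely by changing the selection schedule:
print(run_cycles(modules, ["M1", "M2"], "x"))        # m2(m1(x))
print(run_cycles(modules, ["M1", "M1", "M2"], "x"))  # m2(m1(m1(x)))
```

A fixed pipeline would hard-wire one of these schedules; here the looped structure leaves the schedule free to change at every cycle.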
Furthermore, this function enables flexible and dynamic processing: the system can try out diverse thought processes and change them in response to changes in the situation. This is a great advantage in situations that are difficult to handle with a fixed pipeline, such as when the processing procedure is unclear or the goal changes partway through. For example, consider a robot exploring a room based on information from multiple sensors (vision, touch, audio input, etc.). At the start of the search, the main objective is to find and follow the shortest route, so the processing is set up to call the object detection module and the route planning module in order. Suppose, however, that during the search the robot repeatedly collides with people along the route. The Selection-Broadcast Cycle then makes it possible to share the problem with the whole system, devise a solution, and modify the processing, for example by calling a human detection module while planning the route. Likewise, if a voice instruction arrives and the task changes, a speech recognition module can be called to share its analysis with the whole system, and the execution order of the vision and route planning modules can be reconfigured in response. Thanks to this variable serial processing, the order in which the necessary specialized modules are called can be flexibly rearranged in response to changes in the situation or new goals, making it possible to accomplish tasks that would be difficult with fixed pipeline processing.
<details>
<summary>x4.png Details</summary>

### Visual Description
## Diagram: Accelerated Thinking (Consciousness Cycle1)
### Overview
The image is a process diagram illustrating a concept titled "Accelerated Thinking," with a focus on "Consciousness (Cycle1)." It depicts two sequential processes originating from a common starting component (M1) and leading to distinct outcomes (M2 and M3), structured to show flow and differentiation between results.
### Components/Axes
- **Title**: "Accelerated Thinking" (centered, underlined, black text; overarching theme)
- **Cycle Label**: "Consciousness (Cycle1)" (centered below the title, gray text; specifies the context of the depicted processes)
- **Top Process Flow**:
- Left component: Rectangular box labeled "M1" (purple border, white fill)
- Right component: Rectangular box labeled "M2" (purple border, white fill)
- Connection: Gray arrow pointing from M1 to M2 (indicates causal/sequential flow)
- **Bottom Process Flow**:
- Left component: Rectangular box labeled "M1" (purple border, white fill; identical to the top M1)
- Right component: Rectangular box labeled "M3" (orange border, white fill)
- Connection: Gray arrow pointing from M1 to M3 (indicates causal/sequential flow)
- Additional detail: Small unlabeled red mark on the right edge of M3
### Detailed Analysis
The diagram is split into two horizontal, parallel process flows, both anchored by the same starting component (M1):
1. **Top Flow**: M1 (purple box) → M2 (purple box) via a gray arrow, representing one pathway from the starting point.
2. **Bottom Flow**: M1 (purple box) → M3 (orange box) via a gray arrow, representing a second, distinct pathway from the same starting point.
The title "Accelerated Thinking" frames the entire diagram, while "Consciousness (Cycle1)" narrows the context to a specific iteration of a cyclical cognitive or computational process.
### Key Observations
- **Color Differentiation**: M3 uses an orange border, while M1 and M2 use purple borders, signaling a qualitative difference between M3 and the other components.
- **Consistent Flow Direction**: Both processes flow left-to-right from M1, emphasizing a unidirectional, causal relationship.
- **Cycle Specification**: The "Cycle1" label implies this is one iteration of a repeating process related to consciousness and accelerated thinking.
- **Unlabeled Mark**: The small red mark on M3's right edge is unlabeled, suggesting an unannotated feature, error, or point of interest.
### Interpretation
The diagram models a cognitive or computational framework for "Accelerated Thinking" within the context of "Consciousness (Cycle1)." M1 likely represents a core starting state (e.g., a mental module, input, or baseline cognitive state) that can produce two distinct outcomes:
- M2 (purple border): A standard or baseline outcome (consistent with M1âs color, implying alignment with the starting state).
- M3 (orange border): A modified, accelerated, or alternative outcome (color change signals a departure from the baseline).
The unlabeled red mark on M3 may indicate a critical point (e.g., a trigger for acceleration, a feedback loop, or an error) not explicitly defined. Overall, the diagram suggests that "Accelerated Thinking" involves branching pathways from a common starting point, with differentiated outcomes tied to a specific cycle of consciousness.
</details>
Figure 4: Flow of accelerated thinking in the GWT-based example
This function also means that information can be exchanged between any pair of modules. VanRullen and Kanai [10] point out that the global workspace functions as a "hub" between specialized modules, and that cycle-consistency learning [25] can be carried out by exchanging information between specialized modules in this way. Cycle-consistency learning is a method that imposes constraints on a model so that consistency is maintained when converting data back and forth: converted data must be restorable to its original state by reversing the conversion, which prevents the loss of content or meaning during the conversion process. A major advantage is that it can learn domain mappings even without paired training data. In this way, the outputs of each specialized module are continuously cross-checked as the Selection-Broadcast Cycle repeats, giving the system as a whole the potential to detect latent inconsistencies, correct errors, and gradually build more reliable processing results.
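The round-trip constraint at the heart of cycle-consistency learning can be illustrated numerically. The forward and backward mappings F and G below are toy assumptions (in practice both would be learned models, as in [25]); the sketch only shows the loss a training procedure would minimize:

```python
# Numeric sketch of the cycle-consistency constraint: a forward conversion F
# and a backward conversion G should round-trip the data without loss.
# F, G, and the toy data are illustrative assumptions.

def F(x):   # hypothetical forward mapping (domain A -> domain B)
    return [2.0 * v + 1.0 for v in x]

def G(y):   # hypothetical inverse mapping (domain B -> domain A)
    return [(v - 1.0) / 2.0 for v in y]

def cycle_consistency_loss(x):
    """Mean absolute error between x and its round trip G(F(x)).
    Training minimizes this so conversions preserve content."""
    x_rec = G(F(x))
    return sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)

x = [0.0, 1.0, -2.5]
print(cycle_consistency_loss(x))  # 0.0, because G exactly inverts F here
```

When F and G are imperfect learned mappings, this loss is nonzero and driving it toward zero is what enforces consistency without paired examples.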
### III-B Experience-Based Adaptation
As noted, in GWT, the information that is sequentially raised in the Global Workspace (Consciousness) through the Selection-Broadcast Cycle is shared with all specialized modules in a stepwise manner. Here, we focus on the point that the serial processing carried out in consciousness enters each specialized module in chronological order. It is thought that there are specialized modules that record such chronological consciousness and store it as experience memory [26]. We can further suppose that such experience memory can be recalled if a similar situation arises. If so, it would become possible to speed up or predict the course of serial processing.
Figure 4 shows an example of a simple Selection-Broadcast Cycle structure with two modules (M1, M2) and one experience memory module (M3). Consciousness(Cycle1), Consciousness(Cycle2), and Consciousness(Cycle3) enter the global workspace in chronological order. Since these conscious contents are broadcast in chronological order, they naturally flow into the experience memory module as well, which retains them as experiences. Then, when Consciousness(Cycle1) is broadcast again, the experience memory module can output Consciousness(Cycle2) and Consciousness(Cycle3) as recalled memories. This makes it possible to reach the output of Consciousness(Cycle3) in two cycles, whereas it previously took three. In this way, the Selection-Broadcast Cycle is expected to enable faster serial processing and prediction. This is similar to the concept of "chunking" [27] in cognitive science: if learned schemas and procedures are stored as a kind of "chunk", then when a similar task arises, that chunk can be recalled all at once to progress quickly through the processing.
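The recall-based shortcut described above can be sketched as a small lookup structure. The class name, the string labels for conscious contents, and the successor-table representation are illustrative assumptions:

```python
# Sketch of an experience memory module: it records the chronological
# sequence of broadcast contents and, on seeing a familiar content again,
# recalls the remembered continuation so later cycles can be skipped.

class ExperienceMemory:
    def __init__(self):
        self.successors = {}  # content -> remembered continuation

    def record(self, sequence):
        # For each broadcast content, store the contents that followed it.
        for i, content in enumerate(sequence):
            self.successors[content] = sequence[i + 1:]

    def recall(self, content):
        # Return the remembered continuation, or nothing if unfamiliar.
        return self.successors.get(content, [])

memory = ExperienceMemory()
# First episode: three cycles are broadcast and recorded in order.
memory.record(["Cycle1", "Cycle2", "Cycle3"])
# Later, Cycle1 is broadcast again; the remaining cycles are recalled at
# once, reaching the result of Cycle3 without re-running Cycles 2 and 3.
print(memory.recall("Cycle1"))  # ['Cycle2', 'Cycle3']
```

This mirrors chunking: the stored continuation acts as a single retrievable unit standing in for several serial steps.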
This mechanism not only increases processing speed but also promotes inference and the anticipation of actions. In other words, while referring to past thought processes, the system can make predictions such as "new information may be lacking at this stage" or "it would be better to activate the sensorimotor module before the logical inference module in the next step", and can adjust the order of module calls and resource allocation in advance accordingly. As a result, each step in the variable serial processing is no longer simple "trial and error", but a planned and efficient process that makes full use of accumulated knowledge. The meta-cognitive decisions made along the way, such as which module to activate at what time and when to update top-down information, are likewise optimized through overall information sharing and memory via the Selection-Broadcast Cycle. In this way, by having a system that can record and utilize the history of its serial processing, a cognitive architecture based on GWT is expected not only to become faster but also to acquire advanced problem-solving capabilities that incorporate reasoning and prediction with an eye on the next move.
There have been several implementations of agent systems that apply experience memory as knowledge (e.g., reasoning and prediction) [28, 29]. For instance, Franklin and colleagues [30] have demonstrated a framework called LIDA (Learning Intelligent Distribution Agent), which builds on GWT to incorporate conscious content into various cognitive modules, including an episodic memory module. In LIDA-based implementations, information that reaches consciousness is not only broadcast to specialized modules but is also chronologically recorded in an episodic (or experience) memory. When a similar situation occurs, the system recalls the sequence of recorded conscious events and applies them as learned knowledge.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Block Diagram: Gateway-Module Communication System
### Overview
This is a technical block diagram illustrating a system with a central gateway (GW), two modules (M1, M2), and an external input, showing directional signal flows between components. A vertical dashed line on the left acts as a boundary for external system interactions.
### Components/Axes
- **Boundary Element**: Vertical dashed line (left side of the diagram, spanning top to bottom).
- **Central Node**: Oval labeled "GW" (Gateway), positioned at the top-center of the diagram.
- **Modules**: Two rectangular blocks:
- "M1" (left module, below GW)
- "M2" (right module, below GW, adjacent to M1)
- **External Input**: Text "External Input1" (blue font, below M1) with a blue upward arrow pointing to M1.
- **Connection Lines (Color-Coded)**:
1. Light green line: Curves from the left dashed boundary to the left side of M1.
2. Red line: Curves from the right side of GW down and right, then left to the right side of M2.
3. Red upward arrow: Connects the top of M1 to the bottom of GW.
4. Light purple line: Horizontal link connecting the top edges of M1 and M2.
### Detailed Analysis
- **Signal Flow Paths**:
1. External Input1 → M1 (blue upward arrow).
2. M1 → GW (red upward arrow).
3. GW → M2 (red curved line).
4. M1 ↔ M2 (light purple horizontal line, bidirectional link).
5. External boundary system → M1 (light green curved line).
- **Component Roles**:
- M1: Primary interface for external input, connects to both GW and M2.
- GW: Central routing node, receives input from M1 and sends output to M2.
- M2: Secondary module, receives data from GW and maintains a direct link with M1.
### Key Observations
- Color coding clearly distinguishes different signal paths (blue for external input, red for gateway-related flows, light green for boundary input, light purple for inter-module communication).
- M1 is the only module receiving direct external input, making it the system's entry point.
- The diagram emphasizes a hierarchical flow: external input → M1 → GW → M2, with a parallel direct link between M1 and M2.
### Interpretation
This diagram likely represents a distributed communication or control system. M1 acts as an edge interface, handling external data and relaying it to a central gateway (GW) for routing. GW then forwards processed data to M2, while the direct M1-M2 link enables low-latency communication between the two modules without gateway mediation. The left boundary line suggests M1 interacts with an external system, making this setup suitable for scenarios where edge modules need to interface with external inputs while coordinating with a central gateway and peer modules.
</details>
Figure 5: Flow of real-time intervention in the GWT-based example
### III-C Immediate Real-Time Adaptation
The Selection-Broadcast Cycle allows real-time intervention in intermediate processing results by external input. Figure 5 shows a simple scenario in which external intervention occurs within a Selection-Broadcast Cycle process with two modules (M1, M2). As shown, external inputs can influence the global workspace's serial processing at any point. For example, if a specialized module detects new, highly significant information, this information can immediately enter the global workspace through the Selection process, which then disseminates it to all other modules via Broadcast. This quick route greatly reduces unnecessary waiting and message passing, and substantially improves real-time system responsiveness.
In practical robotics scenarios, such flexible intervention mechanisms have notable advantages. For instance, imagine a robot performing an assembly task using multiple sensory modules (visual, tactile, auditory). Suppose the robotâs tactile sensor suddenly detects an unexpected slip or instability in its grip. With the Selection-Broadcast Cycle, this critical information is rapidly promoted into the global workspace, interrupting the ongoing processing sequence. Consequently, other modules (e.g., motor control, vision processing, or reinforcement learning) immediately receive this alert and can swiftly initiate corrective actions. This immediate broadcast enables the system to promptly reconsider and revise its gripping strategy from both top-down (strategic re-planning) and bottom-up (sensor-driven adjustments) perspectives, substantially improving safety, precision, and robustness in real-time.
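The interrupt behavior in the assembly example can be sketched with a simple salience-based Selection rule, in which the candidate content with the highest salience wins the workspace each cycle. The salience values, content labels, and the max-salience rule are illustrative assumptions, not the paper's mechanism:

```python
# Sketch of immediate real-time intervention via salience-based Selection:
# a sudden high-salience external input (e.g., a detected slip) preempts
# the planned low-salience processing steps.

def select(candidates):
    """Selection: return the (salience, content) pair with highest salience."""
    return max(candidates, key=lambda c: c[0])

# Planned, low-salience processing steps currently competing for the workspace.
candidates = [(0.3, "vision: next assembly step"), (0.2, "plan: refine grasp")]
print(select(candidates)[1])  # a normal step wins Selection

# A tactile module injects an urgent signal mid-task.
candidates.append((0.9, "tactile: slip detected"))
print(select(candidates)[1])  # the urgent input immediately wins Selection
```

Because Selection runs every cycle, the urgent content is broadcast on the very next cycle, which is what allows all modules to react without any dedicated interrupt wiring between them.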
## IV DISCUSSION
Traditional discussions of GWT's intelligence have predominantly emphasized processing in static, supervised settings, which rely heavily on pre-labeled data sets, explicit instructions, and predefined tasks (e.g., ensemble learning, transfer learning, self-attention, predictive coding). In such scenarios, intelligence manifests primarily as a system's ability to accurately replicate patterns and knowledge derived from historical, structured data. However, the real-world application of artificial intelligence increasingly demands a shift toward dynamic, unsupervised settings, where tasks, environments, and goals continuously evolve, often without explicit guidance or labeled examples.
In dynamic, unsupervised scenarios, intelligent systems face fundamentally different challenges. Rather than relying on historical labels or fixed benchmarks, they must autonomously discover meaningful patterns, adapt swiftly to changing contexts, and continuously learn from ongoing experiences. In this paper, we discussed the strengths of GWT in such real-time processing by focusing on the Selection-Broadcast Cycle. We explained that the Selection-Broadcast Cycle realizes flexible processing, can be accelerated through experience, and can respond immediately to real-time changes. Thus, by highlighting the advantages of the Selection-Broadcast Cycle, this paper extends traditional conceptions of GWT intelligence into the realm of dynamic, unsupervised learning, opening new pathways toward more robust, adaptive, and autonomous artificial intelligence systems capable of thriving in complex real-time environments. Future research could further explore practical implementations and empirical evaluations to validate these theoretical insights and expand the applicability of GWT-based architectures in diverse, real-world scenarios.
Furthermore, although GWT seems well-suited for thriving in the real-time world, one potential way to enhance its adaptability further could involve multiple consciousness (GWT) processes operating in parallel. This parallelization could facilitate the simultaneous exploration of diverse solutions, enhance adaptability by rapidly responding to varied and unpredictable changes, and effectively distribute cognitive load, thereby potentially surpassing the limitations inherent in a single, centralized consciousness structure. Such a mechanism might mirror the collective intelligence observed in groups of humans, suggesting that human societies themselves could be natural exemplars of parallel consciousness networks capable of robust, adaptive decision-making in complex and dynamic environments. For example, Taniguchi [31] is researching the dynamics of such group intelligence and language development.
## V LIMITATIONS AND FUTURE WORK
While the proposed Selection-Broadcast Cycle structure inspired by the Global Workspace Theory (GWT) provides a compelling theoretical framework for adaptive, real-time cognitive architectures, several critical limitations need to be acknowledged and addressed in future work.
One significant limitation of this study is the absence of empirical validation. The advantages of the Selection-Broadcast Cycle, such as dynamic thinking, experience-based acceleration, and immediate real-time responsiveness, remain largely theoretical. Currently, the paper does not present experimental results, simulations, or quantitative analyses to substantiate these claims. Therefore, readers must accept the described benefits without direct evidence of improved adaptability or efficiency compared to other existing methods. To strengthen future iterations of this research, practical implementations such as comparative simulations or robot-based experiments demonstrating fewer task failures or quicker adaptation would be essential.
In particular, the overall effectiveness of the system depends greatly on the quality of the Selection process. Important open questions remain as to whether such effective and sophisticated selection mechanisms can actually be realized in practice. In static environments, there is an example of a Selection process that improves overall system performance by weighting candidates based mainly on internal indicators such as happiness and past experience [32]. Since robust and adaptive Selection processes are essential for putting GWT-based architectures to practical use in real-world applications, future research will need to address this implementation issue in dynamic environments as well.
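One simple way such indicator-based weighting could be sketched is as a weighted sum of internal indicators converted into selection probabilities. The indicator names, the weights, and the softmax rule below are illustrative assumptions, not the mechanism of [32]:

```python
# Sketch of a weighted Selection: internal indicators are combined into a
# salience score per candidate, then normalized into selection probabilities.
import math

def salience(novelty, valence, familiarity, w=(0.5, 0.3, 0.2)):
    # Weighted sum of hypothetical internal indicators.
    return w[0] * novelty + w[1] * valence + w[2] * familiarity

def softmax_select(scores):
    """Turn salience scores into a probability of winning Selection."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two competing candidates with different indicator profiles.
scores = [salience(0.9, 0.2, 0.1), salience(0.1, 0.8, 0.9)]
probs = softmax_select(scores)
print(probs[0] > probs[1])  # True: the more novel candidate is favored
```

A probabilistic rule like this keeps Selection adaptive: low-salience candidates are occasionally chosen, which matters in dynamic environments where the indicator weights themselves may need to be relearned.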
## VI CONCLUSIONS
In this paper, we explored the potential of the Global Workspace Theory (GWT) and, in particular, the Selection-Broadcast Cycle, as an information processing architecture suitable for dynamic, unsupervised real-time environments. Traditional approaches to artificial intelligence often rely heavily on structured, labeled data, where intelligence primarily involves replicating known patterns. However, real-world applications require systems that can continuously adapt and respond to evolving tasks, environments, and goals. In this context, we highlighted the Selection-Broadcast Cycle's strengths: its flexibility to rearrange module execution order dynamically, its capability for acceleration through experience-driven predictions, and its responsiveness to immediate real-time inputs.
Our hypothesis suggests that a cognitive architecture based on GWT and, specifically, the Selection-Broadcast Cycle, provides a robust framework for dynamic decision-making and rapid adaptation in complex environments. The ability to rearrange processing sequences dynamically, accelerate learning through experience-based memory, and intervene swiftly in response to changing conditions positions GWT-based architectures to effectively handle the challenges posed by real-time intelligence.
A critical unresolved question remains the practical feasibility of implementing robust and adaptive Selection mechanisms in real-world systems. Future research must address this challenge, potentially through integrating machine learning techniques and advanced evaluative frameworks, to further validate and extend the applicability of GWT-based architectures. By tackling these challenges, we can move closer to developing truly autonomous, flexible artificial intelligence systems capable of thriving in the complexities and uncertainties of the real-time world.
## References
- [1] D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, "Neuroscience-inspired artificial intelligence," Neuron, vol. 95, no. 2, pp. 245–258, 2017.
- [2] M. K. Ho and T. L. Griffiths, "Cognitive science as a source of forward and inverse models of human decisions for robotics and control," Annual Review of Control, Robotics, and Autonomous Systems, vol. 5, no. 1, pp. 33–53, 2022.
- [3] I. Kotseruba and J. K. Tsotsos, "40 years of cognitive architectures: core cognitive abilities and practical applications," Artificial Intelligence Review, vol. 53, no. 1, pp. 17–94, 2020.
- [4] A. Ajay, S. Han, Y. Du, S. Li, A. Gupta, T. Jaakkola, J. Tenenbaum, L. Kaelbling, A. Srivastava, and P. Agrawal, "Compositional foundation models for hierarchical planning," Advances in Neural Information Processing Systems, vol. 36, pp. 22304–22325, 2023.
- [5] B. Liu, X. Li, J. Zhang, J. Wang, T. He, S. Hong, H. Liu, S. Zhang, K. Song, K. Zhu, et al., "Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems," arXiv preprint arXiv:2504.01990, 2025.
- [6] B. J. Baars, "Global workspace theory of consciousness: toward a cognitive neuroscience of human experience," Progress in Brain Research, vol. 150, pp. 45–53, 2005.
- [7] S. Dehaene and L. Naccache, "Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework," Cognition, vol. 79, no. 1–2, pp. 1–37, 2001.
- [8] G. A. Mashour, P. Roelfsema, J.-P. Changeux, and S. Dehaene, "Conscious processing and the global neuronal workspace hypothesis," Neuron, vol. 105, no. 5, pp. 776–798, 2020.
- [9] A. Juliani, K. Arulkumaran, S. Sasai, and R. Kanai, "On the link between conscious function and general intelligence in humans and machines," Transactions on Machine Learning Research, 2022, Survey Certification.
- [10] R. VanRullen and R. Kanai, "Deep learning and the global workspace theory," Trends in Neurosciences, vol. 44, no. 9, pp. 692–704, 2021.
- [11] T. Lesort, V. Lomonaco, A. Stoian, D. Maltoni, D. Filliat, and N. Díaz-Rodríguez, "Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges," Information Fusion, vol. 58, pp. 52–68, 2020.
- [12] K. Shaheen, M. A. Hanif, O. Hasan, and M. Shafique, "Continual learning for real-world autonomous systems: Algorithms, challenges and frameworks," Journal of Intelligent & Robotic Systems, vol. 105, no. 1, p. 9, 2022.
- [13] S. Dehaene, L. Naccache, L. Cohen, D. L. Bihan, J.-F. Mangin, J.-B. Poline, and D. Rivière, "Cerebral mechanisms of word masking and unconscious repetition priming," Nature Neuroscience, vol. 4, no. 7, pp. 752–758, 2001.
- [14] R. Gaillard, S. Dehaene, C. Adam, S. Clémenceau, D. Hasboun, M. Baulac, L. Cohen, and L. Naccache, "Converging intracranial markers of conscious access," PLoS Biology, vol. 7, no. 3, p. e1000061, 2009.
- [15] I. Ito, T. Ito, J. Suzuki, and K. Inui, "Investigating the effectiveness of multiple expert models collaboration," in Findings of the Association for Computational Linguistics: EMNLP 2023, 2023, pp. 14393–14404.
- [16] G. A. Wiggins, "Crossing the threshold paradox: Modelling creative cognition in the global workspace," in International Conference on Computational Creativity, 2012, pp. 180–187.
- [17] R. Polikar, "Ensemble learning," Ensemble Machine Learning: Methods and Applications, pp. 1–34, 2012.
- [18] G. K. Stojanov and B. Indurkhya, "Creativity and cognitive development: The role of perceptual similarity and analogy," in AAAI Spring Symposium: Creativity and (Early) Cognitive Development, 2013.
- [19] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, "A survey on deep transfer learning," in Artificial Neural Networks and Machine Learning – ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4–7, 2018, Proceedings, Part III. Springer, 2018, pp. 270–279.
- [20] R. F. J. Dossa, K. Arulkumaran, A. Juliani, S. Sasai, and R. Kanai, "Design and evaluation of a global workspace agent embodied in a realistic multimodal environment," Frontiers in Computational Neuroscience, vol. 18, p. 1352685, 2024.
- [21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, pp. 6000–6010, 2017.
- [22] K. Friston and S. Kiebel, "Predictive coding under the free-energy principle," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1521, pp. 1211–1221, 2009.
- [23] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al., "Chain-of-thought prompting elicits reasoning in large language models," Advances in Neural Information Processing Systems, vol. 35, pp. 24824–24837, 2022.
- [24] T. Shanahan, "Deductive and inductive arguments," The Internet Encyclopedia of Philosophy, ISSN 2161-0002, https://iep.utm.edu/, accessed 2025-03-05.
- [25] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
- [26] S. Franklin, B. J. Baars, U. Ramamurthy, and M. Ventura, "The role of consciousness in memory," Brains, Minds and Media, vol. 2005, no. 1, 2005.
- [27] F. Gobet, P. C. Lane, S. Croker, P. C. Cheng, G. Jones, I. Oliver, and J. M. Pine, "Chunking mechanisms in human learning," Trends in Cognitive Sciences, vol. 5, no. 6, pp. 236–243, 2001.
- [28] J. Laird, K. Kinkade, S. Mohan, and J. Xu, "Cognitive robotics using the Soar cognitive architecture," AAAI Workshop – Technical Report, pp. 46–54, 2012.
- [29] L. Martin, J. H. Rosales, K. Jaime, and F. Ramos, "Affective episodic memory system for virtual creatures: The first step of emotion-oriented memory," Computational Intelligence and Neuroscience, vol. 2021, no. 1, p. 7954140, 2021.
- [30] S. Franklin, T. Madl, S. D'Mello, and J. Snaider, "LIDA: A systems-level architecture for cognition, emotion, and learning," IEEE Transactions on Autonomous Mental Development, vol. 6, no. 1, pp. 19–41, 2013.
- [31] T. Taniguchi, "Collective predictive coding hypothesis: Symbol emergence as decentralized Bayesian inference," Frontiers in Robotics and AI, vol. 11, p. 1353870, 2024.
- [32] E. C. Garrido-Merchán, M. Molina, and F. M. Mendoza-Soto, "A global workspace model implementation and its relations with philosophy of mind," Journal of Artificial Intelligence and Consciousness, vol. 9, no. 1, pp. 1–28, 2022.