# Hypothesis on the Functional Advantages of the Selection-Broadcast Cycle Structure: Global Workspace Theory and Dealing with a Real-Time World
**Authors**: Junya Nakanishi, Jun Baba, Yuichiro Yoshikawa, Hiroko Kamide, and Hiroshi Ishiguro
> *This work was not supported by any organization.*
> ¹Junya Nakanishi, Yuichiro Yoshikawa, and Hiroshi Ishiguro are with the Graduate School of Engineering Science, The University of Osaka, Osaka 560-0043, Japan. ²Jun Baba is with AI Lab, CyberAgent Inc., Tokyo 150-0042, Japan. ³Hiroko Kamide is with the Faculty of Law, Kyoto University, Kyoto 606-8501, Japan.
## Abstract
This paper discusses the functional advantages of the Selection-Broadcast Cycle structure proposed by Global Workspace Theory (GWT), inspired by human consciousness, particularly focusing on its applicability to artificial intelligence and robotics in dynamic, real-time scenarios. While previous studies often examined the Selection and Broadcast processes independently, this research emphasizes their combined cyclic structure and the resulting benefits for real-time cognitive systems. Specifically, the paper identifies three primary benefits: Dynamic Thinking Adaptation, Experience-Based Adaptation, and Immediate Real-Time Adaptation. This work highlights GWT's potential as a cognitive architecture suitable for sophisticated decision-making and adaptive performance in unsupervised, dynamic environments. It suggests new directions for the development and implementation of robust, general-purpose AI and robotics systems capable of managing complex, real-world tasks.
## I INTRODUCTION
In recent years, a major research theme in the fields of artificial intelligence (AI), robotics, and cognitive science has been how to implement the advanced intelligence and flexible problem-solving abilities of humans and animals in artificial systems [1, 2]. With the technical advances in machine learning (most notably deep learning) and the heightened performance of hardware in robotics, there has been growing interest in "multimodal" and "parallel" architectures that carry out tasks while simultaneously leveraging multiple cognitive functions [3, 4]. However, even if several specialized modules (e.g., vision, language, logical reasoning, motor control) each have excellent capabilities, many questions about how to organize information exchange and control among modules operating simultaneously in parallel remain unresolved [5].
Against this background, the Global Workspace Theory (GWT), which was devised by imitating human consciousness, is attracting attention. GWT characterizes "consciousness" from the perspective of information processing structure and proposes a framework in which information that has competed and been integrated among numerous parallel specialized modules is temporarily brought "into consciousness" and then shared system-wide [6]. Since it was first proposed by the psychologist Bernard Baars, GWT has been linked to many empirical findings in neuroscience and cognitive science [7, 8]. More recently, its advantages as an information processing architecture have begun to attract attention in AI research as well. Previous GWT research suggests that the "Selection" process, which integrates information among multiple parallel specialized modules, and the "Broadcast" process, which disseminates the selected information throughout the system, are expected to be effective for a wide range of functions, including creative thinking, transfer learning, top-down control, and attention allocation [8, 9, 10]. However, in many of these discussions, "Selection" and "Broadcast" are treated separately, and the effectiveness of these two processes being executed in parallel and intermittently is not fully addressed.
In this paper, we call the process of exchanging information through "Selection" and "Broadcast" the "Selection-Broadcast Cycle", and focus on it. In the Selection-Broadcast Cycle, we consider information processing that has a time dimension: not a single, one-shot computation, but a series of processing steps, such as responding to an environment that changes over time or taking time to search for an answer. Such time-extended information processing is an important research topic in robotics, where real-time processing is required, and in artificial intelligence systems that handle complex tasks requiring long-term learning and adaptation [11, 12]. For instance, for continuous tasks that span a period of time, a robot will inevitably need to change its approach during interactions with humans. Moreover, sensor data are updated moment by moment, and task goals or external conditions may change depending on the situation. Therefore, there is a need for a real-time processing framework that can dynamically decide "when and which module to call upon" in an online setting and swiftly reflect the results in the next step.
Accordingly, this paper focuses on the Selection-Broadcast Cycle structure proposed in GWT and discusses the functional advantages its dynamic, cyclic structure offers from the perspective of applying it to the design of real AI and robotic systems. Specifically, we highlight:

- **Dynamic Thinking Adaptation**: a capacity to dynamically rearrange module execution order, thereby enabling flexible adaptation to unexpected task changes or evolving goals
- **Experience-Based Adaptation**: an acceleration of conscious processing by exploiting past experiences stored in memory modules, facilitating faster predictions and decision-making
- **Immediate Real-Time Adaptation**: a quick intervention route into conscious processing that allows immediate response to real-time changes
Our aim is to theoretically clarify "why such a structure is useful for real-time intelligent systems." By doing so, we hope to offer fresh insights into the design philosophy and implementation guidelines of cognitive architectures based on GWT and contribute to the development of robust, general-purpose AI and robotic systems capable of adapting to complex tasks and unknown environments.
## II LITERATURE REVIEW
### II-A Overview of GWT
*(Figure: a block diagram in which a Sensor feeds Module1 and a Memory feeds Module2, with Module1 also feeding Module2; both modules feed a Selector that determines the output. Dashed boxes group Sensor with Module1 and Memory with Module2.)*
Figure 1: Architecture of the Global Workspace Theory
The Global Workspace Theory (GWT) is a cognitive science theory of information processing in consciousness, proposed by the psychologist Bernard Baars [6]. The essence of GWT is a framework in which information competes and is integrated among many specialized modules (e.g., vision, hearing, memory, language) that operate in parallel, and the information that eventually wins is then shared among all modules (Figure 1). The winning information is temporarily retained in a conscious form within a memory area called the "global workspace". Only a limited amount of information can win at a time, and the other competing information is considered to be processed unconsciously in the background. In this way, GWT is positioned as a framework to explain the interaction between a serial, limited-capacity conscious process and parallel, large-capacity unconscious processes. This model is supported by numerous experimental findings [7, 8]. For example, in brain imaging studies (e.g., fMRI, PET, EEG), consciously processed stimuli engage extensive regions of the brain, including the frontal and parietal lobes, exhibiting recurrent signaling, whereas stimuli not reaching conscious awareness (i.e., unconscious processing) remain confined to local, transient activity [13, 14]. This is consistent with the mechanism proposed by GWT: once some piece of information wins, it is broadcast globally to the entire system.
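The competition-and-sharing step just described can be sketched in a few lines of code. This is a minimal illustrative sketch, not an implementation of GWT: the module names, salience scores, and contents are invented, and a real architecture would compute salience rather than hard-code it.

```python
def selection(proposals):
    """Selection: the proposal with the highest salience wins the workspace."""
    return max(proposals, key=lambda p: p["salience"])

def broadcast(winner, modules):
    """Broadcast: the winning content is shared with every module."""
    for m in modules:
        m["last_broadcast"] = winner["content"]

modules = [
    {"name": "vision", "last_broadcast": None},
    {"name": "hearing", "last_broadcast": None},
]
proposals = [  # each module's unconscious, parallel output with a salience score
    {"module": "vision", "salience": 0.9, "content": "red cup at (2, 3)"},
    {"module": "hearing", "salience": 0.4, "content": "door closing"},
]

winner = selection(proposals)   # only one item enters the workspace at a time
broadcast(winner, modules)      # ...and becomes globally available to all modules
```

Note how the losing proposal ("door closing") is simply not propagated, mirroring the claim that non-winning information stays confined to local, unconscious processing.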
On the other hand, GWT mainly deals with "What information processing structures do we use?", so it does not provide a direct answer to the question of "Why did we arrive at this kind of information processing structure?". From the biological and evolutionary perspective, we can address this question by considering how such a structure might have provided adaptive advantages in terms of survival and reproduction [9]. In previous research, the focus has often been placed on the part of GWT's information processing structure related to competing and integrating information among multiple specialized modules operating in parallel (Selection process) and on the part that shares the selected information with the entire system (Broadcast process), and the advantages and benefits of these have been discussed.
### II-B Functional Advantages of Selection
In this paper, the process of selecting information from among the outputs processed in parallel by multiple specialized modules, and integrating it in the global workspace, is called the "Selection" process.
#### II-B 1 Diverse Perspectives
By comparing and examining the outputs of multiple specialized modules, it is thought that it will be possible to generate a wider variety of solutions and ideas for a given task [15, 16]. For instance, if both a visual module and a language module are operating simultaneously, approaches that capture a problem from a pictorial/imaginative viewpoint can be compared with those that capture it from a linguistic/logical viewpoint. This concept is akin to the notion of "ensemble learning" [17]: by combining multiple models or modules with different specializations, they can complement the diverse aspects that a single model alone would not capture, thereby producing higher predictive accuracy and robustness overall.
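The ensemble analogy can be made concrete with a toy majority vote; the three "module" predictions below are invented for illustration:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the outputs of several specialized 'modules' by majority
    vote, so individual blind spots are compensated by the ensemble."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical classifiers examine the same input; one is mistaken:
votes = ["cat", "cat", "dog"]
label = majority_vote(votes)
```

A single mistaken module is outvoted, which is the sense in which combining diverse specialists yields robustness overall.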
Furthermore, the mechanism that integrates multiple parallel modules enables unexpected combinations of knowledge and skills from each module, which is thought to lead to creative thinking [10, 16]. For example, imagine a module responsible for visual thinking, inspired by metaphorical expressions provided by a language processing module, giving rise to a new diagram or prototype, which is then validated by a logical reasoning module. Alternatively, a module specializing in reinforcement learning might combine with a sensorimotor moduleâs proposed action strategy, leading to previously unanticipated solutions or task-execution procedures. The process of generating these incidental or divergent ideas and then evaluating, narrowing down, and integrating them is considered by many to be at the core of creative thinking [18].
#### II-B 2 Transfer Learning
When faced with a new task, utilizing the skills already acquired in the specialized modules reduces the need to learn from scratch, and as a result, it is thought that the efficiency and speed of learning will improve [10, 16]. For instance, if there are modules that excel in visual recognition, language processing, or logical reasoning and each is independently trained, then when facing a new domain or a different task, it becomes possible to adapt quickly by making use of the knowledge and representations already accumulated in these modules. This is analogous to "transfer learning" [19] in machine learning. In fact, when adapting a deep neural network learned in one domain (source domain) to another domain (target domain), reusing the lower-level feature extraction parts shortens the early training phase while still delivering high performance.
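Schematically, this reuse pattern looks like the following sketch; the layer names and the freeze/retrain split are illustrative assumptions, not tied to any particular network or framework:

```python
# A schematic view of transfer learning: a pretrained feature extractor is
# frozen and reused, and only a task-specific head is re-learned.
pretrained = {
    "conv1": {"trainable": False},  # low-level features: reused as-is
    "conv2": {"trainable": False},  # mid-level features: reused as-is
    "head":  {"trainable": True},   # task-specific layer: re-learned
}

# Only the head is updated when adapting to the target domain, which is
# why the early training phase is shortened.
trainable_layers = [name for name, layer in pretrained.items()
                    if layer["trainable"]]
```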
### II-C Functional Advantages of Broadcast
In this paper, the process of sharing selected information with all specialized modules is called the "Broadcast" process.
#### II-C 1 Shared Attention
It is thought that broadcasting allows each specialized module to concentrate its resources on information that is deemed extremely important according to the current goals and environmental conditions, thereby improving the efficiency and accuracy of task execution [16, 20]. For example, consider a robot endowed with multiple sensory modules for vision, hearing, and touch, which is tasked with detecting, identifying, and accurately grasping an object. First, the visual module, operating unconsciously, generates multiple candidates, performing tasks such as location estimation and object classification in parallel. Meanwhile, the hearing module tries to gather hints from environmental sounds or voice commands that could modify actions. The tactile module prepares feedback control for the stage at which the robot actually grasps the object. After the information generated by each module is integrated by the Selection process, if the decision "to combine accurate location estimation from the visual module with minor corrective commands from auditory instructions" wins, that information is shared with all modules via the Broadcast function. As a result, the robot can carry out the plan "move the arm toward the coordinates estimated visually, corrected by auditory information" in coordination across all modules.
This mechanism seems to be highly relevant to the "Transformer architecture" [21]. Transformers, which demonstrate extremely high performance in various tasks such as natural language processing and image recognition, have a core mechanism known as "self-attention". In self-attention, the inputs (or feature vectors) compute their mutual relevance, enabling the network as a whole to incorporate necessary contextual information. This mechanism is akin to GWT's claim of handling diverse information while spotlighting important items and sharing them throughout the system. Though the Transformer was not initially designed with the goal of mimicking consciousness, the fact that it achieves such high performance in language processing, image recognition, and more by sharing important information hints at the fundamental usefulness of a strategy that shares the most crucial elements globally in an intelligent system.
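The spotlight-and-share behavior can be illustrated with a toy single-head self-attention over scalar "feature vectors". This is a drastic simplification of the actual mechanism in [21] (no learned projections, no multiple heads, scalar tokens), intended only to show the relevance-weighted mixing:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Toy single-head self-attention: each token scores its relevance to
    every token (query*key), turns the scores into weights, and mixes the
    values accordingly. Queries, keys, and values are all the raw tokens."""
    mixed = []
    for q in tokens:
        scores = [q * k for k in tokens]    # pairwise relevance
        weights = softmax(scores)           # spotlight the relevant items
        mixed.append(sum(w * v for w, v in zip(weights, tokens)))
    return mixed

mixed = self_attention([1.0, 2.0, 3.0])     # each output blends all inputs
```

Every output is a blend of all inputs weighted by relevance, which is the sense in which important context is shared across the whole sequence.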
#### II-C 2 Predictive Coding
Among the specialized modules, there are those that receive data from sensors (e.g., visual, auditory, tactile). If they receive predictions or metacognition as broadcast information, it may enhance the performance of the module's output [10, 16]. For example, when the visual module is only processing lower-level features such as raw pixel data and edge information, it will only output tentative recognition results based on local statistics and pattern recognition. However, when higher-level context and objectives such as "this scene is outdoors and there is a high possibility that there are multiple people in the picture" and "the task is to judge the facial expressions of specific people" are broadcast from the global workspace, the visual module can re-evaluate its output while referring to these predictions and hypotheses. As a result, corrections such as prioritizing resolution and regions of interest appropriate for the task, or searching more carefully for clues to separate people from the background, can be expected to improve recognition performance and reduce false positives. This aligns closely with the concept of "predictive coding" [22] often discussed in neuroscience and cognitive science. Predictive coding posits that the brain or cognitive system constantly sends top-down predictions from higher (i.e., more advanced) modules to lower (i.e., more basic) modules, while the lower-level modules calculate and return the discrepancy (prediction error) between the actual sensory input and the prediction. If the discrepancy is large, it implies that something different from the predictions is likely in the scene, and this error is returned upstream so that the higher-level module can update or generate new predictions. If the discrepancy is small, it implies that the prediction and actual data largely match, increasing the likelihood that the scene really is as observed. Through repeated mutual interplay between top-down predictions and bottom-up prediction errors, the entire perception and cognition system dynamically adapts to the environment.
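A minimal numerical sketch of this top-down/bottom-up loop follows; the scalar signals and the fixed gain constant are illustrative assumptions, not part of any specific predictive-coding model:

```python
def predictive_coding_step(prediction, sensory_input, gain=0.5):
    """One top-down/bottom-up exchange: the lower level computes the
    prediction error, and the higher level updates its prediction by a
    fraction (gain) of that error."""
    error = sensory_input - prediction       # bottom-up prediction error
    prediction = prediction + gain * error   # top-down model update
    return prediction, error

prediction = 0.0     # the higher module's initial guess
observation = 10.0   # the actual sensory input (held fixed here)
for _ in range(20):
    prediction, error = predictive_coding_step(prediction, observation)
# the prediction converges toward the observation; the error shrinks each cycle
```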
## III HYPOTHESIS
In this paper, in addition to the structural advantages from each of the traditional GWT perspectives (Selection and Broadcast), we newly focus on the advantage of a cycle structure in which information processing occurs through Selection and Broadcast (Selection-Broadcast Cycle). Within this cycle structure, we discuss the dynamic, stepwise information processing in which Selection and Broadcast intertwine in parallel and intermittently.
### III-A Dynamic Thinking Adaptation
The Selection-Broadcast Cycle possesses a structure that can realize serial processing steps of specialized modules in any order. The serial processing referred to here means processing that is carried out step by step (e.g., a chain of thought [23], inductive and deductive reasoning [24]). In contrast to parallel processing, in which multiple modules operate simultaneously, serial processing involves processing carried out in order, with the information generated or selected by one module being passed on as input to the next module. In serial processing, the final answer is derived from the inferences and logical development that take place in the intermediate processing. This process of deriving conclusions in steps allows reliable problem solving and decision making for various complex tasks with a small number of inference rules and little logical knowledge. For example, by simply memorizing the results of addition and multiplication of the digits 0 to 9 and the methodology of longhand arithmetic, you can calculate any addition or multiplication of integers (e.g., 11×2 = 10×2 + 1×2). In this way, by breaking down complex tasks into simpler sub-tasks (i.e., tasks that can be processed using limited memory or simple rules) and dealing with them in stages, it is possible to deal with a wide range of different tasks using relatively little memory capacity.
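The longhand-arithmetic example can be written out as code: only the memorized single-digit times table plus place-value decomposition is needed to multiply any integer by a digit. A minimal sketch:

```python
# Only the single-digit products 0*0 .. 9*9 are "memorized";
# place-value decomposition handles everything larger.
TABLE = {(i, j): i * j for i in range(10) for j in range(10)}

def multiply_by_digit(n, m):
    """Multiply a non-negative integer n by a single digit m using only
    the memorized table and place-value decomposition, as in longhand
    arithmetic (e.g., 11*2 = 10*2 + 1*2)."""
    total, place = 0, 1
    while n > 0:
        digit = n % 10
        total += TABLE[(digit, m)] * place  # single-digit product, shifted
        n //= 10
        place *= 10
    return total

result = multiply_by_digit(11, 2)
```

A constant-size memory (the 100-entry table) plus a simple stepwise rule covers an unbounded family of problems, which is exactly the trade-off the paragraph describes.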
*(Figure: a Broadcast arrow enters the global workspace GW; a Selection arrow splits from GW toward modules M1 and M2, with a feedback arrow from M2 back to GW.)*
Figure 2: Example of GWT-based structure with two modules
*(Figure: an external input enters M1; arrows run from M1 up to GW and from GW down to M2, and a loop from M2 back around the diagram indicates the repeating cycle.)*
Figure 3: Flow of pipeline and GWT process in the GWT-based example
The Selection-Broadcast Cycle process has a space where such intermediate inferences and logical developments can be freely performed. Figure 2 shows an example of a simple Selection-Broadcast Cycle structure with two modules (M1, M2). The upper part of Figure 3 shows an example of an execution procedure over the modules, and the lower part shows the processing flow that realizes that execution procedure in the Selection-Broadcast Cycle. As the figure shows, the Selection-Broadcast Cycle process can execute any execution procedure over the modules by switching the selection appropriately. To implement such a vast serial processing space for intermediate inferences and logical development as a fixed pipeline, a large tree structure made up of a large number of modules would be necessary. The Selection-Broadcast Cycle instead realizes this space with a minimal number of modules by using looped information processing.
Furthermore, this function enables flexible and dynamic processing, allowing the system to try out all kinds of thought processes and change them in response to changes in the situation. This is a great advantage when dealing with situations that are difficult to handle with a fixed pipeline process, such as when the processing procedure is unclear or the goal changes partway through. For example, consider the case where a robot explores a room based on information from multiple sensors (vision, touch, audio input, etc.). At the start of the search, the main objective is to find and follow the shortest route, so the processing is set up to call the object detection module and the route planning module in order. During the search, however, the robot repeatedly collides with people in the room along the route. In this case, the Selection-Broadcast Cycle makes it possible to share the problem with the whole system, devise a solution, and change the processing, for example, by calling a human detection module while planning a route. Also, if a voice instruction is received and the content of the instruction changes, it is possible to call a voice recognition module to share the analysis results with the whole system, and then reconfigure the execution order of the visual module and route planning module in response. Thanks to this variable serial processing, the order in which the necessary specialized modules are called can be flexibly rearranged in response to changes in the situation or new goals, making it possible to accomplish tasks that would be difficult with fixed pipeline processing.
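The robot example can be sketched as a routing rule that picks the next module from the current workspace contents. The module names, their stub outputs, and the rule itself are hypothetical simplifications meant only to show how the call order rearranges itself around a broadcast event:

```python
def detect_objects(ws):
    return dict(ws, objects=["chair"], last="detect_objects")

def plan_route(ws):
    return dict(ws, route=["A", "B"], last="plan_route")

def detect_humans(ws):
    return dict(ws, humans=["person"], last="detect_humans")

def select_next(ws):
    """Routing rule: a broadcast collision event pulls the human-detection
    module in before anything else; otherwise follow detect -> plan."""
    if ws.get("collision") and "humans" not in ws:
        return detect_humans
    if "objects" not in ws:
        return detect_objects
    if "route" not in ws:
        return plan_route
    return None  # nothing left to select: the task is done

workspace = {"collision": True}  # a collision was just broadcast
trace = []
while (module := select_next(workspace)) is not None:
    workspace = module(workspace)  # the result is "broadcast" into the workspace
    trace.append(workspace["last"])
```

Because the next module is chosen per cycle from shared state rather than fixed in advance, the collision event reorders the sequence without any change to the pipeline definition.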
*(Figure: labeled "Accelerated Thinking" and "Consciousness (Cycle1)"; one pathway runs from M1 to M2, and a second, highlighted pathway runs from M1 to M3.)*
Figure 4: Flow of accelerated thinking in the GWT-based example
This function also means that information can be exchanged between any pair of modules. VanRullen and Kanai [10] point out that the global workspace functions as a "hub" between specialized modules, and that cycle-consistency learning [25] can be carried out by exchanging information back and forth between specialized modules. Cycle-consistency learning is a learning method that imposes constraints on the model to maintain consistency when converting data back and forth. These constraints ensure that converted data can be restored to its original state by reversing the conversion, and prevent the loss of content or meaning during the conversion process. A major advantage is that it can learn domain mappings even without paired training data. In this way, the outputs of each specialized module are continuously cross-checked by repeating the Selection-Broadcast Cycle, and the entire system has the potential to detect latent inconsistencies, correct errors, and gradually build more reliable processing results.
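The round-trip constraint at the heart of cycle-consistency can be illustrated with a pair of hand-written converters standing in for learned mappings; the temperature example is purely illustrative (in cycle-consistency learning the mappings are learned and the residual is minimized as a loss):

```python
def to_celsius(f):
    """Forward mapping: Fahrenheit -> Celsius."""
    return (f - 32.0) * 5.0 / 9.0

def to_fahrenheit(c):
    """Backward mapping: Celsius -> Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

def cycle_consistency_error(x, forward, backward):
    """Round-trip x through both mappings; a large residual means the
    pair of conversions is losing information."""
    return abs(backward(forward(x)) - x)

err = cycle_consistency_error(72.0, to_celsius, to_fahrenheit)  # near zero
```

Driving this residual toward zero for every sample is what lets a pair of mappings be trained without paired examples: consistency of the round trip substitutes for ground-truth correspondences.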
### III-B Experience-Based Adaptation
As noted, in GWT the information that is sequentially raised into the Global Workspace (consciousness) through the Selection-Broadcast Cycle is shared with all specialized modules step by step. Here, we focus on the fact that the serial processing carried out in consciousness enters each specialized module in chronological order. Some specialized modules are thought to record this chronological stream of consciousness and store it as experience memory [26]. We can further suppose that such experience memory can be recalled when a similar situation arises. If so, it would become possible to speed up, or predict, the course of serial processing.
Figure 4 shows an example of a simple Selection-Broadcast Cycle structure with two modules (M1, M2) and one experience memory module (M3). Consciousness(Cycle1), Consciousness(Cycle2), and Consciousness(Cycle3) arise in the global workspace in chronological order. Because these conscious contents are broadcast in that order, they also flow into the experience memory module, which retains them as experience. Then, when Consciousness(Cycle1) is broadcast again, the experience memory module can output Consciousness(Cycle2) and Consciousness(Cycle3) as recalled memories. This makes it possible to reach the output of Consciousness(Cycle3) in two cycles, whereas three cycles were required the first time. In this way, the Selection-Broadcast Cycle is expected to enable faster serial processing and prediction. This is similar to the concept of "chunking" [27] in cognitive science: if learned schemas and procedures are stored as a kind of "chunk", then when a similar task arises, that chunk can be recalled all at once to progress quickly through the processing.
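The cycle-skipping behavior just described can be sketched as a toy experience-memory module. The class and method names below are illustrative assumptions, not the paper's implementation: the module records broadcast contents in order, and when a previously seen content recurs it recalls the recorded continuation at once.

```python
# Toy sketch of the experience-memory speed-up: an experience module
# records broadcast conscious contents chronologically, and when a
# known content recurs it recalls the continuation, letting the system
# reach the final result in fewer Selection-Broadcast cycles.

class ExperienceMemory:
    def __init__(self):
        self.history = []  # chronological record of broadcast contents

    def observe(self, content):
        self.history.append(content)

    def recall_continuation(self, content):
        """Return the recorded contents that followed `content`, if any."""
        if content in self.history:
            i = self.history.index(content)
            return self.history[i + 1:]
        return []

memory = ExperienceMemory()

# First episode: three full Selection-Broadcast cycles are needed,
# and each broadcast also flows into the experience memory module.
for conscious_content in ["Cycle1", "Cycle2", "Cycle3"]:
    memory.observe(conscious_content)

# Second episode: broadcasting "Cycle1" again lets the memory module
# output the remainder as a recalled "chunk" in a single step.
recalled = memory.recall_continuation("Cycle1")  # ["Cycle2", "Cycle3"]
```

Recalling the continuation as one chunk is what shortens the second episode from three cycles to two, mirroring the chunking analogy above.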
This mechanism not only increases processing speed but also promotes inference and the anticipation of actions. While referring to past thought processes, the system can make predictions such as "new information may be lacking at this stage" or "it would be better to activate the sensorimotor module before the logical inference module in the next step", and can adjust the order of module calls and resource allocation in advance accordingly. As a result, each step in the variable serial processing is no longer simple trial and error but a planned, efficient process that makes full use of accumulated knowledge. The meta-cognitive decisions made during this process, such as which module to activate at what time and when to update top-down information, are likewise optimized through system-wide information sharing and memory via the Selection-Broadcast Cycle. By maintaining and exploiting a record of serial processing, a GWT-based cognitive architecture is expected not only to speed up but also to acquire advanced problem-solving capabilities that incorporate reasoning and prediction with an eye on the next move.
There have been several implementations of agent systems that apply experience memory as knowledge (e.g., reasoning and prediction) [28, 29]. For instance, Franklin and colleagues [30] have demonstrated a framework called LIDA (Learning Intelligent Distribution Agent), which builds on GWT to incorporate conscious content into various cognitive modules, including an episodic memory module. In LIDA-based implementations, information that reaches consciousness is not only broadcast to specialized modules but is also chronologically recorded in an episodic (or experience) memory. When a similar situation occurs, the system recalls the sequence of recorded conscious events and applies them as learned knowledge.
<details>
<summary>x5.png Details</summary>

### Visual Description
The diagram illustrates the flow of real-time intervention with two modules (M1, M2) and the global workspace (GW, drawn as a circle). The system is enclosed in a green rectangular border, with a dashed grey line on the left marking the boundary with the external environment. External Input1 feeds into M1 (green arrow); M1 passes its result to M2 (black arrow); M2 acts on the external environment, which in turn feeds back into GW and from GW to M1 (red arrows), forming a closed feedback loop. GW thus acts as the central hub: information returning from the environment is promoted into the workspace and rebroadcast to M1, so external events can intervene in the ongoing serial processing at any point.
</details>
Figure 5: Flow of real-time intervention in the GWT-based example
### III-C Immediate Real-Time Adaptation
The Selection-Broadcast Cycle allows external input to intervene in intermediate processing results in real time. Figure 5 shows a simple scenario in which external intervention occurs within a Selection-Broadcast Cycle process with two modules (M1, M2). As shown, external inputs can influence the global workspace's serial processing at any point. For example, if a specialized module detects new, highly significant information, that information can immediately enter the global workspace through the Selection process and is then disseminated to all other modules via Broadcast. This quick route greatly reduces unnecessary waiting time and message passing, substantially improving real-time system responsiveness.
In practical robotics scenarios, such flexible intervention mechanisms have notable advantages. For instance, imagine a robot performing an assembly task using multiple sensory modules (visual, tactile, auditory). Suppose the robot's tactile sensor suddenly detects an unexpected slip or instability in its grip. With the Selection-Broadcast Cycle, this critical information is rapidly promoted into the global workspace, interrupting the ongoing processing sequence. Other modules (e.g., motor control, vision processing, or reinforcement learning) immediately receive this alert and can swiftly initiate corrective actions. This immediate broadcast enables the system to promptly reconsider and revise its gripping strategy from both top-down (strategic re-planning) and bottom-up (sensor-driven adjustment) perspectives, substantially improving safety, precision, and robustness in real time.
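The intervention path in the assembly scenario can be sketched as salience-based Selection: candidate contents compete, the most salient one wins each cycle, and a sudden high-salience sensor event preempts the routine sequence on the very next cycle. The salience values, content strings, and class design below are illustrative assumptions.

```python
# Hedged sketch of real-time intervention via the Selection-Broadcast
# Cycle: candidates compete by salience, Selection promotes the winner
# into the workspace, and Broadcast (here, a log) reaches all modules.

import heapq

class Workspace:
    def __init__(self):
        self.candidates = []    # max-heap emulated with negated salience
        self.broadcast_log = [] # contents broadcast so far, in order

    def submit(self, salience, content):
        heapq.heappush(self.candidates, (-salience, content))

    def cycle(self):
        """One Selection-Broadcast cycle: promote the most salient candidate."""
        if not self.candidates:
            return None
        _, content = heapq.heappop(self.candidates)
        self.broadcast_log.append(content)  # broadcast to all modules
        return content

ws = Workspace()
ws.submit(0.4, "vision: part aligned")
ws.submit(0.3, "plan: tighten screw")
ws.cycle()                                 # routine content wins first

ws.submit(0.9, "tactile: grip slipping!")  # urgent external event arrives
interrupt = ws.cycle()                     # promoted ahead of the queued plan
```

Because the tactile alert outranks the queued plan step, it enters the workspace on the next cycle without any dedicated interrupt channel; the ordinary Selection step is the interrupt mechanism.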
## IV DISCUSSION
Traditional discussions of GWT's intelligence have predominantly emphasized processing in static, supervised settings, which rely heavily on pre-labeled datasets, explicit instructions, and predefined tasks (e.g., ensemble learning, transfer learning, self-attention, predictive coding). In such scenarios, intelligence manifests primarily as a system's ability to accurately replicate patterns and knowledge derived from historical, structured data. However, real-world applications of artificial intelligence increasingly demand a shift toward dynamic, unsupervised settings, where tasks, environments, and goals continuously evolve, often without explicit guidance or labeled examples.
In dynamic, unsupervised scenarios, intelligent systems face fundamentally different challenges. Rather than relying on historical labels or fixed benchmarks, these systems must autonomously discover meaningful patterns, adapt swiftly to changing contexts, and continuously learn from ongoing experience. In this paper, we discussed the strengths of GWT in such real-time processing by focusing on the Selection-Broadcast Cycle. We explained that the Selection-Broadcast Cycle realizes flexible processing, can be accelerated, and can respond immediately to real-time changes. By highlighting these advantages, this paper extends traditional conceptions of GWT intelligence into the realm of dynamic, unsupervised learning, opening new pathways toward more robust, adaptive, and autonomous artificial intelligence systems capable of thriving in complex real-time environments. Future research could further explore practical implementations and empirical evaluations to validate these theoretical insights and expand the applicability of GWT-based architectures in diverse, real-world scenarios.
Furthermore, although GWT seems well-suited to thriving in the real-time world, one way to enhance its adaptability further could be to run multiple consciousness (GWT) processes in parallel. This parallelization could facilitate the simultaneous exploration of diverse solutions, enhance adaptability by rapidly responding to varied and unpredictable changes, and distribute cognitive load effectively, thereby potentially surpassing the limitations inherent in a single, centralized consciousness structure. Such a mechanism might resemble the collective intelligence observed in groups of humans, suggesting that human societies themselves could be natural exemplars of parallel consciousness networks capable of robust, adaptive decision-making in complex and dynamic environments. For example, Taniguchi [31] is researching the dynamics of such group intelligence and language development.
## V LIMITATIONS AND FUTURE WORK
While the proposed Selection-Broadcast Cycle structure inspired by the Global Workspace Theory (GWT) provides a compelling theoretical framework for adaptive, real-time cognitive architectures, several critical limitations need to be acknowledged and addressed in future work.
One significant limitation of this study is the absence of empirical validation. The advantages of the Selection-Broadcast Cycle, such as dynamic thinking, experience-based acceleration, and immediate real-time responsiveness, remain largely theoretical. Currently, the paper does not present experimental results, simulations, or quantitative analyses to substantiate these claims. Therefore, readers must accept the described benefits without direct evidence of improved adaptability or efficiency compared to other existing methods. To strengthen future iterations of this research, practical implementations such as comparative simulations or robot-based experiments demonstrating fewer task failures or quicker adaptation would be essential.
In particular, the overall effectiveness of the system depends greatly on the quality of the Selection process, and important open questions remain as to whether sufficiently effective and sophisticated selection mechanisms can actually be put to practical use. In static environments, there is an example of a Selection process that improves overall system performance by weighting candidates mainly according to internal indicators such as happiness and past experience [32]. Since achieving robust and adaptive Selection is essential for deploying GWT-based architectures in real-world applications, future research will need to address this implementation issue in dynamic environments as well.
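One simple way to realize the weighted Selection mentioned above, loosely following the idea in [32] of scoring candidates by internal indicators, is a linear weighting over indicator values. The indicator names, weights, and candidate contents here are hypothetical illustrations, not the cited implementation.

```python
# Illustrative weighted Selection: each candidate carries internal
# indicator values, and Selection promotes the candidate with the
# highest weighted score. Indicators and weights are hypothetical.

def select(candidates, weights):
    """Return the candidate with the highest weighted indicator score."""
    def score(c):
        return sum(weights[k] * c["indicators"][k] for k in weights)
    return max(candidates, key=score)

candidates = [
    {"content": "explore new area",
     "indicators": {"valence": 0.2, "experience": 0.1, "urgency": 0.3}},
    {"content": "recharge battery",
     "indicators": {"valence": 0.6, "experience": 0.8, "urgency": 0.9}},
]
weights = {"valence": 0.3, "experience": 0.3, "urgency": 0.4}

winner = select(candidates, weights)  # "recharge battery" scores higher
```

A fixed weighting like this may suffice in static environments; the open problem flagged above is making the weights themselves adapt as the environment changes.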
## VI CONCLUSIONS
In this paper, we explored the potential of the Global Workspace Theory (GWT) and, in particular, the Selection-Broadcast Cycle, as an information processing architecture suitable for dynamic, unsupervised real-time environments. Traditional approaches to artificial intelligence often rely heavily on structured, labeled data, where intelligence primarily involves replicating known patterns. However, real-world applications require systems that can continuously adapt and respond to evolving tasks, environments, and goals. In this context, we highlighted the Selection-Broadcast Cycle's strengths: its flexibility to rearrange module execution order dynamically, its capability for acceleration through experience-driven predictions, and its responsiveness to immediate real-time inputs.
Our hypothesis suggests that a cognitive architecture based on GWT and, specifically, the Selection-Broadcast Cycle, provides a robust framework for dynamic decision-making and rapid adaptation in complex environments. The ability to rearrange processing sequences dynamically, accelerate learning through experience-based memory, and intervene swiftly in response to changing conditions positions GWT-based architectures to effectively handle the challenges posed by real-time intelligence.
A critical unresolved question remains the practical feasibility of implementing robust and adaptive Selection mechanisms in real-world systems. Future research must address this challenge, potentially through integrating machine learning techniques and advanced evaluative frameworks, to further validate and extend the applicability of GWT-based architectures. By tackling these challenges, we can move closer to developing truly autonomous, flexible artificial intelligence systems capable of thriving in the complexities and uncertainties of the real-time world.
## References
- [1] D. Hassabis, D. Kumaran, C. Summerfield, and M. Botvinick, "Neuroscience-inspired artificial intelligence," Neuron, vol. 95, no. 2, pp. 245–258, 2017.
- [2] M. K. Ho and T. L. Griffiths, "Cognitive science as a source of forward and inverse models of human decisions for robotics and control," Annual Review of Control, Robotics, and Autonomous Systems, vol. 5, no. 1, pp. 33–53, 2022.
- [3] I. Kotseruba and J. K. Tsotsos, "40 years of cognitive architectures: core cognitive abilities and practical applications," Artificial Intelligence Review, vol. 53, no. 1, pp. 17–94, 2020.
- [4] A. Ajay, S. Han, Y. Du, S. Li, A. Gupta, T. Jaakkola, J. Tenenbaum, L. Kaelbling, A. Srivastava, and P. Agrawal, "Compositional foundation models for hierarchical planning," Advances in Neural Information Processing Systems, vol. 36, pp. 22304–22325, 2023.
- [5] B. Liu, X. Li, J. Zhang, J. Wang, T. He, S. Hong, H. Liu, S. Zhang, K. Song, K. Zhu, et al., "Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems," arXiv preprint arXiv:2504.01990, 2025.
- [6] B. J. Baars, "Global workspace theory of consciousness: toward a cognitive neuroscience of human experience," Progress in Brain Research, vol. 150, pp. 45–53, 2005.
- [7] S. Dehaene and L. Naccache, "Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework," Cognition, vol. 79, no. 1-2, pp. 1–37, 2001.
- [8] G. A. Mashour, P. Roelfsema, J.-P. Changeux, and S. Dehaene, "Conscious processing and the global neuronal workspace hypothesis," Neuron, vol. 105, no. 5, pp. 776–798, 2020.
- [9] A. Juliani, K. Arulkumaran, S. Sasai, and R. Kanai, "On the link between conscious function and general intelligence in humans and machines," Transactions on Machine Learning Research, 2022, Survey Certification.
- [10] R. VanRullen and R. Kanai, "Deep learning and the global workspace theory," Trends in Neurosciences, vol. 44, no. 9, pp. 692–704, 2021.
- [11] T. Lesort, V. Lomonaco, A. Stoian, D. Maltoni, D. Filliat, and N. Díaz-Rodríguez, "Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges," Information Fusion, vol. 58, pp. 52–68, 2020.
- [12] K. Shaheen, M. A. Hanif, O. Hasan, and M. Shafique, "Continual learning for real-world autonomous systems: Algorithms, challenges and frameworks," Journal of Intelligent & Robotic Systems, vol. 105, no. 1, p. 9, 2022.
- [13] S. Dehaene, L. Naccache, L. Cohen, D. L. Bihan, J.-F. Mangin, J.-B. Poline, and D. Rivière, "Cerebral mechanisms of word masking and unconscious repetition priming," Nature Neuroscience, vol. 4, no. 7, pp. 752–758, 2001.
- [14] R. Gaillard, S. Dehaene, C. Adam, S. Clémenceau, D. Hasboun, M. Baulac, L. Cohen, and L. Naccache, "Converging intracranial markers of conscious access," PLoS Biology, vol. 7, no. 3, p. e1000061, 2009.
- [15] I. Ito, T. Ito, J. Suzuki, and K. Inui, "Investigating the effectiveness of multiple expert models collaboration," in Findings of the Association for Computational Linguistics: EMNLP 2023, 2023, pp. 14393–14404.
- [16] G. A. Wiggins, "Crossing the threshold paradox: Modelling creative cognition in the global workspace," in International Conference on Computational Creativity, 2012, pp. 180–187.
- [17] R. Polikar, "Ensemble learning," Ensemble Machine Learning: Methods and Applications, pp. 1–34, 2012.
- [18] G. K. Stojanov and B. Indurkhya, "Creativity and cognitive development: The role of perceptual similarity and analogy," in AAAI Spring Symposium: Creativity and (Early) Cognitive Development, 2013.
- [19] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, "A survey on deep transfer learning," in Artificial Neural Networks and Machine Learning – ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III. Springer, 2018, pp. 270–279.
- [20] R. F. J. Dossa, K. Arulkumaran, A. Juliani, S. Sasai, and R. Kanai, "Design and evaluation of a global workspace agent embodied in a realistic multimodal environment," Frontiers in Computational Neuroscience, vol. 18, p. 1352685, 2024.
- [21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, pp. 6000–6010, 2017.
- [22] K. Friston and S. Kiebel, "Predictive coding under the free-energy principle," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1521, pp. 1211–1221, 2009.
- [23] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al., "Chain-of-thought prompting elicits reasoning in large language models," Advances in Neural Information Processing Systems, vol. 35, pp. 24824–24837, 2022.
- [24] T. Shanahan, "Deductive and inductive arguments," The Internet Encyclopedia of Philosophy, ISSN 2161-0002, https://iep.utm.edu/, accessed on 2025-03-05.
- [25] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
- [26] S. Franklin, B. J. Baars, U. Ramamurthy, and M. Ventura, "The role of consciousness in memory," Brains, Minds and Media, vol. 2005, no. 1, 2005.
- [27] F. Gobet, P. C. Lane, S. Croker, P. C. Cheng, G. Jones, I. Oliver, and J. M. Pine, "Chunking mechanisms in human learning," Trends in Cognitive Sciences, vol. 5, no. 6, pp. 236–243, 2001.
- [28] J. Laird, K. Kinkade, S. Mohan, and J. Xu, "Cognitive robotics using the Soar cognitive architecture," AAAI Workshop - Technical Report, pp. 46–54, 2012.
- [29] L. Martin, J. H. Rosales, K. Jaime, and F. Ramos, "Affective episodic memory system for virtual creatures: The first step of emotion-oriented memory," Computational Intelligence and Neuroscience, vol. 2021, no. 1, p. 7954140, 2021.
- [30] S. Franklin, T. Madl, S. D'Mello, and J. Snaider, "LIDA: A systems-level architecture for cognition, emotion, and learning," IEEE Transactions on Autonomous Mental Development, vol. 6, no. 1, pp. 19–41, 2013.
- [31] T. Taniguchi, "Collective predictive coding hypothesis: Symbol emergence as decentralized bayesian inference," Frontiers in Robotics and AI, vol. 11, p. 1353870, 2024.
- [32] E. C. Garrido-Merchán, M. Molina, and F. M. Mendoza-Soto, "A global workspace model implementation and its relations with philosophy of mind," Journal of Artificial Intelligence and Consciousness, vol. 9, no. 01, pp. 1–28, 2022.