## Chapter 45
## Robots as powerful allies for the study of embodied cognition from the bottom up
## Matej Hoffmann and Rolf Pfeifer
## Introduction
The study of human cognition (and human intelligence) has a long history and has kept scientists from various disciplines (philosophy, psychology, linguistics, neuroscience, artificial intelligence, and robotics) busy for many years. While there is no agreement on its definition, there is wide consensus that it is a highly complex subject matter that will require, depending on the particular position or stance, a multiplicity of methods for its investigation. Whereas, for example, psychology and neuroscience favor empirical studies on humans, artificial intelligence has proposed computational approaches, viewing cognition as information processing, as algorithms over representations. Over the last few decades, overwhelming evidence has accumulated showing that the purely computational view is severely limited and that it must be extended to incorporate embodiment, i.e., the agent's somatic setup and its interaction with the real world. Because robots are real physical systems, they became the tools of choice to study cognition. There has been a plethora of pertinent studies, but they all have their own intrinsic limitations. In this chapter, we demonstrate that a robotic approach, combined with information theory and a developmental perspective, promises insights into the nature of cognition that would be hard to obtain otherwise.
We start by introducing 'low-level' behaviors that function without control in the traditional sense; we then move to sensorimotor processes that incorporate reflex-based loops (involving neural processing). We discuss 'minimal cognition' and show how the role of embodiment can be quantified using information theory, and we introduce the so-called sensorimotor contingencies (SMCs), which can be viewed as the very basic building blocks of cognition. Finally, we expand on how humanoid robots can be productively exploited to make inroads in the study of human cognition.
## Behavior Through Interaction
What cognitive scientists regularly forget is that complex coordinated behaviors (for example, walking, running over uneven terrain, swimming, avoiding obstacles) can often be realized with no or minimal involvement of cognition, representation, or computation. This is possible because of the properties of the body and the interaction with the environment, that is, the embodied and embedded nature of the agent. Robotics is well suited for providing existence proofs of this kind and for further analyzing these phenomena. We will only briefly present some of the most notable case studies.
## Low- Level Behavior: Mechanical Feedback Loops
A classical illustration of behavior in the complete absence of a 'brain' is the passive dynamic walker (McGeer 1990): a minimal robot that can walk without any sensors, motors, or control electronics. It loosely resembles a human, with two legs, no torso, and two arms attached to the 'hips,' but its ability to walk is exclusively due to the downward slope of the incline on which it walks and the mechanical parameters of the walker (mainly leg segment lengths, mass distribution, foot shape, and frictional characteristics). The walking movement is entirely the result of finely tuned mechanics on the right kind of surface. A motivation for this research is also to show how human walking is possible with minimal energy use and only limited central control. However, most of the problems that animals or robots face in the real world cannot be solved solely by passive interaction of the physical body with the environment. Typically, active involvement by means of muscles or motors is required. Furthermore, the actuation pattern needs to be specified by the agent,¹ and hence a controller of some sort is required. Nevertheless, it turns out that if the physical interaction of the body with the environment is exploited, the control program can be very simple. For example, the passive dynamic walker can be modified by adding a couple of actuators and sensors and a reflex-based controller, resulting in the expansion of its niche to level ground while keeping the control effort and energy expenditure to a minimum (Collins et al. 2005).
However, in the real world, the ground is often not level and frequent corrective action needs to be taken. It turns out that often the very same mechanical system can
¹ In this chapter, we will use 'agent' to describe humans, animals, or robots.
generate this corrective response. This phenomenon is known as self-stabilization and is a result of a mechanical feedback loop. To use dynamical systems terminology, certain trajectories (such as walking with a particular gait) have attracting properties, and small perturbations are automatically corrected without control; one could say that 'control' is inherent in the mechanical system.² Blickhan et al. (2007) review self-stabilizing properties of biological muscles in a paper entitled 'Intelligence by Mechanics'; Koditschek et al. (2004) analyze walking insects and derive inspiration for the design of a hexapod robot with unprecedented mobility (RHex; e.g., Saranli et al. 2001).
## Sensorimotor Intelligence
Mechanical feedback loops constitute the most basic illustration of the contribution of embodiment and embeddedness to behavior. The next level up can probably be attributed to direct, reflex-like sensorimotor loops. Again, robots can serve to study the mechanisms of 'reactive' intelligence. Grey Walter (Walter 1953), the pioneer of this approach, built electronic machines with a minimal 'brain' that displayed phototaxis-like behavior. This was picked up by Valentino Braitenberg (Braitenberg 1986), who designed a whole series of two-wheeled vehicles of increasing complexity. Even the most primitive ones, in which sensors are directly connected to motors (exciting or inhibiting them), display sophisticated behaviors. Although the driving mechanisms are simple and entirely deterministic, the interaction with the real world, which brings in noise, gives rise to complex behavioral patterns that are hard to predict.
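A minimal sketch can make the idea concrete. The code below is an illustrative toy (our own construction, not any specific vehicle from Braitenberg's book): two light sensors are cross-wired to the opposite-side wheels of a differential-drive vehicle, so the vehicle steers toward a light source with no internal model at all. All parameter values are arbitrary assumptions.

```python
import math

def braitenberg_step(x, y, heading, light_x, light_y, dt=0.1,
                     base_speed=0.2, gain=4.0, wheelbase=0.1):
    """One step of a Braitenberg-style vehicle: each light sensor
    excites the motor on the opposite side, so the vehicle turns
    toward the light ('aggression'-type crossed wiring)."""
    sensor_angle = math.pi / 4  # sensors mounted at the front corners
    readings = []
    for side in (+1, -1):  # left, then right sensor
        sx = x + 0.05 * math.cos(heading + side * sensor_angle)
        sy = y + 0.05 * math.sin(heading + side * sensor_angle)
        d2 = (light_x - sx) ** 2 + (light_y - sy) ** 2
        readings.append(1.0 / (1.0 + d2))  # intensity falls off with distance
    left_sensor, right_sensor = readings
    # Crossed excitatory wiring: left sensor drives the right wheel and vice versa
    v_left = base_speed + gain * right_sensor
    v_right = base_speed + gain * left_sensor
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / wheelbase  # differential-drive kinematics
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading
```

Despite being fully deterministic, such a loop produces light-seeking behavior purely from the coupling between sensor placement, wiring, and movement.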
This line was picked up by Rodney Brooks, who added an explicit anti-representationalist perspective in response to the by-then firmly established cognitivist paradigm (e.g., Fodor 1975; Pylyshyn 1984) and 'good old-fashioned artificial intelligence' (GOFAI) (Haugeland 1985). Brooks openly attacked the GOFAI position in the seminal articles 'Intelligence without Reason' (Brooks 1991a) and 'Intelligence without Representation' (Brooks 1991b), and proposed behavior-based robotics instead. Through building robots that interact with the real world, such as insect robots (Brooks 1989), he realized that 'when we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model' (Brooks 1991b). Inspired by biological evolution, Brooks created a decentralized control architecture consisting of different layers; every layer is a more or less simple coupling of sensors to motors. The layers operate in parallel but are built in a hierarchy (hence the term subsumption architecture; Brooks 1986). The individual modules in the architecture may have internal states (the agents are thus not purely reactive any more); however, Brooks argued against calling these internal states representations (Brooks 1991b).
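As a rough illustration, layered behavior-based control can be sketched as priority-based arbitration between simple sensor-to-motor couplings. This is a strong simplification of Brooks' actual architecture (which uses asynchronous finite-state machines with suppression and inhibition links between layers); all function and sensor names below are illustrative assumptions.

```python
def avoid_layer(sensors):
    """Layer 0: turn away if an obstacle is close; returns None when inactive."""
    if sensors["front_distance"] < 0.3:
        return {"left": -0.5, "right": 0.5}  # spin in place, away from the obstacle
    return None

def wander_layer(sensors):
    """Layer 1: default competence, drive forward."""
    return {"left": 0.5, "right": 0.5}

def subsumption_control(sensors, layers):
    """Arbitration: earlier (more basic) layers take over when active;
    later layers provide behavior otherwise."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command
    return {"left": 0.0, "right": 0.0}  # safe default: stop
```

Each layer is a complete, testable competence on its own; more sophisticated behavior emerges by adding layers, not by building a central world model.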
² The description is idealized; in reality, a walking machine would fall into the category of 'hybrid dynamical systems,' where the notions of attractivity and stability are more complicated.
## Minimal Embodied Cognition
In the case studies described in the previous section, the agents were either mere physical machines or they relied only on simple direct sensorimotor loops, resembling the reflex arcs of the biological realm. They were reactive agents constrained to the 'here-and-now' time scale, with no capacity for learning from experience and also no possibility of predicting the future course of events. Although remarkable behaviors were sometimes demonstrated, there are intrinsic limitations.
The introduction of the first instances of internal simulation, which go beyond the 'here-and-now' time scale, is considered the hallmark of cognition by some (e.g., Clark and Grush 1999). This could be a simple forward model (present already in insects; see Webb 2004) that provides the prediction of a future sensory state given the current state and a motor command (efference copy). Forward models could provide a possible explanation of the evolutionary origin of the first simulation/emulation circuitry³ and of environmentally decoupled thought, with the agent employing primitive 'models' before, or instead of, directly operating on the world.
> Early emulating agents would then constitute the most minimal case of what Dennett calls a Popperian creature, a creature capable of some degree of off-line reasoning and hence able (in Karl Popper's memorable phrase) to 'let its hypotheses die in its stead' (Dennett 1995, p. 375). (Clark and Grush 1999, p. 7)
Importantly, we are still far from any abstract models or symbolic reasoning. Instead, we are dealing with the sensorimotor space and the possibility for the agent to extract regularities in it and later exploit this experience in accordance with its goals. For example, the agent can learn that given a certain visual stimulation, say, from a cup, a particular motor action (reach and grasp) will lead to a pattern of sensory stimulation (in humans: we can feel the cup in the hand). The sensorimotor space plays a key part here and it is critically shaped by the embodiment of the agent and its embedding in the environment: a specific motor signal only leads to a distinct result if embedded in the proper physical setup. If you change the shape and muscles of the arm, the motor signal will not result in a successful grasp.
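A toy sketch can illustrate this kind of regularity extraction under strongly simplifying assumptions (our own construction: a one-dimensional sensory state and a hidden linear world). The agent collects (state, motor command, next state) triples from interaction and fits a small forward model that it can then query off-line, i.e., predict the sensory consequence of an action without executing it.

```python
import random

def collect_experience(n=200, seed=0):
    """Interact with a simple hidden world where the next sensory state is
    s' = s + 0.5*m plus a little noise. The agent only sees (s, m, s')."""
    rng = random.Random(seed)
    data = []
    s = 0.0
    for _ in range(n):
        m = rng.uniform(-1, 1)                       # random motor babbling
        s_next = s + 0.5 * m + rng.gauss(0, 0.01)    # world dynamics, unknown to the agent
        data.append((s, m, s_next))
        s = s_next
    return data

def fit_forward_model(data):
    """Least-squares fit of s' = a*s + b*m: a tiny linear forward model,
    solved via the normal equations for the two parameters."""
    saa = sab = sbb = sat = sbt = 0.0
    for s, m, t in data:
        saa += s * s; sab += s * m; sbb += m * m
        sat += s * t; sbt += m * t
    det = saa * sbb - sab * sab
    a = (sat * sbb - sbt * sab) / det
    b = (saa * sbt - sab * sat) / det
    return a, b

def predict(model, s, m):
    """Off-line use of the model: predict the sensory consequence of a
    motor command without acting, 'letting hypotheses die' instead."""
    a, b = model
    return a * s + b * m
```

The learned parameters recover the hidden dynamics (a near 1, b near 0.5), and prediction then runs decoupled from the environment, which is the minimal sense of internal simulation discussed above.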
## Quantifying the Effect of Embodiment Using Information Theory
For cognitive development of an agent, the 'quality' of the sensorimotor space determines what can be learned. First, the type of sensory receptors (their mechanism of transduction) determines what kind of signals the agent's brain or controller will be receiving from the environment. Furthermore, the shape and placement of these sensors will perform an additional transformation of the information that is available in the environment.

³ See Grush (2004) for the similarities and differences between emulation theory (Grush 2004) and simulation theory (Jeannerod 2001).
For example, different species of insects have evolved different non-homogeneous arrangements of the light-sensitive cells in their eyes, providing an advantageous nonlinear transformation of the input for a particular task. One example is exploiting egomotion together with motion parallax to gauge the distance to objects in the environment and eventually facilitate obstacle avoidance. Using a robot modeled after the facet eye of a housefly, Franceschini et al. (1992) showed that the nonlinear arrangement of the facets (denser in the front than on the side) compensates for the motion parallax and allows uniform motion detection circuitry to be used in the entire eye, which makes it easy for the robot to avoid obstacles with little computation. These findings were confirmed in experiments with artificial evolution on real robots (Lichtensteiger 2004). Artificial eyes with designs inspired by arthropods include Song et al. (2013) and Floreano et al. (2013).
It is not always possible to pinpoint the specific transformation of sensory signals that is facilitated by the morphology as in the previous case. A more general tool is provided by the methods of information theory. Information is used in the Shannon sense here: to quantify statistical patterns in observed variables. The structure or amount of information induced by a particular sensor morphology can be captured by different measures, for example, entropy. However, information (structure) in the sensory variables tells only half of the story (a 'passive perception' one in this case), because organisms interact with their environments in a closed-loop fashion: sensory inputs are transformed into motor outputs, which in turn determine what is sensed next. Therefore, the 'raw material' for cognition is constituted by the sensorimotor variables, and it is thus crucial to study relationships between sensors and motors, as illustrated by the sensorimotor contingencies (see next section). Furthermore, time is no less important a variable. Lungarella and Sporns (2006) provide an excellent example of the use of information-theoretic measures in this context. In a series of experiments with a movable camera system, they could show, for example, that the entropy in the visual field is decreased if the camera is tracking a moving visual target (a red ball) compared to the condition where the movements of the ball and the camera are uncorrelated. This is intuitively plausible, because if the object is kept in the center of the visual field, there is more 'order,' i.e., less entropy. A collection of case studies on information-theoretic implications of embodiment in locomotion, grasping, and visual perception is presented by Hoffmann and Pfeifer (2011).
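The flavor of such a measurement can be conveyed with a toy model (our own illustrative reconstruction, not the actual setup of Lungarella and Sporns): the position of a target in a discretized 'visual field' has low Shannon entropy when the camera tracks it, and high entropy when camera and target move independently.

```python
import math
import random

def entropy(samples):
    """Shannon entropy (bits) of a discrete variable observed over time."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ball_positions(tracking, n_steps=2000, field=16, seed=1):
    """Position of a target in a 1-D 'visual field' of `field` pixels.
    When tracking, the camera keeps the target near the center (small
    jitter); otherwise camera and target move independently, so the
    position is effectively uniform over the field."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_steps):
        if tracking:
            pos = field // 2 + rng.choice([-1, 0, 0, 1])  # near center
        else:
            pos = rng.randrange(field)  # uncorrelated: anywhere in the field
        out.append(pos)
    return out
```

Tracking concentrates the target distribution around the center (roughly 1.5 bits in this toy), whereas the uncorrelated condition approaches the maximum of log2(16) = 4 bits, reproducing the qualitative effect reported in the text: action shapes the statistics of sensory input.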
## Sensorimotor Contingencies
Sensorimotor contingencies (SMCs) were originally presented in the influential article by O'Regan and Noë (2001) as the structure of the rules governing sensory changes produced by various motor actions. The SMCs, according to O'Regan and Noë, are the key 'raw material' upon which perception, cognition, and eventually consciousness operate. Furthermore, they sketch a possible hierarchy ranging from modality-related (or apparatus-related) SMCs to object-related SMCs. The former, the modality-related SMCs, would capture the immediate effect that certain actions (or movements) have on sensory stimulation. Clearly, these would be specific to the sensory modality (e.g., head movement will induce a different change in the SMCs of the visual and auditory modalities: turning the head will change the visual stimulation almost entirely, whereas changes in the acoustic stimulation will be minimal) and would strongly depend on the sensory morphology. Therefore, this concept is strongly related to what we have discussed in the previous sections: (1) different sensory morphologies importantly affect the information flow induced in the sensory receptors and hence also the corresponding SMCs; (2) the effect of action is already constitutively included in the SMC notion itself.
Although conceptually very powerful, the notion of SMCs was not articulated concretely enough in O'Regan and Noë (2001) for it to be expressed mathematically or directly transferred into a robot implementation, for example. Bührmann et al. (2013) have proposed a formal dynamical systems account of SMCs. They devised a dynamical system description for the environment and the agent, which is in turn split into body, internal state (such as neural activity), motor, and sensory dynamics. Bührmann et al. make a distinction between the sensorimotor (SM) environment, SM habitat, SM coordination, and SM strategy. The SM environment is the relation between motor actions and changes in sensory states, independent of the agent's internal (neural) dynamics. The other notions, from SM habitat to SM strategies, add internal dynamics to the picture. The SM habitat refers to trajectories in the sensorimotor space, but subject to constraints given by the internal dynamics that are responsible for generating motor commands, which may depend on previous sensory states as well (an example of closed-loop control). SM coordination then further reduces the set of possible SM trajectories to those 'that contribute functionally to a task.' For example, specific patterns of squeezing an object in order to assess its hardness would be SM coordination patterns serving object discrimination. Finally, SM strategies take, in addition, 'reward' or 'value' for the agent into account.
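A deliberately minimal sketch (our own toy with an arbitrary linear 'world', not the model of Bührmann et al.) can illustrate the distinction: the update rule from motor value to sensory change plays the role of the SM environment, while a particular closed-loop policy selects one trajectory within it, in the spirit of the SM habitat.

```python
def smc_rollout(motor_policy, n_steps=50, s0=0.0):
    """Minimal agent-environment loop: the map from motor values to
    sensory changes (here s' = s + 0.3*m, an arbitrary choice) stands in
    for the SM environment; a closed-loop policy computing m from s
    carves out one particular sensorimotor trajectory within it."""
    s, trajectory = s0, []
    for _ in range(n_steps):
        m = motor_policy(s)      # internal dynamics -> motor command
        s = s + 0.3 * m          # body/environment -> new sensory state
        trajectory.append((m, s))
    return trajectory
```

For example, the homeostatic policy m = 1 - s settles on the fixed point s = 1: one specific trajectory out of all those the same SM environment would permit under other policies.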
As wonderfully illustrated by Beer and Williams (2015), dynamical systems and information theory are two complementary mathematical lenses through which brain-body-environment systems can be studied. While acknowledging the merits of both frameworks as 'intuition, theory, and experimental pumps' (Beer and Williams 2015), it is probably fair to say that, compared to dynamical systems, information theory has thus far been more successfully applied to the analysis of real systems of higher dimensionality. This is true for both natural systems, in particular brains (Garofalo et al. 2009; Quiroga and Panzeri 2009), and artificial systems. Thus, to study sensorimotor contingencies in a real robot beyond the simple simulated agents of Bührmann et al. (2013) and Beer and Williams (2015), we chose to use the lens of information theory. Following up on related studies (e.g., Olsson et al. 2004), we conducted a series of studies in a real quadrupedal robot with rich nonlinear dynamics and a collection of sensors from different modalities (Hoffmann et al. 2012; Hoffmann et al. 2014; Schmidt et al. 2013) (see Box 45.1). We have applied the notion of 'transfer entropy'
## Box 45.1 Sensorimotor contingencies in a quadruped robot
Figure 45.1. Robot 'Puppy' and sensorimotor contingencies.
<details>
<summary>Image 1 Details</summary>

Figure 45.1 (panels a-g): (a) photograph of the quadruped robot annotated with its 4 hip encoders, 4 motors, and 4 foot pressure sensors; (b)-(f) diagrams of transfer entropy (in bits) between motor and sensor (hip, knee, foot) channels under different conditions; (g) bar chart of terrain classification accuracy (%), with 'Mean' and 'Best' bars for four configurations: sensory data only, sensory data + action, and sensory data + action from 2 or 3 epochs. Accuracy improves as action context is added, roughly from 55% mean / 65% best to 90% mean / 98% best.
</details>
Experiments were conducted on the quadrupedal robot Puppy (Figure 45.1a), which has four servomotors in the hips together with encoders measuring the angle at the joint, four encoders in the passive compliant knees, and four pressure sensors on the feet. We used the notion of 'transfer entropy' from information theory, which measures directed information flows between time series. In our case, the time series were collected from individual motor and sensory channels, and the information transfer was calculated for every pair of channels twice, once in each direction (say, from the hind right motor to the front right knee encoder and also in the opposite direction). Loosely speaking, transfer entropy from channel A to channel B measures how well the future state of channel B can be predicted knowing the current state of channel A (see Schmidt et al. 2013 for details).
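For illustration, a basic plug-in estimator of transfer entropy on discretized time series might look as follows. This is a simplified sketch, not the estimation pipeline of Schmidt et al. (2013); the discretization into equal-width bins and the single-step history are simplifying assumptions.

```python
import math
from collections import Counter

def transfer_entropy(a, b, n_bins=4):
    """Plug-in estimate of transfer entropy TE(A -> B) in bits for two
    equally long time series, after binning each into n_bins bins:
    TE = sum over (b', b, a) of p(b',b,a) * log2[ p(b'|b,a) / p(b'|b) ],
    i.e., how much knowing a_t improves the prediction of b_{t+1}
    beyond what b_t alone provides."""
    def discretize(x):
        lo, hi = min(x), max(x)
        w = (hi - lo) / n_bins or 1.0  # guard against constant series
        return [min(int((v - lo) / w), n_bins - 1) for v in x]
    a, b = discretize(a), discretize(b)
    triples = Counter(zip(b[1:], b[:-1], a[:-1]))   # (b', b, a) counts
    pairs_bb = Counter(zip(b[1:], b[:-1]))          # (b', b) counts
    pairs_ba = Counter(zip(b[:-1], a[:-1]))         # (b, a) counts
    singles_b = Counter(b[:-1])                     # b counts
    n = len(b) - 1
    te = 0.0
    for (b1, b0, a0), c in triples.items():
        p_joint = c / n
        p_b1_given_b0a0 = c / pairs_ba[(b0, a0)]
        p_b1_given_b0 = pairs_bb[(b1, b0)] / singles_b[b0]
        te += p_joint * math.log2(p_b1_given_b0a0 / p_b1_given_b0)
    return te
```

On a pair of series where B simply copies A with one step of delay, TE(A -> B) is large while TE(B -> A) stays near zero, reflecting the directedness of the measure (unlike, say, correlation).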
First, we wanted to investigate the 'sensorimotor structure,' i.e., the relative strengths of the relationships between different sensors and motors, which is intrinsic to the robot's embodiment (body and sensor morphology only). To this end, random motor commands were applied and the relationships between motor and sensory variables were studied, closely resembling the notion of the SM environment (Bührmann et al. 2013). The strongest information flows between pairs of channels were extracted and are shown overlaid on the schematic of the Puppy robot (dashed lines) in panel B. The transfer entropy is encoded as the thickness and gray level of the arrows. The strongest flow occurs from the motor signals to their respective hip joint angles, which is clear because the motors directly drive the respective hip joints. The motors have a smaller influence on the knee angles (stronger in the hind legs) and on the foot pressure sensors on the respective legs where the motor is mounted, thus illustrating that the body topology was successfully extracted (at the same time, the flows from the hind leg motors and hips to the front knees highlight that the functional relationships differ from the static body structure; see also Schatz and Oudeyer 2009). These patterns are analogous to the modality-related SMCs; just as we can predict what the sensory changes induced by moving the head will be, the robot can predict the effects of moving, say, a hind leg.
In a second step, we studied the relationships in the sensorimotor space when the robot was running with specific coordinated periodic movement patterns, or gaits. The results for two selected gaits, turn left and bound right,* are shown in panels C and D, respectively. The flows from the motors to the hip joints, which would again dominate, were left out of the visualization. The plots clearly demonstrate the important effect of specific action patterns in two ways. First, they markedly differ from the random motor command situation: the dominant flows are different and, in addition, the magnitude of the information flows is bigger (note the different range of the color bar, in bits, compared to panel B), illustrating how much information structure is induced by the 'neural pattern generator.' Second, they also significantly differ between themselves. The 'turn left' gait in panel C reveals the dominant action of the right leg and in particular the knee joint. In the 'bound right' gait in panel D, the motor signals are predictive of the sensory stimulation in the hind knees and also the left foot. The gaits were obtained by optimizing the robot's performance for speed or for turning and thus correspond to patterns that are functionally relevant for the robot and can even be said to carry 'value.' Thus, in the perspective of Bührmann et al. (2013), our findings about the sensorimotor space using the gaits can be interpreted as studying the SM coordination or even the SM strategy of the quadruped robot.
Finally, next to the embodiment or morphology (shape of the body and limbs, type and placement of sensors and effectors, etc.) and the brain (the neural dynamics responsible for generating the coordinated motor command sequences), the SMCs are co-determined by the environment as well. All the results thus far came from sensorimotor data collected while the robot was running on a plastic foil ground (low friction). Panels E and F depict how the information flows for the bound right gait are modulated when the robot runs on a different ground (E: Styrofoam; F: rubber). The overall pattern is similar to D, but the flows to the left foot disappear, and eventually the flows to the left knee joint become dominant. This is because the posture of the robot changed: the left foot now contacts the ground at a different angle, inducing less stimulation in the pressure sensor. Also, as the friction increases (from the foil over Styrofoam to rubber), the push-off during the stance of the left hind leg becomes stronger, resulting in more pronounced bending of the knee. Finally, since the high-friction ground poses more resistance to the robot's movements, the trajectories are less smooth and the overall information flow drops.
While all the components (body, brain, environment) have a profound effect on the overall sensorimotor space, our analysis reveals that in this case, the gait used (as prescribed primarily by the 'neural/brain' dynamics) is a more important factor than the environment (the ground); the latter seems to modulate the basic structure of the information flows induced by the gait. This has important consequences for the agent when it is to learn something about its environment and perform perceptual categorization, for example. In order to investigate this quantitatively, we presented the robot with a terrain (the surface/ground it was running on) classification task. Relying on sensory information alone leads to significantly worse terrain classification results than when the gait is explicitly taken into account in the classification process (Hoffmann, Stepanova, and Reinstein 2014). Furthermore, in line with the predictions of the sensorimotor contingency theory, longer sensorimotor sequences are necessary for object perception (Maye and Engel 2012). That is, while in short sequences (motor command, sensory consequence) the modality-related SMCs (panel B) will be dominant, longer interactions will allow the objects the agent is interacting with to stand out. Using data from our robot, this is convincingly demonstrated in panel G. The first row shows classification results when using data from one sensory epoch (two seconds of locomotion) collapsed across all gaits, i.e., without the action context. Subsequent rows report results where classification was performed separately for each gait and increasingly longer interaction histories were available. 'Mean' values represent the mean performance; 'best' are the classification results from the gait that facilitated perception the most (see Hoffmann et al. 2012 for details).
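The benefit of conditioning on the gait can be illustrated with a deliberately simple toy model (synthetic data and a nearest-mean classifier, not the actual Puppy pipeline; all means and labels below are made up): when the feature distribution depends on both terrain and gait, per-gait models separate the terrains much better than a model pooled across gaits.

```python
import random

def make_samples(n_per, seed=0):
    """Synthetic stand-in for the robot's sensor statistics: a 1-D feature
    whose mean depends on both the terrain and the gait, so pooling over
    gaits blurs the terrain classes."""
    rng = random.Random(seed)
    terrains, gaits = ["foil", "rubber"], ["turn_left", "bound_right"]
    means = {("foil", "turn_left"): 0.0, ("foil", "bound_right"): 2.0,
             ("rubber", "turn_left"): 1.0, ("rubber", "bound_right"): 3.0}
    data = [(terrain, gait, rng.gauss(means[(terrain, gait)], 0.3))
            for terrain in terrains for gait in gaits for _ in range(n_per)]
    rng.shuffle(data)
    return data

def nearest_mean_accuracy(train, test, use_gait):
    """Classify terrain by the nearest class mean, optionally building a
    separate model per gait (i.e., taking the action context into account)."""
    sums = {}
    for terrain, gait, x in train:
        key = (terrain, gait) if use_gait else terrain
        s, c = sums.get(key, (0.0, 0))
        sums[key] = (s + x, c + 1)
    means = {k: s / c for k, (s, c) in sums.items()}
    correct = 0
    for terrain, gait, x in test:
        if use_gait:
            candidates = {t: m for (t, g), m in means.items() if g == gait}
        else:
            candidates = means
        guess = min(candidates, key=lambda t: abs(candidates[t] - x))
        correct += (guess == terrain)
    return correct / len(test)
```

In this toy, pooled classification hovers near chance because the gait-induced shift is larger than the terrain-induced one, while per-gait models recover the terrain almost perfectly, mirroring the qualitative pattern in panel G.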
* 'Turn left' was a movement pattern dominated by the action of the right hind leg, which was pushing the robot forward and to the left. Regarding 'bound right': the bounding gait is a running gait used by small mammals. It is similar to a gallop and features a flight phase, but is characterized by the synchronous action of each pair of legs. However, in this study, we used lower speeds without an aerial phase. In addition, the symmetry of the motor signals was slightly disrupted, resulting in a right-turning motion.
from information theory, which can be used to characterize sensorimotor flows in the robot (for example, how strongly sensors are affected by motor commands), and we tried to isolate the effects of the body, motor programs (gaits), and environment in the agent's sensorimotor space. Finally, we tested the predictions of SMC theory regarding object discrimination. In our investigations, we have chosen the situated perspective, analyzing only the relationships between sensory and motor variables that would also be available to the agent itself. However, information-theoretic methods can also be productively applied to study relationships between internal and external variables, such as between sensory or neuronal states and some properties of an external object (e.g., its size, Beer and Williams 2015; or any other property that can be expressed numerically). Using this approach, one can obtain important insights into the operation and temporal evolution of categorization, for example. Performing this in the ground discrimination scenario on the quadrupedal robot constitutes our future work.
While the studies on 'minimally cognitive agents' are of fundamental importance and lead to valuable insights for our understanding of intelligent behavior, the ultimate target is, of course, human cognition. Toward this end, one may want to resort to more sophisticated tools, for example, humanoid robots.
## Human- like Cognition in Robots
In the previous section, we showed how robots can be beneficial in operationalizing, formalizing, and quantifying ideas, concepts, and theories that are important for understanding cognition but that are often not articulated in sufficient detail. An obvious implication of this analysis is that the kind of cognition that emerges will be highly dependent on the body of the agent, its sensorimotor apparatus, and the environment it is interacting with. Thus, to target human cognition, the robot's morphology (shape, type of sensors and their distribution, materials, actuators) should resemble that of humans as closely as possible. Now we have to be realistic: approximating humans very closely would imply mimicking their physiology, the sensors in the body and the inner organs, muscles with a comparable biological instantiation, and the bloodstream that supplies the body with energy and oxygen. Only then could the robot experience the true concept of, e.g., being thirsty or out of breath, hearing the heart pumping, blushing, or the feeling of quenching one's thirst while drinking a cold beer in the summer. So, even if, on the surface, a robot might be almost indistinguishable from a human (like, for example, Hiroshi Ishiguro's recent humanoid 'Erica'), we have to be aware of the fundamental differences: comparatively very few muscles and tendons, no actuators that can get sore when overused, no sensors for pain, only low-density haptic sensors, no sweat glands in the skin, and so on. Thus, 'Erica' will have a very impoverished concept of drinking or feeling hot. In other words, we have to make substantial abstractions.
Just as an aside, making abstractions is nothing bad; in fact, it is one of the most crucial ingredients of any scientific explanation because it forces us to focus on the essentials, ignoring whatever is considered irrelevant (the latter most likely being the majority of things we could potentially take into account). Thus, the specifics of the robot's cognition (its concepts, its body schema) will clearly diverge from those of humans, but the underlying principles will, at a certain level of abstraction, be the same. For example, it will have its own sensorimotor contingencies, it will form cross-modal associations through Hebbian learning, and it will explore its environment using its sensorimotor setup. So if the robot says 'glass,' this will relate to very different specific sensorimotor experiences; but if the robot can recognize, fill, and hand a 'glass' to a human for drinking, it makes sense to say that the robot has acquired the concept of 'glass.'
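The cross-modal association through Hebbian learning mentioned here can be illustrated with a minimal sketch. The setup below is entirely hypothetical (the modality sizes, the learning rate, and the toy 'experience' loop are invented for illustration, not taken from any robot's actual software): units from two modalities, say touch and vision, that are repeatedly co-active strengthen their mutual connection, so that later a touch pattern alone evokes the associated visual pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

n_touch, n_vision = 8, 8           # hypothetical modality sizes
W = np.zeros((n_touch, n_vision))  # cross-modal weights
eta = 0.1                          # learning rate

# Simulated experience: touching an object while looking at it
# produces correlated activity in both modalities.
for _ in range(200):
    event = rng.integers(n_touch)        # which 'object' is experienced
    touch = np.zeros(n_touch)
    vision = np.zeros(n_vision)
    touch[event] = 1.0
    vision[event] = 1.0
    # Hebbian rule: units that fire together wire together
    W += eta * np.outer(touch, vision)

# After learning, a touch pattern alone recalls the associated
# visual pattern via the learned weights.
recalled = W[3]           # visual activity evoked by touch unit 3
print(recalled.argmax())  # strongest association is with visual unit 3
```

The essential point of the rule is that no teacher is needed: the statistics of the agent's own sensorimotor experience shape the associations.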
Because the acquisition of concepts is based on sensorimotor contingencies, which in turn require actions on the part of the agent, and because the patterns of sensory stimulation are associated with the respective motor signals, the robot platforms of choice will ideally be tendon-driven, just like humans, who use muscles and tendons for movement. Given our discussion on abstraction earlier, we can also study concept acquisition in robots that have motors in the joints; we just have to be aware of the concrete differences. Still, the principles governing the robot's cognition can be very similar to those of humans (see Box 45.2 for examples of different types of humanoid robots).
## BOX 45.2 Humanoid embodiment for modeling cognition
Figure 45.2. Humanoid robots.
A large number of humanoid robots have been developed over the last decades, and many of them can, in one way or another, be used to study human cognition. Given that all of them to date are very different from real humans (each of them, implicitly or explicitly, embodies certain types of abstractions), there is no universal platform; they have all been developed with specific goals in mind. Here we present a few examples and discuss the ways in which they are employed in trying to ferret out the principles of human cognition. The categories shown in Figure 45.2 are musculoskeletal robots (Roboy and Kenshiro), 'baby' robots with sensorized skins (iCub and fetus simulators), and social interaction robots (Erica and Pepper).
In order to use the robots for learning their own complex dynamics and for building up a body schema, both Roboy and Kenshiro (Nakanishi et al. 2012) need to be equipped with many sensors so that they can 'experience' the effect of a particular actuation pattern. Given rich sensory feedback, and exploiting the principle that every action leads to sensory stimulation, both robots can, in principle, employ motor babbling in order to learn how to move. For Kenshiro in particular, with its very large number of muscles, learning is a must. A very important step in this direction is the work of Richter et al. (2016), who combined a musculoskeletal robotics toolkit (Myorobotics) with a scalable neuromorphic computing platform (SpiNNaker) and demonstrated control of a musculoskeletal joint with a simulated cerebellum.
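The logic of motor babbling can be sketched in a few lines. The toy below is not the Roboy or Kenshiro software; the 2-link planar 'body' and all parameters are invented purely for illustration. The agent issues random motor commands, records the resulting sensory feedback, and fits a forward model to the collected command-sensation pairs, exactly the "every action leads to sensory stimulation" principle:

```python
import numpy as np

rng = np.random.default_rng(1)

def sensory_feedback(motor):
    """Toy 'body': maps a 2-joint motor command to a hand position
    (planar 2-link arm with unit link lengths)."""
    q1, q2 = motor
    x = np.cos(q1) + np.cos(q1 + q2)
    y = np.sin(q1) + np.sin(q1 + q2)
    return np.array([x, y])

# Motor babbling: issue random commands, record what the body 'feels'
commands = rng.uniform(-np.pi / 2, np.pi / 2, size=(500, 2))
feedback = np.array([sensory_feedback(m) for m in commands])

# Fit a simple forward model (hand-chosen features + least squares)
def features(m):
    q1, q2 = m[..., 0], m[..., 1]
    return np.stack([np.cos(q1), np.sin(q1),
                     np.cos(q1 + q2), np.sin(q1 + q2)], axis=-1)

Phi = features(commands)
W, *_ = np.linalg.lstsq(Phi, feedback, rcond=None)

# The learned model now predicts the sensory outcome of a new command
test_cmd = np.array([0.3, -0.4])
predicted = features(test_cmd) @ W
actual = sensory_feedback(test_cmd)
print(np.allclose(predicted, actual, atol=1e-6))
```

In a real tendon-driven robot, the command space is far larger and the mapping far less smooth, which is precisely why programmed control becomes infeasible and learning becomes a necessity.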
Finally, if the interest is in social interaction, it might be more productive to use robots like Erica or Pepper. Both are somewhat limited in their sensorimotor abilities (especially haptics) but are endowed with speech understanding and generation facilities; they can recognize faces and emotions, and they can realistically display a wide range of facial expressions.
## Musculoskeletal robots: Roboy and Kenshiro
Figure 45.2a. Roboy overview: the musculoskeletal design can be clearly observed. At this point, Roboy has 48 'muscles'; eight are dedicated to each of the shoulder joints. This can no longer be sensibly programmed: learning is a necessity. Currently, Roboy serves as a research platform for the EU/FET Human Brain Project to study, among other things, the effect of brain lesions on the musculoskeletal system. Because it has the ability to express a vast spectrum of emotions, it can also be employed to investigate human-robot interaction, and as an entertainment platform.
Credit: © Embassy of Switzerland in the United States of America.
Figure 45.2b. Close-up of the muscle-tendon system. Although the shoulder joint is distinctly dissimilar to a human one (for example, it doesn't have a shoulder blade), it is controlled by eight muscles, which requires substantial skill to move properly: which muscles have to be actuated, and to what extent, in order to achieve a desired movement?
Credit: © Erik Tham/ Corbis Documentary/ Getty Images.
Figure 45.2c. Kenshiro's musculoskeletal setup. The musculoskeletal design is clearly visible. At this point, Kenshiro has 160 'muscles': 50 in the legs, 76 in the trunk, 12 in the shoulder, and 22 in the neck. In terms of its musculoskeletal system, it is the robot that most closely resembles the human. So, if learning the dynamics of such a system is the goal, Kenshiro will be the robot of choice. Note that although Kenshiro is 'closest' to a human in this respect, it is still subject to enormous abstractions. Currently, Kenshiro serves as a research platform at the University of Tokyo to investigate tendon-controlled systems with very many degrees of freedom (Nakanishi et al. 2012).
Credit: Photo courtesy Yuki Asano.
## 'Baby' robots with sensitive skins
Figure 45.2d. Fetus simulator. A musculoskeletal model of a human fetus at 32 weeks of gestation has been constructed and coupled with a brain model comprising 2.6 million spiking neurons (Yamada et al. 2016). The figure shows the tactile sensor distribution, which was based on human two-point discrimination data.
Reproduced from Yasunori Yamada, Hoshinori Kanazawa, Sho Iwasaki, Yuki Tsukahara, Osuke Iwata, Shigehito Yamada, and Yasuo Kuniyoshi, An Embodied Brain Model of the Human Foetus, Scientific Reports, 6(27893), Figure 1d, doi:10.1038/srep27893, © 2016 the authors. This work is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Figure 45.2e. The iCub baby humanoid robot. The iCub (Metta et al. 2010) is roughly the size of a four-year-old child and has corresponding sensorimotor capacities: 53 degrees of freedom (electric motors), two stereo cameras in a biomimetic arrangement, and over 4,000 tactile sensors covering its body. The panel shows the robot performing self-touch, together with the corresponding activations in the tactile arrays of the left forearm and right index finger.
## Social interaction robots: Erica and Pepper
Figure 45.2f. Erica, the latest creation of Prof. Hiroshi Ishiguro, was designed specifically with the goal of imitating human speech and body-language patterns in order to hold 'highly natural' conversations. It also serves as a tool to study human-robot interaction, and social interaction in general. Moreover, because of its close resemblance to humans, the 'uncanny valley' hypothesis (the observation that people become uneasy when robots are too humanlike) can be further explored and analyzed (see, e.g., Rosenthal-von der Pütten, Marieke, and Weiss 2014, where the Geminoid HI-1, modeled after Prof. Ishiguro, was used).
Credit: Photo courtesy of Hiroshi Ishiguro Laboratory, ATR and Osaka University.
Figure 45.2g. Pepper, a robot developed by Aldebaran (now SoftBank Robotics), although much simpler (and much cheaper!) than Erica, is successfully used to study social interaction, for entertainment, and to perform certain tasks (such as selling Nespresso machines to customers in Japan).
## The Role of Development
A very powerful approach to deepen our understanding of cognition, and one that has been around for a long time in psychology and neuroscience, is to study ontogenetic development. During the past two decades or so, this idea has been adopted by the robotics community and has led to a thriving research field dubbed 'developmental robotics.' Now, a crucial part of ontogenesis takes place in the uterus. There, the tactile sense is the first to develop (Bernhardt 1987) and may thus play a key role in the organism's learning of its first sensorimotor contingencies, in particular those pertaining to its own body (e.g., hand-to-mouth behaviors). Motivated by this fact, Mori and Kuniyoshi (2010) developed a musculoskeletal fetal simulator with over 1,500 tactile receptors and studied the effect of their distribution on the emergence of sensorimotor behaviors. Importantly, with a natural (non-homogeneous) distribution, the fetus developed 'normal' kicking and jerking movements (i.e., similar to those observed in a human fetus), whereas with a homogeneous allocation it did not develop any of these behaviors. Yamada et al. (2016), using a similar fetal simulator and a large spiking neural network brain model, have further studied the effects of intrauterine (vs. extrauterine) sensorimotor experiences on cortical learning of body representations. A physical version- the fetusoid- is currently under development (Mori et al. 2015). Somatosensory (tactile and proprioceptive) inputs continue to be of key importance in early infancy, when 'infants engage in exploration of their own body as it moves and acts in the environment. They babble and touch their own body, attracted and actively involved in investigating the rich intermodal redundancies, temporal contingencies, and spatial congruence of self-perception' (Rochat 1998, p. 102). The iCub baby humanoid robot (Metta et al. 2010) (Figure 45.2e), equipped with a whole-body tactile array (Maiolino et al. 2013) comprising over 4,000 elements, is an ideal platform to study these processes. The study of Roncone et al. (2014) on self-calibration using self-touch is a first step in this direction.
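The temporal-contingency aspect can be caricatured in a few lines: a tactile event is tagged as self-generated if it reliably follows a motor command at a characteristic delay. This is a toy sketch, not the learning mechanism of any of the cited models; the function, the event streams, and the two-step delay are all invented for the illustration:

```python
def contingency_score(motor_events, tactile_events, delay=2, window=1):
    """Fraction of tactile events that follow some motor event by
    delay +/- window time steps: a crude temporal-contingency detector."""
    if not tactile_events:
        return 0.0
    hits = sum(
        1 for t in tactile_events
        if any(abs((t - m) - delay) <= window for m in motor_events)
    )
    return hits / len(tactile_events)

motor = [10, 20, 30, 40]          # time steps of hypothetical motor commands
self_touch = [12, 22, 32, 42]     # tactile events 2 steps after each command
external = [5, 17, 26, 44]        # tactile events uncorrelated with the motor
print(contingency_score(motor, self_touch))  # 1.0
print(contingency_score(motor, external))    # 0.0
```

A developing agent could use such a score to separate touches caused by its own body from touches caused by the environment, one ingredient of the 'intermodal redundancies' in the Rochat quote above.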
## Applications of Human-Like Robots
Finally, this research strand- employing humanoid robots to study human cognition- also has important applications. In traditional domains and conventional tasks- such as pick-and-place operations in an industrial environment- current factory automation robots are doing just fine. However, robots are starting to leave these constrained domains, entering environments that are far less structured, and are starting to share their living space with humans. As a consequence, they need to dynamically adapt to unpredictable interactions and guarantee their own as well as others' safety at every moment. In such cases, more human-like characteristics- both physical and 'mental'- are desirable. Box 45.3 illustrates how more brain-like body representations can help robots become more autonomous, robust, and safe. The possibilities for future applications of robots with cognitive capacities are enormous, especially in the rapidly
## BOX 45.3 Body schema in humans vs. robots
Figure 45.3. Characteristics of body representations.
Figure 45.3 arranges body representations in a conceptual space spanned by four axes: implicit vs. explicit, centralized vs. distributed, multimodal vs. amodal/unimodal, and fixed vs. plastic. A monkey and a human brain occupy the implicit, distributed, multimodal, and plastic side; a three-segment robot arm with its forward kinematics equations

px = cos θ1 (a3 cos(θ2 + θ3) + a2 cos θ2)
py = sin θ1 (a3 cos(θ2 + θ3) + a2 cos θ2)
pz = a3 sin(θ2 + θ3) + a2 sin θ2 + a1

occupies the explicit, centralized, unimodal, and fixed side. The iCub robot sits between the two, with an arrow indicating the aim: modeling the mechanisms of biological body representations in order to obtain better robot performance- autonomy, robustness, and safety.
Credit: Monkey photo source: Einar Fredriksen/ Flickr/ Attribution- ShareAlike 4.0 International (CC BY- SA 4.0)
Credit: Brain image source: Hugh Guiney/ Attribution- ShareAlike 3.0 Unported (CC BY- SA 3.0)
Credit: Line drawing and equations source: Reproduced with the permission of Dr. Hugh Jack from http:// www.engineeronadisk.com
Credit: iCub Robot source: © iCub Facility- IIT, 2017
A typical example of a traditional robot and its mathematical model is depicted in the upper right of Figure 45.3. The robot is an arm consisting of three segments with three joints between the base and the final part- the end-effector. Its model is shown below the robot: the forward kinematics equations that relate the configuration of the robot (joint angles θ1, θ2, θ3) to the Cartesian position of the end-effector (px, py, pz). The model has the following characteristics: (1) it is explicit- there is a one-to-one correspondence between the body and the model (a1 in the model is the length of the first arm segment, for example); (2) it is unimodal- the equations directly describe physical reality; a single sensory modality (proprioception- joint angle values) suffices to evaluate the mapping in the current robot state; (3) it is centralized- there is only one model that describes the whole robot; (4) it is fixed- normally, this mapping is set once and does not change during robot operation. Other models/mappings are typically needed for robot operation, such as inverse kinematics, differential kinematics, or models of dynamics (dealing with forces and torques), but they all share the abovementioned characteristics (see Hoffmann et al. 2010 for a survey).
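The forward kinematics of such an arm can be written down in a few lines; the function name and the default unit segment lengths below are our choices for the sketch, not part of the original model:

```python
import math

def forward_kinematics(theta1, theta2, theta3, a1=1.0, a2=1.0, a3=1.0):
    """Map joint angles (radians) and segment lengths of a three-joint arm
    to the Cartesian end-effector position (px, py, pz)."""
    # Horizontal reach of segments 2 and 3, rotated about the base by theta1.
    reach = a3 * math.cos(theta2 + theta3) + a2 * math.cos(theta2)
    px = math.cos(theta1) * reach
    py = math.sin(theta1) * reach
    pz = a3 * math.sin(theta2 + theta3) + a2 * math.sin(theta2) + a1
    return px, py, pz

# With all joints at zero, the arm points straight out along x,
# raised by the vertical base segment a1:
print(forward_kinematics(0.0, 0.0, 0.0))  # (2.0, 0.0, 1.0)
```

Note how this illustrates the four characteristics at once: the lengths a1-a3 appear explicitly, only joint angles are needed as input, one function covers the whole chain, and nothing in it adapts.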
As pointed out earlier, animals and humans have different bodies from robots; they also have very different ways of representing them in their brains. The panel in the lower left shows a rhesus macaque and, below it, some of the key areas of its brain that deal with body representations (see, e.g., Graziano and Botvinick 2002). There is ample evidence that these representations differ widely from the ones traditionally used in robotics- namely, 'the body in the brain' would be (1) implicitly represented- there would hardly be a 'place' or
a 'circuit' encoding, say, the length of a forearm; such information is most likely only indirectly available, possibly in relation to other variables; (2) multimodal- drawing mainly on somatosensory (tactile and proprioceptive) and visual, but also vestibular (inertial) information, and closely coupled to motor signals; (3) distributed- there are numerous distinct, but partially overlapping and interacting representations that are dynamically recruited depending on context and task; (4) plastic- adapting over both long (ontogenesis) and short time scales, as adaptation to tool use (e.g., Iriki et al. 1996) or various body illusions testify (e.g., humans start feeling ownership over a rubber hand after minutes of synchronous tactile stimulation of the hand replica and of their real hand hidden under a table; Botvinick and Cohen 1998).
The iCub robot 'walking' from the top right to the bottom left of the figure illustrates two things. First, for modeling the mechanisms of biological body representations, the traditional robotic models are of little use- a radically different approach needs to be taken. Second, by making robot models more brain-like, we hope to inherit some of the desirable properties typical of how humans and animals master their highly complex bodies. Autonomy and robustness, or resilience, are a case in point. It is not realistic to expect that conditions, including the body itself, will stay constant over time and that a model given to the robot by the manufacturer will always work. Inaccuracies will creep in due to wear and tear, and even more dramatic changes can occur (e.g., a joint becomes blocked). Humans and animals display a remarkable capacity for dealing with such changes: their models dynamically adapt to muscle fatigue, for example, temporarily incorporate objects like tools after working with them, or reallocate 'brain territory' to different body parts after amputation of a limb. Robots thus also need to perform continuous self-modeling (Bongard et al. 2006) in order to cope with such changes. Finally, unlike factory robots that blindly execute their trajectories and thus need to operate in cages, humans and animals use multimodal information to extend the representation of their bodies to the space immediately surrounding them (also called peripersonal space). They construct a 'margin of safety,' a virtual 'bubble' around their bodies that allows them to respond to potential threats such as looming objects, ensuring safety for themselves and also for their surroundings (e.g., Graziano and Cooke 2006). This is highly desirable in robots as well, and can transform them from dangerous machines into collaborators possessing whole-body awareness like we do. First steps along these lines in the iCub were presented by Roncone et al. (2016).
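The margin-of-safety idea can be sketched minimally: each skin region carries an activation that grows as an object penetrates its protective 'bubble,' and that activation scales an avoidance response. This is a toy caricature, not the visuo-tactile learning architecture of Roncone et al. (2016); the linear activation profile, the 0.3 m bubble radius, and the function names are our assumptions:

```python
import math

def ppspace_activation(distance, bubble_radius=0.3):
    """Peripersonal-space activation for one skin region: 1.0 at contact,
    decaying linearly to 0.0 at the edge of the protective 'bubble'."""
    return max(0.0, 1.0 - distance / bubble_radius)

def avoidance_velocity(skin_pos, obstacle_pos, gain=0.5, bubble_radius=0.3):
    """Velocity command pushing the body part away from a looming object,
    scaled by how deep the object has penetrated the margin of safety."""
    diff = [s - o for s, o in zip(skin_pos, obstacle_pos)]
    dist = math.sqrt(sum(d * d for d in diff))
    act = ppspace_activation(dist, bubble_radius)
    if act == 0.0 or dist == 0.0:
        return [0.0, 0.0, 0.0]
    return [gain * act * d / dist for d in diff]

# An object 10 cm from the forearm triggers a retreat away from it;
# one at 50 cm, outside the bubble, triggers nothing:
print(avoidance_velocity([0.0, 0.0, 0.0], [0.1, 0.0, 0.0]))
print(avoidance_velocity([0.0, 0.0, 0.0], [0.5, 0.0, 0.0]))  # [0.0, 0.0, 0.0]
```

The point of the sketch is the coupling: the same multimodal representation that says 'something is near my forearm' directly shapes the motor response, without a global re-planning step.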
growing area of service robotics, where robots perform tasks in human environments. Rather than accomplishing these tasks autonomously, robots often do so in cooperation with humans, which constitutes a major trend in the field. In cooperative tasks, it is of course crucial for success that the robots understand the common goals and the intentions of the humans. In other words, they require substantial cognitive skills. We have barely started exploiting the vast potential of these types of cognitive machines.
## Conclusion
Our analysis so far has demonstrated that robots fit squarely into the embodied and pragmatic (action- oriented) turn in cognitive sciences (e.g., Engel et al. 2013), which
implies that whole behaving systems, rather than passive subjects in brain scanners, need to be studied. Robots provide the necessary grounding to computational models of the brain by incorporating the indispensable brain-body-environment coupling (Pezzulo et al. 2011). The advantage of the synthetic methodology, or 'understanding by building' (Pfeifer and Bongard 2007), is that one learns a lot in the process of building the robot and instantiating the behavior of interest. The theory one wants to test thus automatically becomes explicit, detailed, and complete. Robots become virtual experimental laboratories, retaining all the virtues of 'theories expressed as simulations' (Cangelosi and Parisi 2002) while bringing the additional advantage that there is no 'reality gap': there is real physics and real sensory stimulation, which lends more credibility to the analysis when embodiment is at center stage.
We are convinced that robots are the right tools to help us understand the embodied, embedded, and extended nature of cognition because their makeup- physical artifacts with sensors and actuators interacting with their environment- automatically provides the necessary ingredients. They seem particularly suited for investigating cognition from the bottom up (Pfeifer et al. 2014), where development under particular constraints in brain-body-environment coupling is crucial (e.g., Thelen and Smith 1994). It also becomes possible to simulate conditions that one could not test in humans or animals- think of the simulation of fetal ontogenesis while manipulating the distribution of tactile receptors (Mori and Kuniyoshi 2010). Furthermore, many additional variables (such as internal states of the robot) become easily accessible and lend themselves to quantitative analysis, for example using methods from information theory. Therefore, the combination of a robot with sensorimotor capacities akin to those of humans, the possibility of emulating the robot's growth and development, and the ease of access to all internal variables for rigorous quantitative investigation creates a very powerful tool to help us understand cognition.
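To illustrate the kind of quantitative analysis meant here, mutual information between two of the robot's internal signal streams can be estimated from discretized values with a naive plug-in estimator. This is a toy version; the studies cited in this chapter use considerably more careful estimators:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Naive plug-in estimate (in bits) of the mutual information between
    two discretized signal streams of equal length."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint occurrence counts
    px, py = Counter(xs), Counter(ys)   # marginal occurrence counts
    # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(
        (c / n) * math.log2(c * n / (px[x] * py[y]))
        for (x, y), c in pxy.items()
    )

# A sensor channel that simply copies a binary motor command shares
# exactly one bit of information with it:
motor = [0, 1, 0, 1, 0, 1, 0, 1]
sensor = list(motor)
print(mutual_information(motor, sensor))  # 1.0
```

Applied to pairs of sensor and motor channels, such estimates can reveal which channels carry information about which others- the kind of analysis behind work such as Lungarella and Sporns (2006) or Schmidt et al. (2013), though their methods are more sophisticated than this sketch.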
We want to close with some thoughts on whether it is possible to realize- next to embodied, embedded, and extended- enactive robots as well. Most researchers in embodied AI/cognitive robotics automatically adopt the perspective of extended functionalism (Clark 2008; Wheeler 2011), whereby the boundaries of cognitive systems can be extended beyond the agent's brain and even skin, including the body and environment. However, proponents of enactive cognitive science (Di Paolo 2010; Froese and Ziemke 2009) have pointed out that, in order to fully understand cognition, embedding the agent in a closed-loop sensorimotor interaction with the environment is necessary, yet may not be sufficient to induce important properties of biological agents such as intentional agency. In other words, one should not only study instances of individual closed sensorimotor loops as models of biological agents- that would be the recommendation of Webb (2009)- but one should also try to endow the models (robots in this case) with properties and constraints similar to those biological organisms are facing. In particular, it has been argued that life and cognition are tightly interconnected (Maturana 1980; Thompson 2007), and that a particular organization of living systems- characterized, for example, by autopoiesis (Maturana 1980) or metabolism- is crucial for the agent to truly acquire meaning in its interactions with the world. While these requirements are very hard to satisfy with the artificial systems of
today, Di Paolo (2010) proposes a way out: robots need not metabolize, but they should be subject to so-called precarious conditions. That is, the success of a particular instantiation of sensorimotor loops or neural vehicles in the agent is to be measured against some viability criterion that is intrinsic to the organization of the agent (e.g., loss of battery charge, or overheating leading to electronic board problems and hence to loss of mobility). The control structure may develop over time, but the viability constraint needs to be satisfied, otherwise the agent 'dies' (McFarland and Boesser 1993). In a similar vein, in order to move from embodied to enactive AI, Froese and Ziemke (2009) propose extending the design principles for autonomous agents of Pfeifer and Scheier (2001), requiring the agents to generate their own systemic identity and to regulate their sensorimotor interaction with the environment in relation to a viability constraint. The unfortunate implication, however, is that research along these lines will most likely not produce useful artifacts in the short term. On the other hand, this approach may eventually give rise to truly autonomous robots with unimaginable application potential.
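A minimal caricature of such a viability constraint- with battery charge standing in for metabolism, and all numbers invented for the sketch- might look like this:

```python
def run_agent(policy, steps=200, charge=1.0):
    """Run an agent whose intrinsic viability variable is its battery charge.
    Returns the number of steps survived (== steps if it stays viable)."""
    for t in range(steps):
        if policy(charge) == "forage":
            charge += 0.03   # net energy gain from visiting a charger
        else:
            charge -= 0.01   # resting still drains the battery
        charge = min(charge, 1.0)
        if charge <= 0.0:
            return t         # viability constraint violated: the agent 'dies'
    return steps

lazy = lambda charge: "rest"                                    # ignores viability
adaptive = lambda charge: "forage" if charge < 0.8 else "rest"  # regulates itself

print(run_agent(adaptive))    # 200 - survives the whole episode
print(run_agent(lazy) < 200)  # True - runs its battery down and 'dies'
```

The point is that the controller is not scored against an external task reward but against a condition intrinsic to the agent's own continued operation- a crude stand-in for the precariousness Di Paolo describes.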
## Acknowledgments
M.H. was supported by a Marie Curie Intra European Fellowship (iCub Body Schema 625727) within the 7th European Community Framework Programme and the Czech Science Foundation under Project GA17- 15697Y.
## References
- Beer, R.D. and Williams, P.L. (2015). Information processing and dynamics in minimally cognitive agents. Cognitive Science , 39, 1- 38.
- Bernhardt, J. (1987). Sensory capabilities of the fetus. MCN: The American Journal of Maternal/ Child Nursing , 12(1), 44- 7.
- Blickhan, R., Seyfarth, A., Geyer, H., Grimmer, S., Wagner, H., and Günther, M. et al. (2007). Intelligence by mechanics. Philosophical transactions. Series A , 365, 199- 220.
- Bongard, J., Zykov, V., and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science , 314, 1118- 21.
- Botvinick, M. and Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature , 391(6669), 756.
- Braitenberg, V. (1986). Vehicles: experiments in synthetic psychology . Cambridge, MA: MIT Press.
- Brooks, R. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation , 2(1), 14- 23.
- Brooks, R.A. (1989). A robot that walks: emergent behaviors from a carefully evolved network. Neural Computation , 1, 153- 62.
- Brooks, R.A. (1991a). Intelligence without reason. In: J. Myopoulos (ed.), Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (vol. 1). San Francisco, USA: Morgan Kaufmann, pp. 569-95.
- Brooks, R.A. (1991b). Intelligence without representation. Artificial Intelligence , 47, 139- 59.
- Bührmann, T., Di Paolo, E., and Barandiaran, X. (2013). A dynamical systems account of sensorimotor contingencies. Frontiers in Psychology , 4, 285.
- Cangelosi, A. and Parisi, D. (2002). Computer simulation: a new scientific approach to the study of language evolution. In: Simulating the evolution of language . London: Springer Science & Business Media, pp. 3- 28.
- Clark, A. (2008). Supersizing the mind: embodiment, action, and cognitive extension . New York: Oxford University Press.
- Clark, A. and Grush, R. (1999). Towards cognitive robotics. Adaptive Behavior , 7(1), 5- 16.
- Collins, S., Ruina, A., Tedrake, R., and Wisse, M. (2005). Efficient bipedal robots based on passive dynamic walkers. Science , 307, 1082- 5.
- Dennett, D. (1995). Darwin's dangerous idea . New York: Simon & Schuster.
- Di Paolo, E. (2010). Robotics inspired in the organism. Intellectica , 53- 54, 129- 62.
- Engel, A.K., Maye, A., Kurthen, M., and König, P. (2013). Where's the action? The pragmatic turn in cognitive science. Trends in Cognitive Sciences , 17(5), 202- 9.
- Floreano, D., Pericet- Camara, R., Viollet, S., Ruffier, F., Brückner, A., Leitel, R. et al. (2013). Miniature curved artificial compound eyes. Proceedings of the National Academy of Sciences , 110(23), 9267- 72.
- Fodor, J. (1975). The language of thought . Cambridge, MA: Harvard University Press.
- Franceschini, N., Pichon, J., and Blanes, C. (1992). From insect vision to robot vision. Philosophical transactions of the Royal Society of London. Series B , Biological sciences, 337, 283- 94.
- Froese, T. and Ziemke, T. (2009). Enactive artificial intelligence: investigating the systemic organization of life and mind. Artificial Intelligence , 173(3), 466- 500.
- Garofalo, M., Nieus, T., Massobrio, P., and Martinoia, S. (2009). Evaluation of the performance of information theory- based methods and cross- correlation to estimate the functional connectivity in cortical networks. PLoS ONE , 4(8), e6482.
- Graziano, M. and Botvinick, M. (2002). How the brain represents the body: insights from neurophysiology and psychology. In: W. Prinz and B. Hommel (eds.), Common mechanisms in perception and action: attention and performance . New York: Oxford University Press, pp. 136- 57.
- Graziano, M. and Cooke, D. (2006). Parieto- frontal interactions, personal space, and defensive behavior. Neuropsychologia , 44(6), 845- 59.
- Grush, R. (2004). The emulation theory of representation: motor control, imagery, and perception. Behavioral and Brain Sciences , 27, 377- 442.
- Haugeland, J. (1985). Artificial intelligence: the very idea . Cambridge, MA: MIT Press.
- Hoffmann, M., Marques, H., Arieta, A., Sumioka, H., Lungarella, M., and Pfeifer, R. (2010). Body schema in robotics: a review. IEEE Transactions on Autonomous Mental Development , 2(4), 304- 24.
- Hoffmann, M. and Pfeifer, R. (2011). The implications of embodiment for behavior and cognition: animal and robotic case studies. In: W. Tschacher and C. Bergomi (eds.), The implications of embodiment: cognition and communication . Exeter: Imprint Academic, pp. 31- 58.
- Hoffmann, M., Schmidt, N.M., Pfeifer, R., Engel, A.K., and Maye, A. (2012). Using sensorimotor contingencies for terrain discrimination and adaptive walking behavior in the quadruped robot Puppy . In: T. Ziemke, C. Balkenius, and J. Hallam (eds.), From animals to animats 12 . SAB 2012. Lecture Notes in Computer Science (vol. 7426). Berlin, Heidelberg: Springer, pp. 54- 64.
- Hoffmann, M., Stepanova, K., and Reinstein, M. (2014). The effect of motor action and different sensory modalities on terrain classification in a quadruped robot running with multiple gaits. Robotics and Autonomous Systems , 62(12), 1790- 8.
- Iriki, A., Tanaka, M., and Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport , 7 , 2325- 30.
- Jeannerod, M. (2001). Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage , 14, 103- 9.
- Koditschek, D.E., Full, R.J., and Buehler, M. (2004). Mechanical aspects of legged locomotion control. Arthropod Structure and Development , 33, 251- 72.
- Lichtensteiger, L. (2004). On the interdependence of morphology and control for intelligent behavior [PhD dissertation]. Zurich: University of Zurich.
- Lungarella, M. and Sporns, O. (2006). Mapping information flow in sensorimotor networks. PLoS Computational Biology , 2, 1301- 12.
- Maiolino, P., Maggiali, M., Cannata, G., Metta, G., and Natale, L. (2013). A flexible and robust large scale capacitive tactile system for robots. IEEE Sensors Journal , 13(10), 3910- 7.
- Maturana, H.R. and Varela, F.J. (1980). Autopoiesis and cognition: the realization of the living . Dordrecht: D. Reidel Publishing.
- Maye, A. and Engel, A.K. (2012). Time scales of sensorimotor contingencies. In: H. Zhang, A. Hussain, D. Liu, and Z. Wang (eds.), Advances in brain inspired cognitive systems . BICS 2012. Lecture Notes in Computer Science (vol. 7366). Berlin, Heidelberg: Springer, 240- 9.
- McGeer, T. (1990). Passive dynamic walking. The International Journal of Robotics Research , 9(2), 62- 82.
- Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L. et al. (2010). The iCub humanoid robot: an open- systems platform for research in cognitive development. Neural Networks , 23(8- 9), 1125- 34.
- Mori, H., Akutsu, D., and Asada, M. (2015). Fetusoid35: a robot research platform for neural development of both fetuses and preterm infants and for developmental care. In: A. Duff, N.F. Lepora, A. Mura, T.J. Prescott, and P.F.M.J. Verschure (eds.), Biomimetic and biohybrid systems . Living Machines 2014. Lecture Notes in Computer Science (vol. 8608). New York: Springer International Publishing, pp. 411- 13.
- Mori, H. and Kuniyoshi, Y. (2010). A human fetus development simulation: self- organization of behaviors through tactile sensation. In: 2010 IEEE 9th International Conference on Development and Learning . doi:10.1109/ DEVLRN.2010.5578860
- Nakanishi, Y., Asano, Y., Kozuki, T., Mizoguchi, H., Motegi, Y., Osada, M. et al. (2012). Design concept of detail musculoskeletal humanoid 'Kenshiro'- toward a real human body musculoskeletal simulator. In: 2012 12th IEEE- RAS International Conference on Humanoid Robots (Humanoids) . doi:10.1109/HUMANOIDS.2012.6651491
- Olsson, L., Nehaniv, C.L., and Polani, D. (2004). Sensory channel grouping and structure from uninterpreted sensory data. In: Proceedings. 2004 NASA/ DoD Conference on Evolvable Hardware , 2004 . doi:10.1109/ EH.2004.1310825
- O'Regan, J.K. and Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences , 24, 939- 1031.
- Pezzulo, G., Barsalou, L.W., Cangelosi, A., Fischer, M.H., McRae, K., and Spivey, M.J. (2011). The mechanics of embodiment: a dialog on embodiment and computational modeling. Frontiers in Psychology , 2, 5.
- Pfeifer, R. and Bongard, J.C. (2007). How the body shapes the way we think: a new view of intelligence . Cambridge, MA: MIT Press.
- Pfeifer, R., Iida, F., and Lungarella, M. (2014). Cognition from the bottom up: on biological inspiration, body morphology, and soft materials. Trends in Cognitive Sciences , 18(8), 404- 13.
- Pfeifer, R. and Scheier, C. (2001). Understanding intelligence . Cambridge, MA: MIT Press.
- Pylyshyn, Z. (1984). Computation and cognition: toward a foundation for cognitive science . Cambridge, MA: MIT Press.
- Quiroga, R.Q., and Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience , 10(3), 173- 85.
- Richter, C., Jentzsch, S., Hostettler, R., Garrido, J.A., Ros, E., Knoll, A. et al. (2016). Musculoskeletal robots: scalability in neural control. IEEE Robotics & Automation Magazine, 23(4), 128-37. doi:10.1109/ MRA.2016.2535081
- Rochat, P. (1998). Self- perception and action in infancy. Experimental Brain Research , 123, 102- 9.
- Roncone, A., Hoffmann, M., Pattacini, U., Fadiga, L., and Metta, G. (2016). Peripersonal space and margin of safety around the body: learning tactile- visual associations in a humanoid robot with artificial skin. PLoS ONE , 11(10), e0163713.
- Roncone, A., Hoffmann, M., Pattacini, U., and Metta, G. (2014). Automatic kinematic chain calibration using artificial skin: self- touch in the iCub humanoid robot. In: 2014 IEEE International Conference on Robotics and Automation (ICRA) . doi:10.1109/ ICRA.2014.6907178
- Rosenthal- von der Pütten, A.M., Marieke, A., and Weiss, A. (2014). The uncanny in the wild: analysis of unscripted human- android interaction in the field. International Journal of Social Robotics , 6(1), 67- 83.
- Saranli, U., Buehler, M., and Koditschek, D. (2001). RHex: a simple and highly mobile hexapod robot. The International Journal of Robotics Research , 20, 616- 31.
- Schatz, T. and Oudeyer, P.Y. (2009). Learning motor dependent Crutchfield's information distance to anticipate changes in the topology of sensory body maps. 2009 IEEE 8th International Conference on Development and Learning . doi:10.1109/DEVLRN.2009.5175526
- Schmidt, N., Hoffmann, M., Nakajima, K., and Pfeifer, R. (2013). Bootstrapping perception using information theory: case studies in a quadruped robot running on different grounds. Advances in Complex Systems , 16(2- 3), 1250078.
- Song, Y.M. et al. (2013). Digital cameras with designs inspired by the arthropod eye. Nature , 497(7447), 95- 9.
- Thelen, E. and Smith, L. (1994). A dynamic systems approach to the development of cognition and action . Cambridge, MA: MIT Press.
- Thompson, E. (2007). Mind in life: biology, phenomenology, and the sciences of mind . Cambridge, MA: MIT Press.
- Walter, G.W. (1953). The living brain . New York: Norton & Co.
- Webb, B. (2004). Neural mechanisms for prediction: do insects have forward models? Trends in Neurosciences , 27(5), 278- 82.
- Webb, B. (2009). Animals versus animats: or why not model the real iguana? Adaptive Behavior , 17, 269- 86.
- Wheeler, M. (2011). Embodied cognition and the extended mind. In: J. Garvey (ed.), The Continuum companion to philosophy of mind . London: Continuum, pp. 220- 36.
- Yamada, Y., Kanazawa, H., Iwasaki, S., Tsukahara, Y., Iwata, O., Yamada, S. et al. (2016). An embodied brain model of the human foetus. Scientific Reports, 6. doi:10.1038/srep27893