## NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING
Barret Zoph∗, Quoc V. Le
Google Brain
{barretzoph, qvl}@google.com

∗ Work done as a member of the Google Brain Residency program (g.co/brainresidency).
## ABSTRACT
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
## 1 INTRODUCTION
The last few years have seen much success of deep neural networks in many challenging applications, such as speech recognition (Hinton et al., 2012), image recognition (LeCun et al., 1998; Krizhevsky et al., 2012) and machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016). Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a). Although it has become easier, designing architectures still requires a lot of expert knowledge and takes ample time.
Figure 1: An overview of Neural Architecture Search.
(The figure shows the feedback loop: the controller RNN samples an architecture A with probability p; a child network with architecture A is trained to obtain accuracy R; the gradient of p is then scaled by R and used to update the controller.)
This paper presents Neural Architecture Search, a gradient-based method for finding good architectures (see Figure 1). Our work is based on the observation that the structure and connectivity of a neural network can typically be specified by a variable-length string. It is therefore possible to use a recurrent network - the controller - to generate such a string. Training the network specified by the string - the 'child network' - on the real data will result in an accuracy on a validation set. Using this accuracy as the reward signal, we can compute the policy gradient to update the controller. As a result, in the next iteration, the controller will give higher probabilities to architectures that receive high accuracies. In other words, the controller will learn to improve its search over time.
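To make this loop concrete, here is a minimal Python sketch; `sample_architecture` and `build_and_train_child` are hypothetical stand-ins for the controller RNN and the child-training pipeline, not the implementation used in this paper.

```python
import random

# Hypothetical per-layer token choices (compare Section 3.1).
LAYER_CHOICES = {
    "filter_height": [1, 3, 5, 7],
    "filter_width": [1, 3, 5, 7],
    "num_filters": [24, 36, 48, 64],
}

def sample_architecture(num_layers):
    """Stand-in for the controller RNN: emit one token per hyperparameter."""
    return [{name: random.choice(values) for name, values in LAYER_CHOICES.items()}
            for _ in range(num_layers)]

def build_and_train_child(architecture):
    """Stand-in for building and training the child network; returns validation accuracy."""
    return random.random()

for step in range(10):
    arch = sample_architecture(num_layers=6)  # controller proposes an architecture
    reward = build_and_train_child(arch)      # validation accuracy R is the reward
    # In the actual method, R scales the policy gradient used to update the
    # controller (Section 3.2); this stub only logs it.
    print(f"step {step}: reward={reward:.3f}")
```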
Our experiments show that Neural Architecture Search can design good models from scratch, an achievement considered not possible with other methods. On image recognition with CIFAR-10, Neural Architecture Search can find a novel ConvNet model that is better than most human-invented architectures. Our CIFAR-10 model achieves a 3.65% test set error rate, while being 1.05x faster than the current best model. On language modeling with Penn Treebank, Neural Architecture Search can design a novel recurrent cell that is also better than previous RNN and LSTM architectures. The cell that our model found achieves a test set perplexity of 62.4 on the Penn Treebank dataset, which is 3.6 perplexity better than the previous state-of-the-art.
## 2 RELATED WORK
Hyperparameter optimization is an important research topic in machine learning, and is widely used in practice (Bergstra et al., 2011; Bergstra & Bengio, 2012; Snoek et al., 2012; 2015; Saxena & Verbeek, 2016). Despite their success, these methods are still limited in that they only search models from a fixed-length space. In other words, it is difficult to ask them to generate a variable-length configuration that specifies the structure and connectivity of a network. In practice, these methods often work better if they are supplied with a good initial model (Bergstra & Bengio, 2012; Snoek et al., 2012; 2015). There are Bayesian optimization methods that allow searching non-fixed-length architectures (Bergstra et al., 2013; Mendoza et al., 2016), but they are less general and less flexible than the method proposed in this paper.
Modern neuro-evolution algorithms, e.g., Wierstra et al. (2005); Floreano et al. (2008); Stanley et al. (2009), on the other hand, are much more flexible for composing novel models, yet they are usually less practical at a large scale. Their limitations lie in the fact that they are search-based methods, thus they are slow or require many heuristics to work well.
Neural Architecture Search has some parallels to program synthesis and inductive programming, the idea of searching a program from examples (Summers, 1977; Biermann, 1978). In machine learning, probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A (Liang et al., 2010; Neelakantan et al., 2015; Andreas et al., 2016), sort a list of numbers (Reed & de Freitas, 2015), and learning with very few examples (Lake et al., 2015).
The controller in Neural Architecture Search is auto-regressive, which means it predicts hyperparameters one at a time, conditioned on previous predictions. This idea is borrowed from the decoder in end-to-end sequence to sequence learning (Sutskever et al., 2014). Unlike sequence to sequence learning, our method optimizes a non-differentiable metric, which is the accuracy of the child network. It is therefore similar to the work on BLEU optimization in Neural Machine Translation (Ranzato et al., 2015; Shen et al., 2016). Unlike these approaches, our method learns directly from the reward signal without any supervised bootstrapping.
Also related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt, 2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another network (Andrychowicz et al., 2016) and the idea of using reinforcement learning to find update policies for another network (Li & Malik, 2016).
## 3 METHODS
In the following section, we will first describe a simple method of using a recurrent network to generate convolutional architectures. We will show how the recurrent network can be trained with a policy gradient method to maximize the expected accuracy of the sampled architectures. We will present several improvements of our core approach such as forming skip connections to increase model complexity and using a parameter server approach to speed up training. In the last part of
the section, we will focus on generating recurrent architectures, which is another key contribution of our paper.
## 3.1 GENERATE MODEL DESCRIPTIONS WITH A CONTROLLER RECURRENT NEURAL NETWORK
In Neural Architecture Search, we use a controller to generate architectural hyperparameters of neural networks. To be flexible, the controller is implemented as a recurrent neural network. Suppose we would like to predict feedforward neural networks with only convolutional layers; we can then use the controller to generate their hyperparameters as a sequence of tokens:
Figure 2: How our controller recurrent neural network samples a simple convolutional network. It predicts filter height, filter width, stride height, stride width, and number of filters for one layer and repeats. Every prediction is carried out by a softmax classifier and then fed into the next time step as input.
In our experiments, the process of generating an architecture stops if the number of layers exceeds a certain value. This value follows a schedule where we increase it as training progresses. Once the controller RNN finishes generating an architecture, a neural network with this architecture is built and trained. At convergence, the accuracy of the network on a held-out validation set is recorded. The parameters of the controller RNN, $\theta_c$, are then optimized in order to maximize the expected validation accuracy of the proposed architectures. In the next section, we will describe a policy gradient method which we use to update the parameters $\theta_c$ so that the controller RNN generates better architectures over time.
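As a rough illustration of this sampling process (not the authors' code), the NumPy sketch below replaces the controller LSTM with a plain RNN and made-up weight shapes; each softmax prediction is embedded and fed back as the next input, as in Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, EMBED = 35, 16  # hidden size follows the paper; embedding size is assumed
CHOICES = [("filter_height", [1, 3, 5, 7]),
           ("filter_width", [1, 3, 5, 7]),
           ("stride_height", [1, 2, 3]),
           ("stride_width", [1, 2, 3]),
           ("num_filters", [24, 36, 48, 64])]

W_h = rng.normal(0, 0.08, (HIDDEN, HIDDEN))
W_x = rng.normal(0, 0.08, (HIDDEN, EMBED))
softmax_heads = {name: rng.normal(0, 0.08, (len(vals), HIDDEN)) for name, vals in CHOICES}
embeddings = {name: rng.normal(0, 0.08, (len(vals), EMBED)) for name, vals in CHOICES}

def sample_layer(h, x):
    """Sample the five tokens that describe one convolutional layer."""
    layer = {}
    for name, vals in CHOICES:
        h = np.tanh(W_h @ h + W_x @ x)                 # simple RNN step (LSTM in the paper)
        logits = softmax_heads[name] @ h
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                           # softmax classifier
        idx = rng.choice(len(vals), p=probs)
        layer[name] = vals[idx]
        x = embeddings[name][idx]                      # prediction becomes the next input
    return layer, h, x

h, x = np.zeros(HIDDEN), np.zeros(EMBED)
architecture = []
for _ in range(6):                                     # e.g. a 6-layer child network
    layer, h, x = sample_layer(h, x)
    architecture.append(layer)
print(architecture[0])
```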
## 3.2 TRAINING WITH REINFORCE
The list of tokens that the controller predicts can be viewed as a list of actions $a_{1:T}$ to design an architecture for a child network. At convergence, this child network will achieve an accuracy $R$ on a held-out dataset. We can use this accuracy $R$ as the reward signal and use reinforcement learning to train the controller. More concretely, to find the optimal architecture, we ask our controller to maximize its expected reward, represented by $J(\theta_c)$:
$$J(\theta_c) = E_{P(a_{1:T}; \theta_c)}[R]$$
Since the reward signal R is non-differentiable, we need to use a policy gradient method to iteratively update θ c . In this work, we use the REINFORCE rule from Williams (1992):
$$\nabla_{\theta_c} J(\theta_c) = \sum_{t=1}^{T} E_{P(a_{1:T}; \theta_c)}\left[\nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\, R\right]$$
An empirical approximation of the above quantity is:
$$\frac{1}{m}\sum_{k=1}^{m}\sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\, R_k$$
where $m$ is the number of different architectures that the controller samples in one batch and $T$ is the number of hyperparameters our controller has to predict to design a neural network architecture. The validation accuracy that the $k$-th neural network architecture achieves after being trained on a training dataset is $R_k$.
The above update is an unbiased estimate for our gradient, but has a very high variance. In order to reduce the variance of this estimate we employ a baseline function:
$$\frac{1}{m}\sum_{k=1}^{m}\sum_{t=1}^{T} \nabla_{\theta_c} \log P(a_t \mid a_{(t-1):1}; \theta_c)\,(R_k - b)$$
As long as the baseline function $b$ does not depend on the current action, then this is still an unbiased gradient estimate. In this work, our baseline $b$ is an exponential moving average of the previous architecture accuracies.
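For illustration only, the toy sketch below applies the same baselined REINFORCE update to a single softmax policy over four actions; the reward function and all dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
num_actions, m = 4, 8              # m sampled "architectures" per controller batch
theta = np.zeros(num_actions)      # controller parameters (logits of one softmax)
baseline = 0.0                     # exponential moving average of past rewards

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    probs = softmax(theta)
    actions = rng.choice(num_actions, size=m, p=probs)
    # Toy reward: pretend action 2 corresponds to the best-performing architecture.
    rewards = np.where(actions == 2, 0.9, 0.3) + rng.normal(0, 0.02, size=m)

    grad = np.zeros_like(theta)
    for a, R in zip(actions, rewards):
        grad_log_p = -probs.copy()
        grad_log_p[a] += 1.0           # d/dtheta of log softmax(theta)[a]
        grad += grad_log_p * (R - baseline)
    theta += 0.1 * grad / m            # gradient ascent on J(theta)
    baseline = 0.95 * baseline + 0.05 * rewards.mean()

print(softmax(theta))                  # probability mass shifts toward action 2
```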
Accelerate Training with Parallelism and Asynchronous Updates: In Neural Architecture Search, each gradient update to the controller parameters $\theta_c$ corresponds to training one child network to convergence. As training a child network can take hours, we use distributed training and asynchronous parameter updates in order to speed up the learning process of the controller (Dean et al., 2012). We use a parameter-server scheme where a parameter server of $S$ shards stores the shared parameters for $K$ controller replicas. Each controller replica samples $m$ different child architectures that are trained in parallel. The controller then collects gradients according to the results of that minibatch of $m$ architectures at convergence and sends them to the parameter server in order to update the weights across all controller replicas. In our implementation, convergence of each child network is reached when its training exceeds a certain number of epochs. This scheme of parallelism is summarized in Figure 3.
Figure 3: Distributed training for Neural Architecture Search. We use a set of $S$ parameter servers to store and send parameters to $K$ controller replicas. Each controller replica then samples $m$ architectures and runs the $m$ child models in parallel. The accuracy of each child model is recorded to compute the gradients with respect to $\theta_c$, which are then sent back to the parameter servers.
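As a rough illustration (not the actual distributed TensorFlow setup), the thread-based sketch below mimics K controller replicas each training m child models in parallel and pushing an aggregated update to shared parameters; the shared dictionary and the placeholder "gradient" are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import threading

shared_params = {"theta": 0.0}   # stands in for the S parameter-server shards
lock = threading.Lock()
K, m = 4, 3                      # controller replicas, child models per replica

def train_child(arch_id):
    """Placeholder child training; returns a fake validation accuracy."""
    return random.random()

def controller_replica(replica_id):
    child_ids = [f"arch-{replica_id}-{i}" for i in range(m)]
    with ThreadPoolExecutor(max_workers=m) as pool:
        accuracies = list(pool.map(train_child, child_ids))  # children run in parallel
    update = sum(accuracies) / m     # stand-in for the REINFORCE gradient of this batch
    with lock:                       # asynchronous push to the shared parameters
        shared_params["theta"] += 0.01 * update

with ThreadPoolExecutor(max_workers=K) as pool:
    list(pool.map(controller_replica, range(K)))

print(shared_params)
```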
## 3.3 INCREASE ARCHITECTURE COMPLEXITY WITH SKIP CONNECTIONS AND OTHER LAYER TYPES
In Section 3.1, the search space does not have skip connections, or branching layers used in modern architectures such as GoogleNet (Szegedy et al., 2015), and Residual Net (He et al., 2016a). In this section we introduce a method that allows our controller to propose skip connections or branching layers, thereby widening the search space.
To enable the controller to predict such connections, we use a set-selection type attention (Neelakantan et al., 2015) which was built upon the attention mechanism (Bahdanau et al., 2015; Vinyals et al., 2015). At layer $N$, we add an anchor point which has $N-1$ content-based sigmoids to indicate the previous layers that need to be connected. Each sigmoid is a function of the current hidden state of the controller and the previous hidden states of the $N-1$ previous anchor points:
$$P(\text{Layer } j \text{ is an input to layer } i) = \mathrm{sigmoid}\big(v^{T} \tanh(W_{prev} * h_j + W_{curr} * h_i)\big),$$
where $h_j$ represents the hidden state of the controller at the anchor point for the $j$-th layer, where $j$ ranges from 0 to $N-1$. We then sample from these sigmoids to decide which previous layers are used as inputs to the current layer. The matrices $W_{prev}$, $W_{curr}$ and $v$ are trainable parameters. As these connections are also defined by probability distributions, the REINFORCE method still applies without any significant modifications. Figure 4 shows how the controller uses skip connections to decide which layers it wants as inputs to the current layer.
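A toy NumPy sketch of this set-selection mechanism follows; the hidden size, random weights, and helper names are assumptions for illustration rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 35
W_prev = rng.normal(0, 0.08, (HIDDEN, HIDDEN))
W_curr = rng.normal(0, 0.08, (HIDDEN, HIDDEN))
v = rng.normal(0, 0.08, HIDDEN)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_skip_connections(prev_anchor_states, h_current):
    """Return the indices j of previous layers sampled as inputs to the current layer."""
    connections = []
    for j, h_j in enumerate(prev_anchor_states):
        p = sigmoid(v @ np.tanh(W_prev @ h_j + W_curr @ h_current))
        if rng.random() < p:           # each connection is sampled independently
            connections.append(j)
    return connections

anchors = [rng.normal(size=HIDDEN) for _ in range(4)]   # anchor states of layers 0..3
print(sample_skip_connections(anchors, rng.normal(size=HIDDEN)))
```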
Figure 4: The controller uses anchor points, and set-selection attention to form skip connections.
(In the figure, each layer's prediction block contains an anchor point in addition to the filter and stride predictions; dashed arrows from earlier anchor points indicate the sampled skip connections.)
In our framework, if one layer has many input layers then all input layers are concatenated in the depth dimension. Skip connections can cause 'compilation failures' where one layer is not compatible with another layer, or one layer may not have any input or output. To circumvent these issues, we employ three simple techniques. First, if a layer is not connected to any input layer then the image is used as the input layer. Second, at the final layer we take all layer outputs that have not been connected and concatenate them before sending this final hidden state to the classifier. Lastly, if the input layers to be concatenated have different sizes, we pad the smaller layers with zeros so that the concatenated layers have the same sizes.
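The sketch below illustrates this concatenation rule under simple assumptions (channel-last feature maps, zero padding at the bottom and right); it is an illustration, not the actual implementation.

```python
import numpy as np

def concat_inputs(feature_maps):
    """Concatenate feature maps of shape (height, width, channels) along depth."""
    max_h = max(f.shape[0] for f in feature_maps)
    max_w = max(f.shape[1] for f in feature_maps)
    padded = []
    for f in feature_maps:
        pad_h, pad_w = max_h - f.shape[0], max_w - f.shape[1]
        # Zero-pad the smaller maps so every input has the same spatial size.
        padded.append(np.pad(f, ((0, pad_h), (0, pad_w), (0, 0))))
    return np.concatenate(padded, axis=-1)   # concatenate in the depth dimension

a = np.ones((32, 32, 24))
b = np.ones((16, 16, 36))
print(concat_inputs([a, b]).shape)            # (32, 32, 60)
```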
Finally, in Section 3.1, we do not predict the learning rate and we also assume that the architectures consist of only convolutional layers, which is also quite restrictive. It is possible to add the learning rate as one of the predictions. Additionally, it is also possible to predict pooling, local contrast normalization (Jarrett et al., 2009; Krizhevsky et al., 2012), and batchnorm (Ioffe & Szegedy, 2015) in the architectures. To be able to add more types of layers, we need to add an additional step in the controller RNN to predict the layer type, then other hyperparameters associated with it.
## 3.4 GENERATE RECURRENT CELL ARCHITECTURES
In this section, we will modify the above method to generate recurrent cells. At every time step $t$, the controller needs to find a functional form for $h_t$ that takes $x_t$ and $h_{t-1}$ as inputs. The simplest way is to have $h_t = \tanh(W_1 * x_t + W_2 * h_{t-1})$, which is the formulation of a basic recurrent cell. A more complicated formulation is the widely-used LSTM recurrent cell (Hochreiter & Schmidhuber, 1997).
The computations for basic RNN and LSTM cells can be generalized as a tree of steps that take $x_t$ and $h_{t-1}$ as inputs and produce $h_t$ as the final output. The controller RNN needs to label each node in the tree with a combination method (addition, elementwise multiplication, etc.) and an activation function (tanh, sigmoid, etc.) to merge two inputs and produce one output. Two outputs are then fed as inputs to the next node in the tree. To allow the controller RNN to select these methods and functions, we index the nodes in the tree in an order so that the controller RNN can visit each node one by one and label the needed hyperparameters.
Inspired by the construction of the LSTM cell (Hochreiter & Schmidhuber, 1997), we also need cell variables $c_{t-1}$ and $c_t$ to represent the memory states. To incorporate these variables, we need the controller RNN to predict which nodes in the tree to connect these two variables to. These predictions can be done in the last two blocks of the controller RNN.
To make this process more clear, we show an example in Figure 5, for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by 0 and 1, and the internal node is indexed by 2. The controller RNN needs to first predict 3 blocks, each block specifying a combination method and an activation function for each tree index. After that it needs to predict the last 2 blocks that specify how to connect $c_t$ and $c_{t-1}$ to temporary variables inside the tree. Specifically,
Figure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2) and one internal node. Left: the tree that defines the computation steps to be predicted by controller. Center: an example set of predictions made by the controller for each computation step in the tree. Right: the computation graph of the recurrent cell constructed from example predictions of the controller.
according to the predictions of the controller RNN in this example, the following computation steps will occur:
- The controller predicts Add and Tanh for tree index 0; this means we need to compute $a_0 = \tanh(W_1 * x_t + W_2 * h_{t-1})$.
- The controller predicts ElemMult and ReLU for tree index 1; this means we need to compute $a_1 = \mathrm{ReLU}\big((W_3 * x_t) \odot (W_4 * h_{t-1})\big)$.
- The controller predicts 0 for the second element of the 'Cell Index' and Add and ReLU for the elements in 'Cell Inject', which means we need to compute $a_0^{new} = \mathrm{ReLU}(a_0 + c_{t-1})$. Notice that we don't have any learnable parameters for the internal nodes of the tree.
- The controller predicts ElemMult and Sigmoid for tree index 2; this means we need to compute $a_2 = \mathrm{sigmoid}(a_0^{new} \odot a_1)$. Since the maximum index in the tree is 2, $h_t$ is set to $a_2$.
- The controller RNN predicts 1 for the first element of the 'Cell Index'; this means that we should set $c_t$ to the output of the tree at index 1 before the activation, i.e., $c_t = (W_3 * x_t) \odot (W_4 * h_{t-1})$.
In the above example, the tree has two leaf nodes, thus it is called a 'base 2' architecture. In our experiments, we use a base number of 8 to make sure that the cell is expressive.
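As an illustration, the NumPy snippet below traces the base-2 example above with random weights and a toy hidden size; the variable names mirror the text and are not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                               # toy hidden size
W1, W2, W3, W4 = (rng.normal(0, 0.1, (D, D)) for _ in range(4))
x_t, h_prev, c_prev = (rng.normal(size=D) for _ in range(3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a0 = np.tanh(W1 @ x_t + W2 @ h_prev)                # tree index 0: Add, Tanh
pre_act_1 = (W3 @ x_t) * (W4 @ h_prev)              # tree index 1 before its activation
a1 = np.maximum(pre_act_1, 0.0)                     # tree index 1: ElemMult, ReLU
a0_new = np.maximum(a0 + c_prev, 0.0)               # cell inject: Add, ReLU at index 0
h_t = sigmoid(a0_new * a1)                          # tree index 2: ElemMult, Sigmoid
c_t = pre_act_1                                     # cell index 1: output before activation
print(h_t, c_t)
```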
## 4 EXPERIMENTS AND RESULTS
We apply our method to an image classification task with CIFAR-10 and a language modeling task with Penn Treebank, two of the most benchmarked datasets in deep learning. On CIFAR-10, our goal is to find a good convolutional architecture whereas on Penn Treebank our goal is to find a good recurrent cell. On each dataset, we have a separate held-out validation dataset to compute the reward signal. The reported performance on the test set is computed only once for the network that achieves the best result on the held-out validation dataset. More details about our experimental procedures and results are as follows.
## 4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-10
Dataset: In these experiments we use the CIFAR-10 dataset with data preprocessing and augmentation procedures that are in line with other previous results. We first preprocess the data by whitening all the images. Additionally, we upsample each image then choose a random 32x32 crop of this upsampled image. Finally, we use random horizontal flips on this 32x32 cropped image.
Search space: Our search space consists of convolutional architectures, with rectified linear units as non-linearities (Nair & Hinton, 2010), batch normalization (Ioffe & Szegedy, 2015) and skip connections between layers (Section 3.3). For every convolutional layer, the controller RNN has to select a filter height in [1, 3, 5, 7], a filter width in [1, 3, 5, 7], and a number of filters in [24, 36, 48,
64]. For strides, we perform two sets of experiments, one where we fix the strides to be 1, and one where we allow the controller to predict the strides in [1, 2, 3].
Training details: The controller RNN is a two-layer LSTM with 35 hidden units on each layer. It is trained with the ADAM optimizer (Kingma & Ba, 2015) with a learning rate of 0.0006. The weights of the controller are initialized uniformly between -0.08 and 0.08. For the distributed training, we set the number of parameter server shards S to 20, the number of controller replicas K to 100 and the number of child replicas m to 8, which means there are 800 networks being trained on 800 GPUs concurrently at any time.
Once the controller RNN samples an architecture, a child model is constructed and trained for 50 epochs. The reward used for updating the controller is the maximum validation accuracy of the last 5 epochs cubed. The validation set has 5,000 examples randomly sampled from the training set; the remaining 45,000 examples are used for training. The settings for training the CIFAR-10 child models are the same as those used in Huang et al. (2016a). We use the Momentum Optimizer with a learning rate of 0.1, weight decay of 1e-4, momentum of 0.9 and use Nesterov Momentum (Sutskever et al., 2013).
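For clarity, the reward described above could be computed as in the short sketch below (the accuracy values are illustrative):

```python
# Reward for the controller update: the maximum validation accuracy over the
# last 5 epochs of child training, cubed.
val_accuracies = [0.82, 0.85, 0.88, 0.87, 0.89, 0.90, 0.895]  # illustrative values
reward = max(val_accuracies[-5:]) ** 3
print(round(reward, 4))
```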
During the training of the controller, we use a schedule of increasing number of layers in the child networks as training progresses. On CIFAR-10, we ask the controller to increase the depth by 2 for the child models every 1,600 samples, starting at 6 layers.
Results: After the controller trains 12,800 architectures, we find the architecture that achieves the best validation accuracy. We then run a small grid search over learning rate, weight decay, batchnorm epsilon and what epoch to decay the learning rate. The best model from this grid search is then run until convergence, and we then compute the test accuracy of that model and summarize the results in Table 1. As can be seen from the table, Neural Architecture Search can design several promising architectures that perform as well as some of the best models on this dataset.
| Model | Depth | Parameters | Error rate (%) |
|-----------------------------------------------------|----------|--------------|------------------|
| Network in Network (Lin et al., 2013) | - | - | 8.81 |
| All-CNN (Springenberg et al., 2014) | - | - | 7.25 |
| Deeply Supervised Net (Lee et al., 2015) | - | - | 7.97 |
| Highway Network (Srivastava et al., 2015) | - | - | 7.72 |
| Scalable Bayesian Optimization (Snoek et al., 2015) | - | - | 6.37 |
| FractalNet (Larsson et al., 2016) | 21 | 38.6M | 5.22 |
| with Dropout/Drop-path | 21 | 38.6M | 4.60 |
| ResNet (He et al., 2016a) | 110 | 1.7M | 6.61 |
| ResNet (reported by Huang et al. (2016c)) | 110 | 1.7M | 6.41 |
| ResNet with Stochastic Depth (Huang et al., 2016c) | 110 | 1.7M | 5.23 |
| Wide ResNet (Zagoruyko & Komodakis, 2016) | 16 | 11.0M | 4.81 |
| | 28 | 36.5M | 4.17 |
| ResNet (pre-activation) (He et al., 2016b) | 164 | 1.7M | 5.46 |
| | 1001 | 10.2M | 4.62 |
| DenseNet ( L = 40 , k = 12) Huang et al. (2016a) | 40 | 1.0M | 5.24 |
| DenseNet ( L = 100 , k = 12) Huang et al. (2016a) | 100 | 7.0M | 4.10 |
| DenseNet ( L = 100 , k = 24) Huang et al. (2016a) | 100 | 27.2M | 3.74 |
| DenseNet-BC ( L = 100 ,k = 40) Huang et al. (2016b) | 190 | 25.6M | 3.46 |
| Neural Architecture Search v1 no stride or pooling | 15 | 4.2M | 5.50 |
| Neural Architecture Search v2 predicting strides | 20 | 2.5M | 6.01 |
| Neural Architecture Search v3 max pooling | 39 | 7.1M | 4.47 |
| Neural Architecture Search v3 max pooling + more filters | 39 | 37.4M | 3.65 |
Table 1: Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10.
First, if we ask the controller to not predict stride or pooling, it can design a 15-layer architecture that achieves 5.50% error rate on the test set. This architecture has a good balance between accuracy and depth. In fact, it is the shallowest and perhaps the most inexpensive architecture among the top performing networks in this table. This architecture is shown in Appendix A, Figure 7. A notable feature of this architecture is that it has many rectangular filters and it prefers larger filters at the top layers. Like residual networks (He et al., 2016a), the architecture also has many one-step skip connections. This architecture is a local optimum in the sense that if we perturb it, its performance becomes worse. For example, if we densely connect all layers with skip connections, its performance becomes slightly worse: 5.56%. If we remove all skip connections, its performance drops to 7.97%.
In the second set of experiments, we ask the controller to predict strides in addition to other hyperparameters. As stated earlier, this is more challenging because the search space is larger. In this case, it finds a 20-layer architecture that achieves 6.01% error rate on the test set, which is not much worse than the first set of experiments.
Finally, if we allow the controller to include 2 pooling layers at layer 13 and layer 24 of the architectures, the controller can design a 39-layer network that achieves 4.47% which is very close to the best human-invented architecture that achieves 3.74%. To limit the search space complexity we have our model predict 13 layers where each layer prediction is a fully connected block of 3 layers. Additionally, we change the number of filters our model can predict from [24, 36, 48, 64] to [6, 12, 24, 36]. Our result can be improved to 3.65% by adding 40 more filters to each layer of our architecture. Additionally this model with 40 filters added is 1.05x as fast as the DenseNet model that achieves 3.74%, while having better performance. The DenseNet model that achieves 3.46% error rate (Huang et al., 2016b) uses 1x1 convolutions to reduce its total number of parameters, which we did not do, so it is not an exact comparison.
## 4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK
Dataset: We apply Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. On this task, LSTM architectures tend to excel (Zaremba et al., 2014; Gal, 2015), and improving them is difficult (Jozefowicz et al., 2015). As PTB is a small dataset, regularization methods are needed to avoid overfitting. First, we make use of the embedding dropout and recurrent dropout techniques proposed in Zaremba et al. (2014) and Gal (2015). We also try to combine them with the method of sharing Input and Output embeddings, e.g., Bengio et al. (2003); Mnih & Hinton (2007), especially Inan et al. (2016) and Press & Wolf (2016). Results with this method are marked with 'shared embeddings.'
Search space: Following Section 3.4, our controller sequentially predicts a combination method then an activation function for each node in the tree. For each node in the tree, the controller RNN needs to select a combination method in [ add, elem mult ] and an activation method in [ identity, tanh, sigmoid, relu ]. The number of input pairs to the RNN cell is called the 'base number' and set to 8 in our experiments. When the base number is 8, the search space has approximately $6 \times 10^{16}$ architectures, which is much larger than 15,000, the number of architectures that we allow our controller to evaluate.
Training details: The controller and its training are almost identical to the CIFAR-10 experiments except for a few modifications: 1) the learning rate for the controller RNN is 0.0005, slightly smaller than that of the controller RNN in CIFAR-10, 2) in the distributed training, we set S to 20, K to 400 and m to 1, which means there are 400 networks being trained on 400 CPUs concurrently at any time, 3) during asynchronous training we only do parameter updates to the parameter-server once 10 gradients from replicas have been accumulated.
In our experiments, every child model is constructed and trained for 35 epochs. Every child model has two layers, with the number of hidden units adjusted so that the total number of learnable parameters approximately matches the 'medium' baselines (Zaremba et al., 2014; Gal, 2015). In these experiments we only have the controller predict the RNN cell structure and fix all other hyperparameters. The reward function is $\frac{c}{(\text{validation perplexity})^{2}}$, where $c$ is a constant, usually set at 80.
After the controller RNN is done training, we take the best RNN cell according to the lowest validation perplexity and then run a grid search over learning rate, weight initialization, dropout rates
and decay epoch. The best cell found was then run with three different configurations and sizes to increase its capacity.
Results: In Table 2, we provide a comprehensive list of architectures and their performance on the PTB dataset. As can be seen from the table, the models found by Neural Architecture Search outperform other state-of-the-art models on this dataset, and one of our best models achieves a gain of almost 3.6 perplexity. Not only is our cell better, the model that achieves 64 perplexity is also more than two times faster because the previous best network requires running a cell 10 times per time step (Zilly et al., 2016).
Table 2: Single model perplexity on the test set of the Penn Treebank language modeling task. Parameter numbers with ‡ are estimates with reference to Merity et al. (2016).
| Model | Parameters | Test Perplexity |
|--------------------------------------------------------------|--------------|-------------------|
| Mikolov & Zweig (2012) - KN-5 | 2M‡ | 141.2 |
| Mikolov & Zweig (2012) - KN5 + cache | 2M‡ | 125.7 |
| Mikolov & Zweig (2012) - RNN | 6M‡ | 124.7 |
| Mikolov & Zweig (2012) - RNN-LDA | 7M‡ | 113.7 |
| Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache | 9M‡ | 92.0 |
| Pascanu et al. (2013) - Deep RNN | 6M | 107.5 |
| Cheng et al. (2014) - Sum-Prod Net | 5M‡ | 100.0 |
| Zaremba et al. (2014) - LSTM (medium) | 20M | 82.7 |
| Zaremba et al. (2014) - LSTM (large) | 66M | 78.4 |
| Gal (2015) - Variational LSTM (medium, untied) | 20M | 79.7 |
| Gal (2015) - Variational LSTM (medium, untied, MC) | 20M | 78.6 |
| Gal (2015) - Variational LSTM (large, untied) | 66M | 75.2 |
| Gal (2015) - Variational LSTM (large, untied, MC) | 66M | 73.4 |
| Kim et al. (2015) - CharCNN | 19M | 78.9 |
| Press & Wolf (2016) - Variational LSTM, shared embeddings | 51M | 73.2 |
| Merity et al. (2016) - Zoneout + Variational LSTM (medium) | 20M | 80.6 |
| Merity et al. (2016) - Pointer Sentinel-LSTM (medium) | 21M | 70.9 |
| Inan et al. (2016) - VD-LSTM + REAL (large) | 51M | 68.5 |
| Zilly et al. (2016) - Variational RHN, shared embeddings | 24M | 66.0 |
| Neural Architecture Search with base 8 | 32M | 67.9 |
| Neural Architecture Search with base 8 and shared embeddings | 25M | 64.0 |
| Neural Architecture Search with base 8 and shared embeddings | 54M | 62.4 |
The newly discovered cell is visualized in Figure 8 in Appendix A. The visualization reveals that the new cell has many similarities to the LSTM cell in the first few steps, such as computing $W_1 * h_{t-1} + W_2 * x_t$ several times and sending the results to different components in the cell.
Transfer Learning Results: To understand whether the cell can generalize to a different task, we apply it to the character language modeling task on the same dataset. We use an experimental setup that is similar to Ha et al. (2016), but use variational dropout by Gal (2015). We also train our own LSTM with our setup to get a fair LSTM baseline. Models are trained for 80K steps and the best test set perplexity is taken according to the step where the validation set perplexity is best. The results of our method and state-of-the-art methods on the test set are reported in Table 3. The results on small settings with 5-6M parameters confirm that the new cell does indeed generalize, and is better than the LSTM cell.
Additionally, we carry out a larger experiment where the model has 16.28M parameters. This model has a weight decay rate of 1e-4, was trained for 600K steps (longer than the above models), and the test perplexity is taken at the step where the validation set perplexity is best. We use dropout rates of 0.2 and 0.5 as described in Gal (2015), but do not use embedding dropout. We use the ADAM optimizer with a learning rate of 0.001 and an input embedding size of 128. Our model had two layers with 800 hidden units. We used a minibatch size of 32 and a BPTT length of 100. With this setting, our model achieves 1.214 perplexity, which is the new state-of-the-art result on this task.
Finally, we also drop our cell into the GNMT framework (Wu et al., 2016), which was previously tuned for LSTM cells, and train a WMT14 English → German translation model. The GNMT
Table 3: Comparison between our cell and state-of-the-art methods on PTB character modeling. The new cell was found on word level language modeling.
| RNN Cell Type | Parameters | Test Bits Per Character |
|----------------------------------------------------------|--------------|---------------------------|
| Ha et al. (2016) - Layer Norm HyperLSTM | 4.92M | 1.25 |
| Ha et al. (2016) - Layer Norm HyperLSTM Large Embeddings | 5.06M | 1.233 |
| Ha et al. (2016) - 2-Layer Norm HyperLSTM | 14.41M | 1.219 |
| Two layer LSTM | 6.57M | 1.243 |
| Two Layer with New Cell | 6.57M | 1.228 |
| Two Layer with New Cell | 16.28M | 1.214 |
network has 8 layers in the encoder and 8 layers in the decoder. The first layer of the encoder has bidirectional connections. The attention module is a neural network with 1 hidden layer. When an LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a distributed setting with a parameter server and 12 workers. Additionally, each worker uses 8 GPUs and a minibatch of 128. We use Adam with a learning rate of 0.0002 for the first 60K training steps, and SGD with a learning rate of 0.5 until 400K steps. After that the learning rate is annealed by dividing it by 2 after every 100K steps until it reaches 0.1. Training is stopped at 800K steps. More details can be found in Wu et al. (2016).
In our experiment with the new cell, we make no change to the above settings except for dropping in the new cell and adjusting the hyperparameters so that the new model has roughly the same computational complexity as the base model. The result shows that our cell, with the same computational complexity, achieves an improvement of 0.5 test set BLEU over the default LSTM cell. Though this improvement is not huge, the fact that the new cell can be used without any tuning in the existing GNMT framework is encouraging. We expect further tuning could help our cell perform better.
Control Experiment 1 - Adding more functions in the search space: To test the robustness of Neural Architecture Search, we add max to the list of combination functions and sin to the list of activation functions and rerun our experiments. The results show that even with a bigger search space, the model can achieve somewhat comparable performance. The best architecture with max and sin is shown in Figure 8 in Appendix A.
Control Experiment 2 - Comparison against Random Search: Instead of policy gradient, one can use random search to find the best network. Although this baseline seems simple, it is often very hard to surpass (Bergstra & Bengio, 2012). We report the perplexity improvements of policy gradient over random search as training progresses in Figure 6. The results show that not only is the best model found with policy gradient better than the best model found with random search, but the average of the top models is also much better.
Figure 6: Improvement of Neural Architecture Search over random search over time. We plot the difference between the average of the top k models our controller finds vs. random search every 400 models run.
<details>
<summary>Image 6 Details</summary>

### Visual Description
## Line Graph: Perplexity Improvement Across Model Configurations
### Overview
The image depicts a line graph comparing the perplexity improvement of three model configurations (Top_1, Top_5, and Top_15 unique models) over 25,000 iterations. The y-axis measures "Perplexity Improvement" (0–40), while the x-axis tracks "Iteration" (0–25,000). Three data series are plotted: blue (Top_1), green (Top_5), and red (Top_15).
### Components/Axes
- **X-axis (Iteration)**: Labeled "Iteration," ranging from 0 to 25,000 in increments of 5,000.
- **Y-axis (Perplexity Improvement)**: Labeled "Perplexity Improvement," ranging from 0 to 40 in increments of 5.
- **Legend**: Positioned in the top-left corner, with three entries:
- Blue: Top_1_unique_models
- Green: Top_5_unique_models
- Red: Top_15_unique_models
- **Gridlines**: Dotted lines at every 5-unit interval on both axes.
### Detailed Analysis
1. **Top_15_unique_models (Red Line)**:
- **Trend**: Sharp initial increase from ~20 to ~35 between iterations 0–5,000, followed by a plateau near 30–35.
- **Key Points**:
- Iteration 0: ~20
- Iteration 5,000: ~25
- Iteration 10,000: ~27
- Iteration 15,000: ~30
- Iteration 20,000: ~32
- Iteration 25,000: ~35
2. **Top_5_unique_models (Green Line)**:
- **Trend**: Gradual, steady increase from ~5 to ~18 over 25,000 iterations.
- **Key Points**:
- Iteration 0: ~5
- Iteration 5,000: ~10
- Iteration 10,000: ~12
- Iteration 15,000: ~15
- Iteration 20,000: ~17
- Iteration 25,000: ~18
3. **Top_1_unique_models (Blue Line)**:
- **Trend**: Low and fluctuating, peaking at ~10 near iteration 20,000.
- **Key Points**:
- Iteration 0: ~0
- Iteration 5,000: ~3
- Iteration 10,000: ~5
- Iteration 15,000: ~7
- Iteration 20,000: ~10
- Iteration 25,000: ~10
### Key Observations
- **Top_15 models** achieve the highest perplexity improvement, with a rapid early gain followed by stabilization.
- **Top_5 models** show consistent but slower improvement, reaching ~18 by the end.
- **Top_1 models** exhibit minimal improvement (~10) and volatility, suggesting limited effectiveness.
- All lines plateau after ~15,000 iterations, indicating diminishing returns.
### Interpretation
Each curve measures how much better the controller's top-k models are than the top-k models found by random search, so all three curves being positive and growing means the controller's advantage increases as more models are sampled. The gap is most pronounced when averaging over a larger set of top models: random search occasionally finds one competitive model (small Top_1 gap) but rarely finds many good ones (large Top_15 gap).
</details>
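A minimal sketch of the comparison behind Figure 6, under the assumption that both methods log validation perplexities in sampling order: random search draws every architectural decision uniformly from the same option lists the controller uses, and every 400 sampled models we record how much lower the controller's top-k average perplexity is. Function names and option lists here are hypothetical.

```python
import random

def random_search_architecture(num_nodes=8):
    # Baseline: sample every controller decision uniformly at random from the
    # same (hypothetical) option lists the controller chooses from.
    combination_fns = ["add", "elem_mult"]
    activation_fns = ["identity", "tanh", "sigmoid", "relu"]
    return [(random.choice(combination_fns), random.choice(activation_fns))
            for _ in range(num_nodes)]

def top_k_average(perplexities, k):
    # Average validation perplexity of the k best (lowest) models seen so far.
    best = sorted(perplexities)[:k]
    return sum(best) / len(best)

def improvement_curve(controller_ppl, random_ppl, k, every=400):
    # controller_ppl / random_ppl: validation perplexities in sampling order.
    # Returns (models_sampled, random_topk_avg - controller_topk_avg) pairs,
    # i.e. how much lower the controller's top-k average perplexity is.
    points = []
    limit = min(len(controller_ppl), len(random_ppl))
    for n in range(every, limit + 1, every):
        gap = top_k_average(random_ppl[:n], k) - top_k_average(controller_ppl[:n], k)
        points.append((n, gap))
    return points
```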
## 5 CONCLUSION
In this paper we introduce Neural Architecture Search, an idea of using a recurrent neural network to compose neural network architectures. By using a recurrent network as the controller, our method is flexible enough to search variable-length architecture spaces. Our method has strong empirical performance on very challenging benchmarks and presents a new research direction for automatically finding good neural network architectures. The code for running the models found by the controller on CIFAR-10 and PTB will be released at https://github.com/tensorflow/models. Additionally, we have added the RNN cell found by our method to TensorFlow under the name NASCell, so others can easily use it.
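For readers who want to try the released cell, the snippet below shows how it might be dropped into a graph; the module path (tf.contrib.rnn.NASCell) and argument names reflect the TensorFlow 1.x API and may differ in later versions, so treat this as a sketch rather than a definitive recipe.

```python
import tensorflow as tf  # TensorFlow 1.x style API

# Hypothetical shapes: a batch of 32 sequences, 20 steps, 128-dim embeddings.
inputs = tf.placeholder(tf.float32, [32, 20, 128])

# In TF 1.x the discovered cell is exposed as NASCell; num_units is the size
# of the hidden state. The module path may differ in other versions.
cell = tf.contrib.rnn.NASCell(num_units=256)

# Unroll the cell over the sequence like any other RNNCell.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
```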
## ACKNOWLEDGMENTS
We thank Greg Corrado, Jeff Dean, David Ha, Lukasz Kaiser and the Google Brain team for their help with the project.
## REFERENCES
- Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL , 2016.
- Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474 , 2016.
- Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR , 2015.
- Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR , 2003.
- James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR , 2012.
- James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In NIPS , 2011.
- James Bergstra, Daniel Yamins, and David D Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. ICML , 2013.
- Alan W. Biermann. The inference of regular LISP programs from examples. IEEE transactions on Systems, Man, and Cybernetics , 1978.
- Wei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai. Language modeling with sum-product networks. In INTERSPEECH , 2014.
- Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR , 2005.
- Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In NIPS , 2012.
- Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence , 2008.
- Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287 , 2015.
- David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106 , 2016.
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR , 2016a.
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027 , 2016b.
- Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine , 2012.
- Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation , 1997.
- Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 , 2016a.
- Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 , 2016b.
- Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382 , 2016c.
- Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462 , 2016.
- Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML , 2015.
- Kevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecture for object recognition? In ICCV , 2009.
- Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In ICML , 2015.
- Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615 , 2015.
- Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR , 2015.
- Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS , 2012.
- Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science , 2015.
- Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648 , 2016.
- Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE , 1998.
- Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS , 2015.
- Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885 , 2016.
- Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In ICML , 2010.
- Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR , 2013.
- David G. Lowe. Object recognition from local scale-invariant features. In CVPR , 1999.
- Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning , pp. 58-65, 2016.
- Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843 , 2016.
- Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT , pp. 234-239, 2012.
- Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. In ICML , 2007.
- Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML , 2010.
- Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR , 2015.
- Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026 , 2013.
- Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859 , 2016.
- Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732 , 2015.
- Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR , 2015.
- Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS , 2016.
- Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In ACL , 2016.
- Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 , 2014.
- Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS , 2012.
- Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Ryan P. Adams, et al. Scalable Bayesian optimization using deep neural networks. In ICML , 2015.
- Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806 , 2014.
- Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387 , 2015.
- Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life , 2009.
- Phillip D. Summers. A methodology for LISP program construction from examples. Journal of the ACM , 1977.
- Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML , 2013.
- Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS , 2014.
- Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR , 2015.
- Sebastian Thrun and Lorien Pratt. Learning to learn . Springer Science & Business Media, 2012.
- Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS , 2015.
- Daan Wierstra, Faustino J. Gomez, and Jürgen Schmidhuber. Modeling systems with internal state using evolino. In GECCO , 2005.
- Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning , 1992.
- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 , 2016.
- Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC , 2016.
- Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 , 2014.
- Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474 , 2016.
## A APPENDIX
Figure 7: Convolutional architecture discovered by our method, when the search space does not have strides or pooling layers. FH is filter height, FW is filter width and N is number of filters. Note that the skip connections are not residual connections. If one layer has many input layers then all input layers are concatenated in the depth dimension.
<details>
<summary>Image 7 Details</summary>

### Visual Description
## Neural Network Architecture Diagram: Layer Configuration and Data Flow
### Overview
The image depicts a convolutional network architecture in which every layer is annotated with its filter height (FH), filter width (FW) and number of filters (N), culminating in a softmax output layer. The diagram illustrates the flow of data from an input image through a stack of convolutional layers, several of them joined by skip connections, to a classification output.
### Components/Axes
- **Input Layer**: Labeled "Image" at the bottom, serving as the data entry point.
- **Hidden Layers (all convolutional)**: Each box is annotated with its filter height (FH), filter width (FW) and number of filters (N). Configurations shown include:
- FH: 3 FW: 3 N: 36 and FH: 3 FW: 3 N: 48
- FH: 5 FW: 5 N: 36 and FH: 5 FW: 5 N: 48
- FH: 7 FW: 7 N: 36 and FH: 7 FW: 7 N: 48
- **Output Layer**: Softmax (pink box) at the top, indicating final classification probabilities.
### Detailed Analysis
1. **Layer Configuration**:
- All layers are convolutional, each described by filter height (FH), filter width (FW), and filter count (N).
- Parameters increase in complexity from bottom to top (e.g., FH: 3 → FH: 7).
- FH and FW describe each layer's filter shape; they do not denote different layer types.
2. **Data Flow**:
- Arrows indicate unidirectional flow from the input image upward through the network.
- Each layer's output feeds the next layer; skip connections also route earlier outputs forward, and when a layer has several inputs they are concatenated in the depth dimension (per the caption).
3. **Parameter Trends**:
- Filter sizes (FH/FW) increase from 3x3 to 7x7 as data progresses upward.
- Filter counts (N) vary between 36 and 48 across layers.
### Key Observations
- **Hierarchical Complexity**: The network grows in depth and filter size, typical of deep learning architectures for feature extraction.
- **Repetition**: Several filter configurations recur in the stack (e.g., FH: 3 FW: 3 N: 36 appears more than once).
- **Output Mechanism**: Softmax at the top implies multi-class classification, though class labels are not specified.
### Interpretation
This is an all-convolutional architecture, discovered in a search space without strides or pooling layers, with a softmax classifier on top. The mix of filter sizes suggests progressive abstraction of image features, and concatenating skip-connection inputs in the depth dimension lets later layers draw on several earlier feature maps at once. The softmax output indicates a classification task (CIFAR-10 in this paper). Activation functions and regularization details are not shown in the diagram.
</details>
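The Figure 7 caption states that when a layer has several input layers, the inputs are concatenated in the depth dimension. The short NumPy sketch below illustrates that combination rule for feature maps of equal spatial size; the shapes are illustrative only.

```python
import numpy as np

# Three incoming feature maps with the same spatial size (32 x 32) but
# different channel counts, as might arrive over skip connections.
a = np.zeros((32, 32, 36))
b = np.zeros((32, 32, 48))
c = np.zeros((32, 32, 36))

# Depth (channel) concatenation: spatial size is unchanged and the channel
# counts add up (36 + 48 + 36 = 120 here).
merged = np.concatenate([a, b, c], axis=-1)
print(merged.shape)  # (32, 32, 120)
```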
Figure 8: A comparison of the original LSTM cell vs. two good cells our model found. Top left: LSTM cell. Top right: Cell found by our model when the search space does not include max and sin . Bottom: Cell found by our model when the search space includes max and sin (the controller did not choose to use the sin function).
<details>
<summary>Image 8 Details</summary>

### Visual Description
## Diagram: Neural Network Computational Graph
### Overview
The image depicts three recurrent-cell computational graphs: the standard LSTM cell (top left), the cell found by the controller when the search space excludes max and sin (top right), and the cell found when the search space includes them (bottom). Nodes represent elementary operations and activations, while edges indicate data flow; the graphs share components but increase in complexity.
### Components/Axes
- **Nodes**:
- **Red**: `tanh` (hyperbolic tangent activation)
- **Blue**: `elem_mult` (element-wise multiplication)
- **Green**: `sigmoid` (sigmoid activation)
- **Orange**: `add` (element-wise addition)
- **Yellow**: `identity` (no-op operation)
- **Pink**: `relu` (rectified linear unit)
- **Labels**:
- Input: `x_t` (current time step input)
- Hidden states: `h_t` (current), `h_t-1` (previous)
- Cell states: `c_t` (current), `c_t-1` (previous)
- **Flow Direction**: Left-to-right (typical for sequential processing).
### Detailed Analysis
1. **Top-Left Diagram (LSTM cell)**:
- **Structure**:
- `x_t` → `tanh` (red) → `elem_mult` (blue) → `add` (orange) → `h_t` (output).
- `h_t-1` and `c_t-1` feed into `elem_mult` and `add` operations.
- **Key Connections**:
- `tanh` output is element-wise multiplied with `h_t-1` (blue node).
- Result added to `c_t-1` (orange `add` node) to produce `h_t`.
2. **Top-Right Diagram (cell found without max and sin)**:
- **Structure**:
- Expands on top diagram with additional `sigmoid` (green) and `identity` (yellow) nodes.
- `sigmoid` gates modulate `elem_mult` operations.
- **Key Connections**:
- `sigmoid` outputs control element-wise multiplications (e.g., `sigmoid` → `elem_mult` → `add`).
- `identity` nodes preserve values for skip connections.
3. **Bottom Diagram (cell found with max and sin in the search space)**:
- **Structure**:
- Most complex of the three, with `max` combination nodes and `relu` activations.
- Introduces additional parallel paths through the cell.
- **Key Connections**:
- `max` nodes combine two incoming branches by taking their element-wise maximum.
- `relu` applied to intermediate states for non-linearity.
### Key Observations
- **Identity nodes**: `identity` passes a value through unchanged, serving as the "no activation" choice in the search space.
- **Temporal Dependency**: `h_t-1` and `c_t-1` propagate information across time steps.
- **Color Consistency**: Node colors align with their labels (e.g., all `tanh` nodes are red).
- **max nodes**: the `max` operations in the bottom diagram come from adding max to the list of combination functions in Control Experiment 1, not from an attention mechanism.
### Interpretation
These graphs compare the standard LSTM cell with two cells discovered by the controller. In each, `tanh` and `sigmoid` nodes regulate information flow, while `elem_mult` and `add` nodes combine hidden and cell states. The `max` and `relu` nodes in the bottom graph reflect the enlarged search space of Control Experiment 1 rather than an attention mechanism or gradient regulation. The diagrams describe each cell's computation structurally; no numerical values are shown.
</details>
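For reference, the standard LSTM cell shown in the top-left panel of Figure 8 computes the following (the textbook formulation, not a detail read off the figure), with sigma the sigmoid gate activation and \odot element-wise multiplication:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```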