Automatic neural architecture search (NAS) is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has produced models that beat manually designed and tuned ones. Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution, and new innovations keep emerging.
However, it takes great effort to implement NAS algorithms, and it is hard to reuse the code base of existing algorithms in new ones. To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side-by-side), an easy-to-use and flexible programming interface is crucial.
With this motivation, our ambition is to provide a unified architecture in NNI to accelerate innovation on NAS and to apply state-of-the-art algorithms to real-world problems faster.
With the unified interface, there are two different modes for architecture search. One is so-called one-shot NAS, where a super-net is built from the search space and one-shot training is used to generate a good-performing child model. The other is the traditional search approach, where each child model in the search space runs as an independent trial; its performance result is sent to the tuner, and the tuner generates a new child model.
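For a rough feel of the second (classic) mode, the sketch below outlines what a single trial could look like: the tuner's architecture choice is applied to the model before ordinary training starts. This is only a sketch; `Net`, `train_epoch`, and `evaluate` are hypothetical placeholders, and the `get_and_apply_next_architecture` helper follows the NNI 1.x classic NAS examples, so check the programming interface section below for the exact API of your NNI version.

```python
import nni
# Helper from the NNI 1.x classic NAS examples; the exact import path may
# differ across NNI releases.
from nni.nas.pytorch.classic_nas import get_and_apply_next_architecture

from model import Net                      # hypothetical: model defined with NNI mutables
from utils import train_epoch, evaluate    # hypothetical training helpers

if __name__ == "__main__":
    model = Net()
    # Ask the tuner for the next child architecture and fix the mutables accordingly;
    # afterwards the model behaves like a plain PyTorch module.
    get_and_apply_next_architecture(model)

    for epoch in range(10):
        train_epoch(model, epoch)

    # Report the child model's performance back to the tuner.
    nni.report_final_result(evaluate(model))
```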
- Supported One-shot NAS Algorithms
- Classic Distributed NAS with NNI experiment
- NNI NAS Programming Interface
NNI currently supports the NAS algorithms listed below and is adding more. Users can reproduce an algorithm or apply it to their own dataset. We also encourage users to implement other algorithms with the NNI API, to benefit more people.
Name | Brief Introduction of Algorithm |
---|---|
ENAS | Efficient Neural Architecture Search via Parameter Sharing. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. It uses parameter sharing between child models to achieve fast speed and excellent performance. |
DARTS | DARTS: Differentiable Architecture Search introduces a novel algorithm for differentiable network architecture search based on bilevel optimization. |
P-DARTS | Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation is based on DARTS. It introduces an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. |
SPOS | Single Path One-Shot Neural Architecture Search with Uniform Sampling constructs a simplified supernet trained with a uniform path sampling method, and applies an evolutionary algorithm to efficiently search for the best-performing architectures. |
CDARTS | Cyclic Differentiable Architecture Search builds a cyclic feedback mechanism between the search and evaluation networks. It introduces a cyclic differentiable architecture search framework which integrates the two networks into a unified architecture. |
One-shot algorithms run standalone without nnictl. Only the PyTorch version has been implemented; TensorFlow 2.x will be supported in a future release.
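As a minimal sketch of running a one-shot algorithm standalone (no nnictl involved), the snippet below follows the spirit of the NNI 1.x DARTS example. `ModelSpace` is a hypothetical search space module built with NNI mutables, and the exact `DartsTrainer` arguments may vary between releases, so treat this as an outline rather than a drop-in script.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

from nni.nas.pytorch.darts import DartsTrainer  # import path follows NNI 1.x releases

from model import ModelSpace  # hypothetical: a PyTorch module built with NNI mutables


def accuracy(output, target):
    # Top-1 accuracy, used as the metric that guides the search.
    correct = (output.argmax(dim=1) == target).float().mean().item()
    return {"acc1": correct}


if __name__ == "__main__":
    transform = transforms.ToTensor()
    dataset_train = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
    dataset_valid = datasets.CIFAR10("./data", train=False, download=True, transform=transform)

    model = ModelSpace()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9)

    trainer = DartsTrainer(model,
                           loss=criterion,
                           metrics=accuracy,
                           optimizer=optimizer,
                           num_epochs=50,
                           dataset_train=dataset_train,
                           dataset_valid=dataset_valid,
                           batch_size=64,
                           log_frequency=10)
    trainer.train()                            # one-shot training of the super-net
    trainer.export("final_architecture.json")  # dump the chosen child architecture
```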
Here are some common dependencies needed to run the examples. PyTorch needs to be 1.2 or above to use `BoolTensor`.
- NNI 1.2+
- tensorboard
- PyTorch 1.2+
- git
NOTE: we are trying to support various NAS algorithms with a unified programming interface, and it is currently in a very experimental stage. This means the current programming interface may change in the future.
The programming interface for designing and searching a model is often needed in two scenarios.
- When designing a neural network, there may be multiple operation choices for a layer, sub-model, or connection, and it is undetermined which one or which combination performs best. So an easy way to express the candidate layers or sub-models is needed.
- When applying NAS to a neural network, a unified way to express the search space of architectures is needed, so that trial code does not have to be updated for different search algorithms.
The NNI proposed API is here. And here is an example of a NAS implementation based on the NNI proposed interface.
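To give a flavor of the interface (an illustrative sketch, not the linked example), the block below declares two candidate convolutions for one layer and an optional skip connection using the mutable primitives from the NNI 1.x examples; refer to the linked API documentation for the authoritative definitions.

```python
import torch.nn as nn
from nni.nas.pytorch import mutables  # import path follows NNI 1.x releases


class Block(nn.Module):
    """A small building block whose layer and connection are left to the search algorithm."""

    def __init__(self):
        super().__init__()
        # Candidate operations for this layer; the search algorithm picks one.
        self.conv = mutables.LayerChoice([
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.Conv2d(16, 16, kernel_size=5, padding=2),
        ])
        # Candidate connection: optionally take the block input as a skip path.
        self.skip = mutables.InputChoice(n_candidates=1, n_chosen=1)

    def forward(self, x):
        out = self.conv(x)
        skip = self.skip([x])   # may be None if the skip path is not chosen
        if skip is not None:
            out = out + skip
        return out
```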
- To report a bug for this feature in GitHub;
- To file a feature or improvement request for this feature in GitHub;
- To know more about Feature Engineering with NNI;
- To know more about Model Compression with NNI;
- To know more about Hyperparameter Tuning with NNI;