Neural architecture search (NAS) faces a challenge in balancing the exploration of expressive,
broad search spaces that enable architectural innovation with the need for efficient evaluation
of architectures to effectively search such spaces. We investigate surrogate model training for
improving search in highly expressive NAS search spaces based on context-free grammars.
We show that i) surrogate models trained either using zero-cost-proxy metrics and neural
graph features (GRAF) or by fine-tuning an off-the-shelf LM have high predictive power
for the performance of architectures both within and across datasets; ii) these surrogates can
be used to filter out bad architectures when searching on novel datasets, thereby significantly
speeding up search and achieving better final performance; and iii) the surrogates can be
further used directly as the search objective for huge speed-ups.
TL;DR: We accelerate NAS in expressive search spaces by using transferrable performance prediction methods.
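To illustrate the filtering idea from point ii), here is a minimal sketch (not the authors' code) of how a cheap surrogate predictor could prune candidates before expensive training-based evaluation; the sampler, surrogate, and evaluation callables are hypothetical stand-ins for a grammar-based sampler, a GRAF/LM-based predictor, and full training, respectively.

```python
# Hedged sketch: surrogate-based filtering of candidate architectures.
# All three callables are assumptions, not part of the released code.
from typing import Any, Callable


def filtered_search(
    sample_architecture: Callable[[], Any],   # samples from the expressive search space
    surrogate_score: Callable[[Any], float],  # cheap predicted performance (higher is better)
    evaluate_fully: Callable[[Any], float],   # expensive: trains and validates the network
    n_candidates: int = 1000,
    keep_fraction: float = 0.05,
) -> Any:
    """Fully evaluate only the top surrogate-ranked candidates."""
    candidates = [sample_architecture() for _ in range(n_candidates)]
    # Rank every candidate by the cheap surrogate prediction.
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    survivors = ranked[: max(1, int(keep_fraction * n_candidates))]
    # Only survivors are trained, which is where the speed-up comes from
    # when the surrogate reliably separates bad architectures from good ones.
    scores = {id(arch): evaluate_fully(arch) for arch in survivors}
    return max(survivors, key=lambda arch: scores[id(arch)])
```

Point iii) corresponds to the extreme setting keep_fraction -> 0, where the surrogate score itself serves as the search objective and full evaluation is deferred to the very end.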
@misc{qin2025transferrablesurrogatesexpressiveneural,
title={Transferrable Surrogates in Expressive Neural Architecture Search Spaces},
author={Shiwen Qin and Gabriela Kadlecová and Martin Pilát and Shay B. Cohen and Roman Neruda and Elliot J. Crowley and Jovita Lukasik and Linus Ericsson},
year={2025},
eprint={2504.12971},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.12971},
}