BINAS: Bilinear Interpretable Neural Architecture Search

Niv Nayman, Yonathan Aflalo, Asaf Noy, Lihi Zelnik-Manor

Research output: Contribution to journal › Conference article › peer-review

Abstract

Making neural networks practical often requires adhering to resource constraints such as latency, energy, and memory. To address this, we introduce a Bilinear Interpretable approach for constrained Neural Architecture Search (BINAS) that is based on an accurate and simple bilinear formulation of both an accuracy estimator and the expected resource requirement, combined with a scalable search method with theoretical guarantees. A major advantage of BINAS is its interpretability: it yields insights about the contribution of different design choices. For example, we find that in the examined search space, adding depth and width is more effective at deeper stages of the network and at the beginning of each resolution stage. BINAS differs from previous methods, which typically rely on complicated accuracy predictors that are hard to interpret and sensitive to many hyper-parameters, and thus prone to compromised final accuracy. Our experiments show that BINAS generates architectures comparable to or better than the state of the art, while reducing the marginal search cost and strictly satisfying the resource constraints.
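To illustrate the idea of a bilinear formulation, the sketch below sets up a toy search space in which each block's accuracy and latency contribution depends jointly on its operation and width choice; indexing the table `ACC[b][o][w]` is equivalent to the bilinear form αᵀAβ for one-hot operation/width indicators. The tables, shapes, and the brute-force solver are all illustrative assumptions, not the paper's actual estimator or search algorithm.

```python
# Hypothetical sketch of a bilinear accuracy/latency estimator with a
# constrained search. All numbers and names are made up for illustration.
import itertools

# Toy search space: 2 blocks, each choosing one of 3 operations and one of 2 widths.
NUM_BLOCKS, NUM_OPS, NUM_WIDTHS = 2, 3, 2

# Assumed accuracy contribution of picking op o and width w in block b.
# ACC[b][o][w] equals the bilinear form alpha^T A_b beta for one-hot alpha, beta.
ACC = [[[0.10, 0.14], [0.12, 0.18], [0.11, 0.16]],
       [[0.08, 0.12], [0.09, 0.15], [0.13, 0.17]]]
# Assumed latency contributions (ms) for the same choices.
LAT = [[[1.0, 2.0], [1.5, 3.0], [1.2, 2.5]],
       [[0.8, 1.6], [1.1, 2.2], [1.4, 2.8]]]

def score(arch):
    """Estimated accuracy and latency of an architecture, each a sum of
    per-block bilinear terms over the (op, width) choices."""
    acc = sum(ACC[b][o][w] for b, (o, w) in enumerate(arch))
    lat = sum(LAT[b][o][w] for b, (o, w) in enumerate(arch))
    return acc, lat

def search(latency_budget):
    """Return the highest-accuracy architecture that strictly satisfies the
    latency constraint (a brute-force stand-in for a scalable solver)."""
    best, best_acc = None, -1.0
    choices = itertools.product(range(NUM_OPS), range(NUM_WIDTHS))
    for arch in itertools.product(list(choices), repeat=NUM_BLOCKS):
        acc, lat = score(arch)
        if lat <= latency_budget and acc > best_acc:
            best, best_acc = arch, acc
    return best, best_acc
```

Because both objective and constraint decompose into per-block bilinear terms, each entry of `ACC` can be read directly as the marginal contribution of one design choice, which is the kind of interpretability the abstract refers to.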

Original language: English
Pages (from-to): 786-801
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 189
State: Published - 2022
Event: 14th Asian Conference on Machine Learning, ACML 2022 - Hyderabad, India
Duration: 12 Dec 2022 - 14 Dec 2022

Keywords

  • Computer Vision
  • Deep Learning
  • Neural Architecture Search
  • Optimization

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
