Computer Science > Computer Vision and Pattern Recognition
Title: GhostNetV3: Exploring the Training Strategies for Compact Models
(Submitted on 17 Apr 2024 (v1), last revised 22 Apr 2024 (this version, v2))
Abstract: Compact neural networks are specially designed for applications on edge devices, offering faster inference at the cost of modest performance. However, the training strategies used for compact models are currently borrowed from those of conventional models, which ignores the difference in model capacity and may thus impede the performance of compact models. In this paper, by systematically investigating the impact of different training ingredients, we introduce a strong training strategy for compact models. We find that appropriate designs of re-parameterization and knowledge distillation are crucial for training high-performance compact models, while some data augmentations commonly used for training conventional models, such as Mixup and CutMix, lead to worse performance. Our experiments on the ImageNet-1K dataset demonstrate that our specialized training strategy for compact models is applicable to various architectures, including GhostNetV2, MobileNetV2, and ShuffleNetV2. Specifically, equipped with our strategy, GhostNetV3 1.3$\times$ achieves a top-1 accuracy of 79.1% with only 269M FLOPs and a latency of 14.46ms on mobile devices, surpassing its conventionally trained counterpart by a large margin. Moreover, our observations also extend to object detection scenarios. PyTorch code and checkpoints can be found at this https URL
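The abstract identifies knowledge distillation as one of the key training ingredients for compact models. As a hedged illustration of that general technique (not the paper's specific recipe), a minimal temperature-scaled distillation loss can be sketched in plain Python; the temperature value and the $T^2$ scaling follow the common Hinton-style convention and are illustrative assumptions here:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the temperature-softened teacher and student
    distributions, scaled by T^2 (standard convention; the temperature
    value is an illustrative assumption, not the paper's setting)."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

# A student matching the teacher exactly incurs (near) zero loss.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

In practice this term is combined with the ordinary cross-entropy loss on ground-truth labels; the paper's contribution is in how such ingredients are tuned for low-capacity models, which this sketch does not capture.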
Submission history
From: Zhiwei Hao
[v1] Wed, 17 Apr 2024 09:33:31 GMT (351kb,D)
[v2] Mon, 22 Apr 2024 02:46:44 GMT (344kb,D)