
Block change learning for knowledge distillation

Deep neural networks perform well but require high-performance hardware for deployment in real-world environments. Knowledge distillation is a simple method for improving the performance of a small network by using the knowledge of a large, complex network. The small and large networks are referred to as the student and teacher models, respectively. Previous knowledge distillation approaches perform well with relatively small teacher networks (20–30 layers) but poorly with large teacher networks (50 layers). Here, we propose an approach called block change learning that performs local and global knowledge distillation by changing blocks composed of layers. The method focuses on transferring knowledge from a large teacher model without losing information, as it considers intra-relationships between layers (local knowledge distillation) and inter-relationships between corresponding blocks (global knowledge distillation). The results demonstrate that this approach is superior to state-of-the-art methods on feature extraction datasets (Market1501 and DukeMTMC-reID) and object classification datasets (CIFAR-100 and Caltech256). Furthermore, we show that the proposed approach outperforms fine-tuning with pretrained models.
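For readers unfamiliar with the distillation losses the abstract refers to, the sketch below illustrates the general recipe of combining a soft-label loss on the logits with a block-wise feature-matching term between paired teacher and student blocks. This is a minimal PyTorch illustration, not the authors' block change learning method: the similarity-preserving feature term, the block pairing, and the loss weights alpha and beta are assumptions made purely for this example.

```python
import torch
import torch.nn.functional as F


def soft_label_kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Hinton-style distillation on softened class probabilities."""
    log_p_s = F.log_softmax(student_logits / temperature, dim=1)
    p_t = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * temperature ** 2


def block_feature_kd_loss(student_feats, teacher_feats):
    """Match batch-similarity matrices of corresponding teacher/student blocks.

    student_feats / teacher_feats: lists of (N, C, H, W) feature tensors,
    one per paired block. Comparing N x N similarity matrices sidesteps the
    channel/spatial mismatch between student and teacher blocks; this is a
    stand-in for whatever block-level loss the paper actually uses.
    """
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        gs = F.normalize(fs.flatten(1), dim=1)  # (N, C*H*W), unit-normalized
        gt = F.normalize(ft.flatten(1), dim=1)
        loss = loss + F.mse_loss(gs @ gs.t(), (gt @ gt.t()).detach())
    return loss / len(student_feats)


def total_loss(student_logits, teacher_logits, student_feats, teacher_feats,
               targets, alpha=0.5, beta=1.0):
    """Cross-entropy + logit distillation + block-wise feature distillation."""
    ce = F.cross_entropy(student_logits, targets)
    kd = soft_label_kd_loss(student_logits, teacher_logits)
    feat = block_feature_kd_loss(student_feats, teacher_feats)
    return ce + alpha * kd + beta * feat
```

In this sketch the teacher's signals are detached so gradients flow only into the student; in practice one would also decide how to group layers into blocks and how to weight the local and global terms, which is the design space the paper explores.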

