Applicability of Minifloats for Efficient Calculations in Neural Networks

A. Yu Kondrat’ev, A. I. Goncharenko

Research output: Contribution to journal › Article › peer-review

Abstract

The possibility of performing neural-network inference on minifloats has been studied. Calculations were performed using a float16 accumulator for intermediate computations. Performance was tested on the GoogleNet, ResNet-50, and MobileNet-v2 convolutional neural networks and on the DeepSpeechv01 recurrent network. The experiments showed that the performance of these neural networks with 11-bit minifloats is not inferior to that of networks using the standard float32 type, without additional training. The results indicate that minifloats can be used to design efficient computing devices for neural-network inference.
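The abstract does not specify the bit layout of the 11-bit minifloat or any implementation details. The Python sketch below is only a hypothetical illustration of the general scheme it describes: rounding values to a small float format and accumulating dot products in float16. The 1-bit sign / 5-bit exponent / 5-bit mantissa split and the helper names to_minifloat and dot_float16_acc are assumptions, not the authors' code.

```python
import numpy as np

def to_minifloat(x, exp_bits=5, man_bits=5):
    """Round float32 values to a simulated small-float ("minifloat") format.

    Hypothetical layout: 1 sign bit + exp_bits exponent + man_bits mantissa
    (11 bits total with the defaults); the paper's actual split is not given
    in the abstract.
    """
    x = np.asarray(x, dtype=np.float32)
    # frexp decomposes x = m * 2**e with 0.5 <= |m| < 1, so rounding m below
    # keeps man_bits + 1 significand bits (implicit leading bit included).
    m, e = np.frexp(x)
    scale = float(1 << (man_bits + 1))
    m = np.round(m * scale) / scale              # round mantissa to man_bits bits
    y = np.ldexp(m, e)
    # Clamp to the dynamic range implied by the assumed exponent field.
    bias = (1 << (exp_bits - 1)) - 1
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** bias
    y = np.clip(y, -max_val, max_val)
    # Simplification: flush values below the smallest normal number to zero.
    y = np.where(np.abs(y) < 2.0 ** (1 - bias), 0.0, y)
    return y.astype(np.float32)

def dot_float16_acc(a, b):
    """Dot product of minifloat-rounded vectors with a float16 accumulator,
    mirroring the intermediate-computation scheme named in the abstract."""
    # float16 (5 exponent / 10 mantissa bits) holds a 5/5 minifloat exactly.
    a_q = to_minifloat(a).astype(np.float16)
    b_q = to_minifloat(b).astype(np.float16)
    acc = np.float16(0.0)
    for ai, bi in zip(a_q, b_q):
        acc = np.float16(acc + ai * bi)          # each step rounded to float16
    return acc

# Example: compare against a full float32 dot product.
rng = np.random.default_rng(0)
a = rng.standard_normal(64).astype(np.float32) * 0.1
b = rng.standard_normal(64).astype(np.float32) * 0.1
print(dot_float16_acc(a, b), np.dot(a, b))
```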

Original language: English
Pages (from-to): 76-80
Number of pages: 5
Journal: Optoelectronics, Instrumentation and Data Processing
Volume: 56
Issue number: 1
DOI
Status: Published - 1 Jan 2020
