Abstract
The possibility of running neural network inference on minifloats is studied. Intermediate computations were carried out with a float16 accumulator. Performance was tested on the GoogLeNet, ResNet-50, and MobileNet-v2 convolutional neural networks and on the DeepSpeech v0.1 recurrent network. The experiments showed that the performance of these neural networks with 11-bit minifloats is not inferior to that of networks using the standard float32 type, without additional training. The results indicate that minifloats can be used to design efficient computers for neural network inference.
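An 11-bit minifloat is an IEEE-754-style floating-point number with fewer exponent and mantissa bits. As a minimal sketch of the quantization involved, the Python snippet below rounds a value to a hypothetical 1-5-5 layout (1 sign, 5 exponent, 5 mantissa bits); the bit split, rounding mode, and saturation behavior are illustrative assumptions, not the exact format studied in the paper.

```python
import math

# Hypothetical 11-bit minifloat layout (an assumption for illustration;
# the abstract does not specify the bit split): 1 sign bit,
# 5 exponent bits, 5 mantissa bits, IEEE-754-style bias.
EXP_BITS, MAN_BITS = 5, 5
BIAS = 2 ** (EXP_BITS - 1) - 1            # 15
E_MIN = 1 - BIAS                          # smallest normal exponent: -14
E_MAX = (2 ** EXP_BITS - 2) - BIAS        # largest normal exponent: +15
MAX_VAL = (2.0 - 2.0 ** -MAN_BITS) * 2.0 ** E_MAX  # largest finite value

def to_minifloat(x: float) -> float:
    """Round x to the nearest value representable in the assumed
    1-5-5 minifloat format (round-half-to-even, saturating)."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    # Exponent of x, clamped so subnormals share the minimal exponent.
    e = max(math.floor(math.log2(abs(x))), E_MIN)
    # Keep MAN_BITS fractional bits of the significand.
    scale = 2.0 ** (MAN_BITS - e)
    q = round(x * scale) / scale          # round() is half-to-even in Python
    # Saturate to the largest finite value instead of overflowing.
    return math.copysign(min(abs(q), MAX_VAL), x)

# A weight such as 0.1 survives with roughly 0.4% error at 11 bits:
print(to_minifloat(0.1))                  # 0.099609375
```

In an inference pipeline of the kind the abstract describes, weights and activations would be stored in such a reduced format while intermediate dot products accumulate in float16.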
| Original language | English |
| --- | --- |
| Pages (from-to) | 76-80 |
| Number of pages | 5 |
| Journal | Optoelectronics, Instrumentation and Data Processing |
| Volume | 56 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2020 |
Keywords
- data types
- deep learning
- minifloat
- neural networks
- special-purpose computers