The feasibility of neural network inference using minifloat number formats has been studied. Intermediate computations were performed with a float16 accumulator. Performance was evaluated on the GoogleNet, ResNet-50, and MobileNet-v2 convolutional neural networks and on the DeepSpeechv01 recurrent network. The experiments showed that the performance of these networks with 11-bit minifloats is not inferior to that with the standard float32 type, and no additional training is required. These results indicate that minifloats can be used to design efficient special-purpose computers for neural network inference.
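A minimal sketch of the minifloat rounding the abstract refers to, assuming an IEEE-style layout with a sign bit, a 5-bit exponent, and a 5-bit mantissa (the abstract does not specify the 11-bit split, so this 1-5-5 partition is a hypothetical choice; subnormals are ignored for brevity):

```python
import math

def quantize_minifloat(x, exp_bits=5, man_bits=5):
    """Round a float to the nearest value representable in a minifloat
    with the given exponent and mantissa widths (plus a sign bit).
    Hypothetical 1-5-5 layout; subnormals are not modeled."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    bias = (1 << (exp_bits - 1)) - 1        # IEEE-style exponent bias (15 here)
    m, e = math.frexp(abs(x))               # abs(x) = m * 2**e with m in [0.5, 1)
    unbiased = e - 1                        # shift to a significand in [1, 2)
    # Saturate the exponent to the normal range of the format
    unbiased = max(min(unbiased, bias), 1 - bias)
    scale = 1 << man_bits
    # Round the significand (in [1, 2)) to man_bits fractional bits
    sig = round((m * 2) * scale) / scale
    return math.copysign(sig * 2.0 ** unbiased, x)

# Rounding a weight to the nearest 11-bit minifloat value:
print(quantize_minifloat(1.03))   # nearest representable value to 1.03
```

With 5 mantissa bits the relative rounding error stays below 2**-6, which is consistent with the reported observation that such low-precision formats preserve inference quality without retraining.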
Number of pages: 5
Journal: Optoelectronics, Instrumentation and Data Processing
Publication status: Published - 1 Jan 2020
- data types
- deep learning
- neural networks
- special-purpose computers