Residue number system
O.S. Mezentseva, A.O. Kochev
The paper proposes and examines modifications of the basic residue number system (RNS) operation algorithms: conversion from the positional number system (PNS) to RNS, as well as addition and multiplication, for the NeuroMatrix hardware platform.
The implementations of the algorithms are based on feedforward neural networks and take into account the architectural specifics of the neuroprocessor.
The network must perform the following operations:
1. Digit lowering (DL) for values obtained by performing RNS operations.
2. Bringing the values obtained by the DL operation into the range of the chosen RNS moduli set (BIR).
The DL implementation for NM is performed by several iterations of a weighted sum of bits; the number of iterations depends on the chosen RNS moduli set.
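As an illustration, the iterated weighted-sum reduction can be sketched in Python. The function names and the termination condition are assumptions for this sketch, not the authors' implementation:

```python
def digit_lower(x: int, m: int) -> int:
    # One DL iteration: sum the bits of x weighted by (2**i mod m).
    # The result is congruent to x modulo m but occupies fewer bits.
    return sum(((x >> i) & 1) * pow(2, i, m) for i in range(x.bit_length()))

def reduce_mod(x: int, m: int) -> int:
    # Iterate DL until the value stops shrinking, then finish with
    # conditional subtraction (the role played by the BIR step).
    while x >= 2 * m:
        nxt = digit_lower(x, m)
        if nxt == x:
            break
        x = nxt
    while x >= m:
        x -= m
    return x
```

Because each iteration replaces the value with a congruent but shorter one, a few passes suffice for realistic bit widths, e.g. `reduce_mod(12345, 7)` agrees with `12345 % 7`.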
The BIR implementation involves the following steps:
1. Calculate the difference between a vector representing the RNS moduli set and the vector obtained by DL;
2. Apply the activation function to the difference vector;
3. Perform a mask operation between the input vector and the difference vector, using the result vector of the second step as the mask.
A hardwired threshold function is used as the activation function.
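A minimal Python sketch of these three steps follows. The vector representation and function names are assumptions; the sketch also assumes each DL output is less than twice its modulus, so one conditional subtraction suffices:

```python
def threshold(x: int) -> int:
    # Hard threshold activation: 1 for non-negative input, else 0.
    return 1 if x >= 0 else 0

def bir(values, moduli):
    # Step 1: difference between the DL output vector and the moduli vector.
    diff = [v - m for v, m in zip(values, moduli)]
    # Step 2: threshold activation on the difference vector.
    mask = [threshold(d) for d in diff]
    # Step 3: mask operation, keep v where the mask is 0, v - m where it is 1.
    return [d if t else v for v, d, t in zip(values, diff, mask)]
```

For example, `bir([8, 3, 10], [7, 5, 11])` yields `[1, 3, 10]`: only the first channel exceeded its modulus and was corrected.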
Conversion from PNS to RNS is similar to DL but uses a different weight matrix and input vector, and it is performed in a single iteration regardless of the chosen RNS moduli set.
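A plausible reading of the single-iteration conversion is a matrix-vector product between a weight matrix of powers of two reduced by each modulus and the number's bit vector. The following sketch is an assumption about the structure, not the paper's exact network:

```python
def pns_to_rns(x: int, moduli, bits: int):
    # Assumed weight matrix: W[j][i] = 2**i mod m_j.
    W = [[pow(2, i, m) for i in range(bits)] for m in moduli]
    b = [(x >> i) & 1 for i in range(bits)]  # input bit vector of x
    # One matrix-vector product yields values congruent to x mod m_j;
    # a BIR-style correction then brings them into [0, m_j).
    sums = [sum(w * bi for w, bi in zip(row, b)) for row in W]
    return [s % m for s, m in zip(sums, moduli)]
```

For instance, `pns_to_rns(100, [3, 5, 7], 8)` gives `[1, 0, 2]`, the residues of 100 modulo 3, 5, and 7.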
The implementation of addition and multiplication reduces to performing modular addition and multiplication.
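In RNS these operations act componentwise, with no carries between residue channels, which is what makes them attractive for parallel hardware. A minimal sketch (helper names are hypothetical):

```python
def rns_add(a, b, moduli):
    # Componentwise modular addition: each residue channel is independent.
    return [(x + y) % m for x, y, m in zip(a, b, moduli)]

def rns_mul(a, b, moduli):
    # Componentwise modular multiplication, likewise carry-free.
    return [(x * y) % m for x, y, m in zip(a, b, moduli)]
```

With moduli [3, 5, 7], the numbers 10 and 4 are [1, 0, 3] and [1, 4, 4]; their RNS sum [2, 4, 0] and product [1, 0, 5] match the residues of 14 and 40.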
The research has shown that it is reasonable to use NM hardware components to implement modular neural networks.