Encoding and Decoding of Recursive Structures in Neural-Symbolic Systems
Tensor Product Variable Binding is one way to bridge the connectionist approach and the symbolic paradigm. It was originally proposed to build distributed representations of recursive structures that neural networks can take as input. Such structures are essential to both formal and natural languages, appearing in syntactic trees, grammars, and semantic interpretation. The human mind handles these problems smoothly at the neural level, in a way that is naturally scalable and robust. This raises the question of whether traditional symbolic algorithms can be translated to the sub-symbolic level, so that the performance and computational gains of neural networks can be reused for general tasks. However, several aspects of Tensor Product Variable Binding receive little attention in published research, especially the construction of a neural architecture that performs computations according to the mathematical model without preliminary training. This paper addresses those implementation aspects. A novel design for the decoding network is proposed that translates a tensor into the corresponding recursive structure with an arbitrary level of nesting. Several subtle issues in encoding such structures as a distributed representation, or tensor, are also addressed. Both the encoding and decoding neural networks are built with the Keras framework and analyzed from the perspective of applied value. The proposed design continues a series of papers dedicated to building a robust bridge between the two computational paradigms: connectionist and symbolic.
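To make the encoding and decoding idea concrete, the following is a minimal NumPy sketch of tensor product binding for a flat two-element structure. All names (`fillers`, `r0`, `r1`) and dimensions are illustrative assumptions, not the paper's architecture: fillers are symbol vectors, roles are orthonormal position vectors, encoding sums the outer products of fillers with their roles, and decoding contracts the tensor with a role vector to recover the bound filler.

```python
import numpy as np

# Illustrative filler (symbol) vectors; any linearly independent set works.
fillers = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}

# Orthonormal role vectors for positions 0 and 1; orthonormality makes
# exact unbinding possible via a simple inner product.
r0 = np.array([1.0, 0.0, 0.0])
r1 = np.array([0.0, 1.0, 0.0])

# Encoding: the structure (a, b) becomes a sum of outer products
# filler ⊗ role, yielding a single tensor T.
T = np.outer(fillers["a"], r0) + np.outer(fillers["b"], r1)

# Decoding: contracting T with a role vector recovers the filler
# bound to that role, because the roles are orthonormal.
decoded_first = T @ r0   # recovers fillers["a"]
decoded_second = T @ r1  # recovers fillers["b"]
```

Recursive structures are handled by applying the same binding scheme to nested tensors, which is what raises the arbitrary-nesting decoding problem the paper tackles.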