Veit Elser, Manish Krishan Lal
The paper introduces a method for training neural networks on Boolean data using Boolean threshold functions, achieving sparse and interpretable models with exact or strong generalization on tasks where traditional methods struggle.
This research presents a novel approach to training neural networks on Boolean data, where each node in the network takes a value of +1 or -1. Instead of the usual loss minimization, the method guides training with a set of constraints. This yields networks that are sparser and more interpretable, with behavior often reducible to basic logical operations. The technique is shown to be effective on hard tasks such as circuit discovery and logic inference, where conventional neural network training methods often fail.
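With the +1/-1 encoding, basic logic gates can themselves be written as Boolean threshold functions, which is what makes the trained networks reducible to logical operations. A minimal illustrative sketch (not the paper's training method; the function names and thresholds here are chosen for illustration):

```python
import numpy as np

def threshold_node(x, w, theta):
    """Boolean threshold function: output +1 if the weighted sum
    of the +/-1 inputs meets the threshold theta, else -1."""
    return 1 if np.dot(w, x) >= theta else -1

# With +/-1 values, AND and OR are single threshold nodes:
# AND fires only when both inputs are +1 (sum = 2).
AND = lambda a, b: threshold_node(np.array([a, b]), np.array([1, 1]), 2)
# OR fires when at least one input is +1 (sum >= 0).
OR = lambda a, b: threshold_node(np.array([a, b]), np.array([1, 1]), 0)
```

A network of such nodes can therefore be read off directly as a logic circuit, which is the interpretability property the summary refers to.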