
TY - JOUR
AU - Zeng, Gengsheng L.
TI - Deterministic Versus Nondeterministic Optimization Algorithms for the Restricted Boltzmann Machine
PY - 2024/11/22
Y2 - 2024/11/23
JF - Journal of Computational and Cognitive Engineering
JA - JCCE
VL - 3
IS - 4
SE - Research Articles
DO - 10.47852/bonviewJCCE42022789
UR - https://doi.org/10.47852/bonviewJCCE42022789
SP - 404
EP - 411
AB - A restricted Boltzmann machine is a fully connected shallow neural network that can be used to solve many challenging optimization problems. Boltzmann machines are usually treated as probability models, and probability models normally use nondeterministic algorithms to solve for their parameters. The Hopfield network, also known as the Ising model, is a special case of a Boltzmann machine in which the hidden layer is identical to the visible layer: the weights and biases from the visible layer to the hidden layer are the same as those from the hidden layer to the visible layer. When the Hopfield network is treated as a probabilistic model, everything is stochastic (i.e., random) and nondeterministic, and an optimization problem in the Hopfield network is viewed as a search for samples with higher probabilities under a probability density function. This paper proposes treating the Hopfield network as a deterministic model, in which nothing is random and no stochastic distribution is used. An optimization problem associated with the Hopfield network then has a deterministic objective function (also known as a loss function or cost function), namely the energy function itself, whose purpose is to drive the Hopfield network toward a lower-energy state. This study suggests that deterministic optimization algorithms can be used for the associated optimization problems. The deterministic algorithm has the same mathematical form as the calculation of a perceptron: a dot product, a bias, and a nonlinear activation function. This paper uses examples of searching for stable states to demonstrate that the deterministic optimization method may have a faster convergence rate and smaller errors.
N1 - Received: 8 March 2024 | Revised: 27 April 2024 | Accepted: 14 May 2024
N1 - Conflicts of Interest: The author declares that he has no conflicts of interest to this work.
N1 - Data Availability Statement: Data sharing is not applicable to this article as no new data were created or analyzed in this study.
N1 - Author Contribution Statement: Gengsheng L. Zeng: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration, Funding acquisition.
ER -
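
The abstract describes a deterministic update with the mathematical form of a perceptron, a dot product plus a bias followed by a nonlinear activation, used to lower a standard Hopfield/Ising energy. The Python sketch below illustrates that idea under textbook Hopfield conventions only; the function names (hopfield_energy, deterministic_update) and the single stored-pattern example are illustrative assumptions and are not code or data from the paper.

    import numpy as np

    def hopfield_energy(s, W, b):
        # Standard Hopfield/Ising energy: E(s) = -0.5 * s^T W s - b^T s
        return -0.5 * s @ W @ s - b @ s

    def deterministic_update(s, W, b, max_sweeps=100):
        # Perceptron-style deterministic rule: s_i <- sign(w_i . s + b_i),
        # applied unit by unit until a full sweep changes nothing.
        s = s.copy()
        for _ in range(max_sweeps):
            changed = False
            for i in range(len(s)):
                new_si = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
                if new_si != s[i]:
                    s[i] = new_si
                    changed = True
            if not changed:
                break
        return s

    # Illustrative example: weights storing one pattern via the outer-product rule.
    pattern = np.array([1.0, -1.0, 1.0, -1.0])
    W = np.outer(pattern, pattern)
    np.fill_diagonal(W, 0.0)        # no self-connections
    b = np.zeros(4)
    noisy = np.array([1.0, 1.0, 1.0, -1.0])
    recovered = deterministic_update(noisy, W, b)
    print(hopfield_energy(noisy, W, b), hopfield_energy(recovered, W, b))

Running this sketch shows the energy decreasing as the noisy state is pulled to the stored pattern, matching the abstract's description of the energy function as a deterministic objective that guides the network toward a lower-energy stable state.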