
On the use of associative memory in Hopfield networks designed to solve propositional satisfiability problems (2307.16807v3)

Published 31 Jul 2023 in nlin.AO, cs.AI, and q-bio.NC

Abstract: Hopfield networks are an attractive choice for solving many types of computational problems because they provide a biologically plausible mechanism. The Self-Optimization (SO) model extends the Hopfield network with a biologically founded Hebbian learning rule, combined with repeated resets of the network to arbitrary initial states, so that the network optimizes its own behavior towards some desirable goal state encoded in its weights. To better understand this process, we first demonstrate that the SO model can solve concrete combinatorial problems in SAT form, using two examples: the Liars problem and the map coloring problem. We then show how, under some conditions, critical information can be lost forever, with the learned network producing seemingly optimal solutions that are in fact inappropriate for the problem it was tasked to solve. What appears to be an undesirable side effect of the SO model can thus provide insight into how it solves intractable problems.
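The reset-and-reinforce loop the abstract describes can be sketched in a few lines: a Hopfield network repeatedly relaxes from a random initial state to an attractor, then reinforces that attractor with a Hebbian update, gradually reshaping its own energy landscape. This is a minimal illustrative sketch, not the paper's implementation; the network size, learning rate, episode counts, and the random "problem" weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                # number of neurons (assumption)
ALPHA = 0.001         # Hebbian learning rate (assumption)
RESETS = 200          # number of reset-and-relax episodes (assumption)
STEPS = 10 * N        # asynchronous updates per relaxation (assumption)

# Fixed symmetric constraint weights encoding the problem (e.g. a SAT
# instance mapped to a Hopfield energy function); random here for illustration.
W_problem = rng.standard_normal((N, N))
W_problem = (W_problem + W_problem.T) / 2
np.fill_diagonal(W_problem, 0.0)

W_learned = np.zeros((N, N))  # Hebbian modifications accumulated over resets


def relax(state, weights, steps):
    """Asynchronous Hopfield dynamics: update one random unit at a time."""
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if weights[i] @ state >= 0 else -1
    return state


def energy(state, weights):
    """Standard Hopfield energy of a +/-1 state vector."""
    return -0.5 * state @ weights @ state


for _ in range(RESETS):
    s = rng.choice([-1, 1], size=N)           # reset to an arbitrary state
    s = relax(s, W_problem + W_learned, STEPS)  # settle to an attractor
    W_learned += ALPHA * np.outer(s, s)       # Hebbian reinforcement of it
    np.fill_diagonal(W_learned, 0.0)

# After training, relaxing on the combined weights tends to reach
# low-energy states of the original problem more reliably.
s_final = relax(rng.choice([-1, 1], size=N), W_problem + W_learned, STEPS)
print(energy(s_final, W_problem))
```

The paper's "loss of critical information" finding corresponds, in this sketch, to `W_learned` eventually dominating `W_problem`, so the network converges on learned attractors even when they violate the original constraints.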

