
The Parameterized Complexity of Learning Monadic Second-Order Logic (2309.10489v2)

Published 19 Sep 2023 in cs.LO

Abstract: Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning an MSO-definable concept from a training sequence of labeled examples is fixed-parameter tractable on graphs of bounded clique-width, and that it is hard for the parameterized complexity class para-NP on general graphs. It turns out that an important distinction to be made is between 1-dimensional and higher-dimensional concepts, where the instances of a k-dimensional concept are k-tuples of vertices of a graph. The tractability results we obtain for the 1-dimensional case are stronger and more general, and they are much easier to prove. In particular, our learning algorithm in the higher-dimensional case is only fixed-parameter tractable in the size of the graph, but not in the size of the training sequence, and we give a hardness result showing that this is optimal. By comparison, in the 1-dimensional case, we obtain an algorithm that is fixed-parameter tractable in both.
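
To fix notation, the learning problem in the abstract can be read as a consistency problem. The following formalization is a minimal sketch of the Grohe–Turán-style setup; the quantifier-rank bound $q$ and the parameter tuple $\bar{w}$ are illustrative assumptions, and the paper's exact parameterization may differ.

Given: a graph $G$, a dimension $k \geq 1$, and a training sequence $T = ((\bar{v}_1, \lambda_1), \ldots, (\bar{v}_m, \lambda_m))$ with $\bar{v}_i \in V(G)^k$ and labels $\lambda_i \in \{+, -\}$.
Find: an MSO formula $\varphi(\bar{x}; \bar{y})$ of quantifier rank at most $q$, together with a parameter tuple $\bar{w} \in V(G)^{\ell}$, such that
$$G \models \varphi(\bar{v}_i; \bar{w}) \iff \lambda_i = {+} \quad \text{for all } i \in \{1, \ldots, m\}.$$

On this reading, the abstract's results state that for $k = 1$ the problem is fixed-parameter tractable in both $|V(G)|$ and $m$ on graphs of bounded clique-width, whereas for $k \geq 2$ the learning algorithm is fixed-parameter tractable in $|V(G)|$ only, and the accompanying hardness result shows that this gap is unavoidable.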

References (54)
  1. Learning and verifying quantified boolean queries by example. In Richard Hull and Wenfei Fan, editors, Proceedings of the 32nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS 2013, New York, NY, USA - June 22 - 27, 2013, pages 49–60. ACM, 2013. doi:10.1145/2463664.2465220.
  2. Complexity theoretic hardness results for query learning. Comput. Complex., 7(1):19–53, 1998. doi:10.1007/PL00001593.
  3. Characterizing schema mappings via data examples. ACM Trans. Database Syst., 36(4):23:1–23:48, 2011. doi:10.1145/2043652.2043656.
  4. Parameterized learnability of k-juntas and related problems. In Marcus Hutter, Rocco A. Servedio, and Eiji Takimoto, editors, Algorithmic Learning Theory, 18th International Conference, ALT 2007, Sendai, Japan, October 1-4, 2007, Proceedings, volume 4754 of Lecture Notes in Computer Science, pages 120–134. Springer, 2007. doi:10.1007/978-3-540-75225-7_13.
  5. Regularizing conjunctive features for classification. J. Comput. Syst. Sci., 119:97–124, 2021. doi:10.1016/j.jcss.2021.01.003.
  6. The complexity of reverse engineering problems for conjunctive queries. In Michael Benedikt and Giorgio Orsi, editors, 20th International Conference on Database Theory, ICDT 2017, March 21-24, 2017, Venice, Italy, volume 68 of LIPIcs, pages 7:1–7:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ICDT.2017.7.
  7. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, October 1989. doi:10.1145/76359.76371.
  8. Learning path queries on graph databases. In Gustavo Alonso, Floris Geerts, Lucian Popa, Pablo Barceló, Jens Teubner, Martín Ugarte, Jan Van den Bussche, and Jan Paredaens, editors, Proceedings of the 18th International Conference on Extending Database Technology, EDBT 2015, Brussels, Belgium, March 23-27, 2015, pages 109–120. OpenProceedings.org, 2015. doi:10.5441/002/edbt.2015.11.
  9. Learning join queries from user examples. ACM Trans. Database Syst., 40(4):24:1–24:38, 2016. doi:10.1145/2818637.
  10. Interactive mapping specification with exemplar tuples. ACM Trans. Database Syst., 44(3):10:1–10:44, 2019. doi:10.1145/3321485.
  11. Twin-width IV: ordered graphs and matrices. In Stefano Leonardi and Anupam Gupta, editors, STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, Rome, Italy, June 20 - 24, 2022, pages 924–937. ACM, 2022. doi:10.1145/3519935.3520037.
  12. A parameterized theory of PAC learning. In Brian Williams, Yiling Chen, and Jennifer Neville, editors, Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, pages 6834–6841. AAAI Press, 2023. URL: https://ojs.aaai.org/index.php/AAAI/article/view/25837.
  13. ICE-based refinement type discovery for higher-order functional programs. J. Autom. Reason., 64(7):1393–1418, 2020. doi:10.1007/s10817-020-09571-y.
  14. Polynomial learnability and Inductive Logic Programming: Methods and results. New Generation Computing, 13(3-4):369–409, December 1995. doi:10.1007/BF03037231.
  15. On the relationship between clique-width and treewidth. SIAM J. Comput., 34(4):825–847, 2005. doi:10.1137/S0097539701385351.
  16. Linear Time Solvable Optimization Problems on Graphs of Bounded Clique Width. Theory of Computing Systems, 33:125–150, 1999.
  17. Upper bounds to the clique width of graphs. Discret. Appl. Math., 101(1-3):77–114, 2000. doi:10.1016/S0166-218X(99)00184-5.
  18. Mathematical Logic. Springer, 2021. doi:10.1007/978-3-030-73839-6.
  19. Horn-ICE learning for synthesizing invariants and contracts. Proc. ACM Program. Lang., 2(OOPSLA):131:1–131:25, 2018. doi:10.1145/3276501.
  20. Jörg Flum and Martin Grohe. Parameterized complexity theory. Texts in Theoretical Computer Science. Springer, Berlin, 2006.
  21. Martin Fürer. Multi-clique-width. In Christos H. Papadimitriou, editor, 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, volume 67 of LIPIcs, pages 14:1–14:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ITCS.2017.14.
  22. ICE: A robust framework for learning invariants. In Armin Biere and Roderick Bloem, editors, Computer Aided Verification - 26th International Conference, CAV 2014, Held as Part of the Vienna Summer of Logic, VSL 2014, Vienna, Austria, July 18-22, 2014. Proceedings, volume 8559 of Lecture Notes in Computer Science, pages 69–87. Springer, 2014. doi:10.1007/978-3-319-08867-9_5.
  23. Schema mapping discovery from data instances. J. ACM, 57(2):6:1–6:37, 2010. doi:10.1145/1667053.1667055.
  24. Learning Definable Hypotheses on Trees. In Pablo Barceló and Marco Calautti, editors, 22nd International Conference on Database Theory, ICDT 2019, March 26-28, 2019, Lisbon, Portugal, volume 127 of LIPIcs, pages 24:1–24:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ICDT.2019.24.
  25. Learning MSO-definable hypotheses on strings. In Proceedings of the 28th International Conference on Algorithmic Learning Theory, pages 434–451. PMLR, October 2017. ISSN: 2640-3498. URL: https://proceedings.mlr.press/v76/grohe17a.html.
  26. Learning first-order definable concepts over structures of small degree. In 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 1–12, Reykjavik, Iceland, June 2017. IEEE. doi:10.1109/LICS.2017.8005080.
  27. Learnability and Definability in Trees and Similar Structures. Theory of Computing Systems, 37(1):193–220, January 2004. doi:10.1007/s00224-003-1112-8.
  28. David Haussler. Learning conjunctive concepts in structural domains. Mach. Learn., 4:7–40, 1989. doi:10.1007/BF00114802.
  29. David Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput., 100(1):78–150, 1992. doi:10.1016/0890-5401(92)90010-D.
  30. Kouichi Hirata. On the hardness of learning acyclic conjunctive queries. In Hiroki Arimura, Sanjay Jain, and Arun Sharma, editors, Algorithmic Learning Theory, 11th International Conference, ALT 2000, Sydney, Australia, December 11-13, 2000, Proceedings, volume 1968 of Lecture Notes in Computer Science, pages 238–251. Springer, 2000. doi:10.1007/3-540-40992-0_18.
  31. An Introduction to Computational Learning Theory. MIT Press, 1994. URL: https://mitpress.mit.edu/books/introduction-computational-learning-theory.
  32. A relational framework for classifier engineering. ACM Trans. Database Syst., 43(3):11:1–11:36, 2018. doi:10.1145/3268931.
  33. Learning mixtures of linear regressions with nearly optimal complexity. In Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet, editors, Conference On Learning Theory, COLT 2018, Stockholm, Sweden, 6-9 July 2018, volume 75 of Proceedings of Machine Learning Research, pages 1125–1144. PMLR, 2018. URL: http://proceedings.mlr.press/v75/li18b.html.
  34. Abstract learning frameworks for synthesis. In Marsha Chechik and Jean-François Raskin, editors, Tools and Algorithms for the Construction and Analysis of Systems - 22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings, volume 9636 of Lecture Notes in Computer Science, pages 167–185. Springer, 2016. doi:10.1007/978-3-662-49674-9_10.
  35. Johann A. Makowsky. Algorithmic uses of the Feferman-Vaught theorem. Ann. Pure Appl. Log., 126(1-3):159–213, 2004. doi:10.1016/j.apal.2003.11.002.
  36. Foundations of Machine Learning. Adaptive computation and machine learning. MIT Press, 2nd edition, 2018. URL: http://mitpress.mit.edu/books/foundations-machine-learning-0.
  37. Stephen Muggleton. Inductive logic programming. New Generation Computing, 8(4):295–318, February 1991. doi:10.1007/BF03037089.
  38. Inductive Logic Programming: Theory and Methods. J. Log. Program., 19/20:629–679, 1994.
  39. Approximating clique-width and branch-width. J. Comb. Theory, Ser. B, 96(4):514–528, 2006. URL: https://koasas.kaist.ac.kr/handle/10203/90201, doi:10.1016/j.jctb.2005.10.006.
  40. Norbert Sauer. On the density of families of sets. J. Comb. Theory, Ser. A, 13(1):145–147, 1972. doi:10.1016/0097-3165(72)90019-2.
  41. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, Cambridge, 2014. URL: https://www.cambridge.org/core/books/understanding-machine-learning/3059695661405D25673058E43C8BE2A6, doi:10.1017/CBO9781107298019.
  42. Saharon Shelah. A combinatorial problem; stability and order for models and theories in infinitary languages. Pacific Journal of Mathematics, 41(1):247–261, 1972. doi:10.2140/pjm.1972.41.247.
  43. Learning Boolean functions with queries. In Yves Crama and Peter L. Hammer, editors, Boolean Models and Methods in Mathematics, Computer Science, and Engineering, pages 221–256. Cambridge University Press, 2010. doi:10.1017/cbo9780511780448.010.
  44. Learning twig and path queries. In Alin Deutsch, editor, 15th International Conference on Database Theory, ICDT 2012, Berlin, Germany, March 26-29, 2012, pages 140–154. ACM, 2012. doi:10.1145/2274576.2274592.
  45. Balder ten Cate and Victor Dalmau. Conjunctive queries: Unique characterizations and exact learnability. In Ke Yi and Zhewei Wei, editors, 24th International Conference on Database Theory, ICDT 2021, March 23-26, 2021, Nicosia, Cyprus, volume 186 of LIPIcs, pages 9:1–9:24. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.ICDT.2021.9.
  46. Learning schema mappings. ACM Trans. Database Syst., 38(4):28:1–28:31, 2013. doi:10.1145/2539032.2539035.
  47. Active learning of GAV schema mappings. In Jan Van den Bussche and Marcelo Arenas, editors, Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, Houston, TX, USA, June 10-15, 2018, pages 355–368. ACM, 2018. doi:10.1145/3196959.3196974.
  48. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, November 1984. URL: https://dl.acm.org/doi/10.1145/1968.1972, doi:10.1145/1968.1972.
  49. Steffen van Bergerem. Learning Concepts Definable in First-Order Logic with Counting. In 2019 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 1–13, Vancouver, BC, Canada, June 2019. IEEE. URL: https://ieeexplore.ieee.org/document/8785811/, doi:10.1109/LICS.2019.8785811.
  50. Steffen van Bergerem. Descriptive Complexity of Learning. PhD thesis, RWTH Aachen University, Germany, 2023. URL: https://publications.rwth-aachen.de/record/953243, doi:10.18154/RWTH-2023-02554.
  51. On the Parameterized Complexity of Learning First-Order Logic. In Leonid Libkin and Pablo Barceló, editors, PODS ’22: International Conference on Management of Data, Philadelphia, PA, USA, June 12 - 17, 2022, pages 337–346. ACM, 2022. doi:10.1145/3517804.3524151.
  52. Steffen van Bergerem and Nicole Schweikardt. Learning concepts described by weight aggregation logic. In 29th EACSL Annual Conference on Computer Science Logic, CSL 2021, Ljubljana, Slovenia (Virtual Conference), January 25-28, 2021, volume 183 of LIPIcs, pages 10:1–10:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.CSL.2021.10.
  53. Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems 4, NIPS 1991, Denver, Colorado, USA, December 2-5, 1991, pages 831–838, 1991. URL: https://proceedings.neurips.cc/paper_files/paper/1991/file/ff4d5fbbafdf976cfdc032e3bde78de5-Paper.pdf.
  54. A data-driven CHC solver. In Jeffrey S. Foster and Dan Grossman, editors, Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2018, Philadelphia, PA, USA, June 18-22, 2018, pages 707–721. ACM, 2018. doi:10.1145/3192366.3192416.
