PMGDA: A Preference-based Multiple Gradient Descent Algorithm (2402.09492v2)

Published 14 Feb 2024 in cs.LG

Abstract: In many multi-objective machine learning applications, such as multi-task learning with conflicting objectives and multi-objective reinforcement learning, it is desirable to find a Pareto solution that matches a given preference of a decision maker. These problems are often large-scale, with gradient information available, yet they are not handled well by existing algorithms. To tackle this issue, this paper proposes a novel predict-and-correct framework for locating a Pareto solution that fits the preference of a decision maker. In the proposed framework, a constraint function is introduced into the search process to align the solution with a user-specified preference; this constraint can be optimized simultaneously with the multiple objective functions. Experimental results show that the proposed method can efficiently find a particular Pareto solution matching a decision maker's preference on standard multi-objective benchmarks and on multi-task learning and multi-objective reinforcement learning problems with thousands of decision variables. Code is available at: https://github.com/xzhang2523/pmgda. Our code is currently provided in the pgmda.rar attached file and will be open-sourced after publication.
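The predict-and-correct idea in the abstract — descend the objectives while driving a preference-alignment constraint to zero — can be illustrated with a minimal penalty-method sketch. This is not the paper's algorithm (PMGDA solves a subproblem for a common-descent direction, in the spirit of MGDA); the toy objectives, the constraint encoding of the preference, and the penalty weight below are all illustrative assumptions:

```python
import numpy as np

# Toy bi-objective problem (illustrative, not from the paper):
# f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
a = np.array([0.0, 0.0])
b = np.array([1.0, 1.0])

def f(x):
    return np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])

def grad_f(x):
    return np.stack([2 * (x - a), 2 * (x - b)])  # shape (2, dim)

# Preference vector: seek the Pareto point whose objective vector is
# proportional to pref, encoded as the scalar alignment constraint
# h(x) = pref[1] * f1(x) - pref[0] * f2(x) = 0.
pref = np.array([1.0, 3.0])

def h(x):
    f1, f2 = f(x)
    return pref[1] * f1 - pref[0] * f2

def grad_h(x):
    g1, g2 = grad_f(x)
    return pref[1] * g1 - pref[0] * g2

lam = 1.0  # penalty weight on the constraint violation (assumed value)
lr = 0.01
x = np.array([0.8, 0.2])
for _ in range(2000):
    # One combined step: equal-weight descent on both objectives (a crude
    # stand-in for a common-descent direction) plus a correction term that
    # penalises the squared constraint violation h(x)^2.
    step = 2 * lam * h(x) * grad_h(x) + grad_f(x).mean(axis=0)
    x = x - lr * step

f1, f2 = f(x)
print(f1 / f2)  # approaches pref[0] / pref[1] = 1/3 as lam grows
```

With a finite penalty weight the ratio f1/f2 only approximates the preference ratio; the paper's framework instead treats the alignment condition as an explicit constraint optimized alongside the objectives, which avoids tuning such a weight.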
