Capturing waste collection planning expert knowledge in a fitness function through preference learning (2402.01849v1)

Published 2 Feb 2024 in cs.LG and cs.AI

Abstract: This paper addresses the COGERSA waste collection process. Until now, experts have designed the process manually through trial and error. The resulting process is not globally optimized, since it has been built progressively and locally as council demands appeared. Planning optimization algorithms could solve it, but they require a fitness function to evaluate the quality of a route planning. The drawback is that, due to the complexity of the process, even the experts cannot propose one in a straightforward way. Hence, the goal of this paper is to build a fitness function through a preference learning framework, taking advantage of the available expert knowledge and expertise. Several key performance indicators, together with preference judgments, are carefully established with the experts in order to learn a promising fitness function. In particular, the additivity of these indicators makes the task much more affordable, since it allows working with individual routes rather than with whole route plannings. In addition, a feature selection analysis is performed over the indicators, since the experts suspect a potential (but unknown) redundancy among them. The experimental results confirm this hypothesis: the best $C$-index ($98\%$ against around $94\%$) is reached when 6 or 8 of the 21 indicators are used. Truck load and the distance travelled along non-main roads appear to be highly promising key performance indicators. A comparison with other existing approaches shows that the proposed method clearly outperforms them, raising the $C$-index from $72\%$ or $90\%$ to $98\%$.
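The core idea in the abstract — turning expert pairwise preference judgments over routes into an additive fitness function, then scoring it with the $C$-index — can be illustrated with a minimal sketch. This is not the paper's actual pipeline or data; the synthetic KPIs, the linear-SVM choice, and all names are assumptions made for illustration only.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical setup: 200 routes described by 6 KPIs (e.g. truck load,
# distance on non-main roads, ...).  The "true" expert utility is an
# unknown linear combination we try to recover from pairwise judgments.
n_routes, n_kpis = 200, 6
X = rng.normal(size=(n_routes, n_kpis))
true_w = rng.normal(size=n_kpis)
utility = X @ true_w

# Expert preference judgments: for random route pairs (i, j), the expert
# states which route is better.  Each judgment becomes a difference
# vector x_i - x_j with label +1 (i preferred) or -1, so a linear
# classifier on differences yields an additive fitness f(x) = w . x.
pairs = rng.choice(n_routes, size=(500, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
diffs = X[pairs[:, 0]] - X[pairs[:, 1]]
labels = np.where(utility[pairs[:, 0]] > utility[pairs[:, 1]], 1, -1)

clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(diffs, labels)
w = clf.coef_.ravel()  # learned KPI weights define the fitness function

# C-index: fraction of fresh pairs whose order the learned fitness
# reproduces correctly (1.0 = perfect ranking, 0.5 = random).
test_pairs = rng.choice(n_routes, size=(500, 2))
test_pairs = test_pairs[test_pairs[:, 0] != test_pairs[:, 1]]
pred = (X[test_pairs[:, 0]] - X[test_pairs[:, 1]]) @ w
truth = utility[test_pairs[:, 0]] - utility[test_pairs[:, 1]]
c_index = float(np.mean(np.sign(pred) == np.sign(truth)))
print(f"C-index: {c_index:.2f}")
```

The additivity assumption is what makes the difference-vector trick work: if a planning's fitness is the sum of its routes' fitnesses, preferences between plannings reduce to linear constraints on per-route KPI weights, which is far cheaper to learn than a function over whole plannings.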

