Cross-benchmarking for performance evaluation: looking across best practices of different peer groups using DEA (1912.01514v1)

Published 3 Dec 2019 in math.OC

Abstract: In benchmarking, organizations look outward to examine the performance of others in their industry or sector; often, they can learn from the best practices of some of these peers and improve. To develop this idea within the framework of Data Envelopment Analysis (DEA), this paper extends the common benchmarking framework proposed in Ruiz and Sirvent (2016) to an approach based on benchmarking decision making units (DMUs) against several reference sets, which we call cross-benchmarking. First, we design a procedure for selecting the reference sets (as defined in DEA) that establish the common framework for the benchmarking. Next, we formulate benchmarking models that set the closest targets relative to the selected reference sets. The availability of a wider spectrum of targets may offer managers the possibility of choosing among alternative ways to improve, taking into account what can be learned from the best practices of different peer groups. Cross-benchmarking is thus a flexible tool that can support a process of future planning while considering different managerial implications.
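For context, the computation underlying DEA benchmarking is a linear program that evaluates each DMU against the others and identifies its efficient peers, which is the starting point for defining reference sets. The sketch below is a minimal illustration using the standard input-oriented CCR envelopment model solved with SciPy; the function name, toy data, and tolerance are assumptions for illustration only, not the paper's cross-benchmarking models (which set closest targets against several selected reference sets).

```python
# Minimal sketch: input-oriented CCR DEA efficiency via linear programming.
# Toy data and names are illustrative, not taken from the paper.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.
    X: inputs, shape (m, n); Y: outputs, shape (s, n); n = number of DMUs."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0], res.x[1:]  # efficiency score, peer weights lambda

# Toy example: 2 inputs, 1 output, 4 DMUs
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(X.shape[1]):
    theta, lam = ccr_efficiency(X, Y, j)
    peers = np.where(lam > 1e-6)[0]  # efficient peers of DMU j
    print(f"DMU {j}: efficiency = {theta:.3f}, peers = {peers.tolist()}")
```

The nonzero peer weights identify a single reference set (peer group) for each inefficient DMU; the paper's cross-benchmarking approach generalizes this step by selecting several reference sets and computing closest targets relative to each of them.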

Authors (3)
  1. Nuria Ramón (3 papers)
  2. José L. Ruiz (7 papers)
  3. Inmaculada Sirvent (5 papers)
