
Removing the Curse of Superefficiency: an Effective Strategy For Distributed Computing in Isotonic Regression (1806.08542v1)

Published 22 Jun 2018 in math.ST and stat.TH

Abstract: We propose a strategy for computing the isotonic least-squares estimate of a monotone function in a general regression setting where the data are distributed across different servers and the observations across servers, though independent, can come from heterogeneous sub-populations, thereby violating the identically distributed assumption. Our strategy fixes the super-efficiency phenomenon observed in prior work on distributed computing in the isotonic regression framework, where averaging several isotonic estimates (each computed at a local server) on a central server produces super-efficient estimates that do not replicate the properties of the global isotonic estimator, i.e. the isotonic estimate that would be constructed by transferring all the data to a single server. The new estimator proposed in this paper works by smoothing the data on each local server, communicating the smoothed summaries to the central server, and then computing an isotonic estimate at the central server; it is shown to replicate the asymptotic properties of the global estimator and to overcome the super-efficiency phenomenon exhibited by earlier estimators. For data on $N$ observations, the new estimator can be constructed by transferring data just over order $N^{1/3}$ across servers [as compared to transferring data of order $N$ to compute the global isotonic estimator], and requires the same order of computing time as the global estimator.
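The three-step strategy described in the abstract (smooth locally, communicate the smoothed summaries, fit an isotonic regression centrally) can be illustrated with a minimal sketch. The bin-averaging smoother, the choice of roughly $n^{1/3}$ summaries per server, and all function names below are illustrative assumptions, not the paper's exact construction or tuning.

```python
import numpy as np

def local_smooth(x, y, n_bins):
    """On a local server: reduce (x, y) to bin-averaged summaries.

    Hypothetical smoother: average responses within equal-count bins of x,
    so only O(n_bins) pairs need to be communicated to the central server.
    """
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    x_summary = np.array([b.mean() for b in np.array_split(x_sorted, n_bins)])
    y_summary = np.array([b.mean() for b in np.array_split(y_sorted, n_bins)])
    return x_summary, y_summary

def pava(y, w):
    """Pool-adjacent-violators algorithm for weighted, non-decreasing
    isotonic least squares."""
    values, weights, sizes = [], [], []
    for yi, wi in zip(y.astype(float), w.astype(float)):
        values.append(yi); weights.append(wi); sizes.append(1)
        # Merge adjacent blocks while monotonicity is violated.
        while len(values) > 1 and values[-2] > values[-1]:
            v2, w2, s2 = values.pop(), weights.pop(), sizes.pop()
            v1, w1, s1 = values.pop(), weights.pop(), sizes.pop()
            values.append((w1 * v1 + w2 * v2) / (w1 + w2))
            weights.append(w1 + w2)
            sizes.append(s1 + s2)
    return np.repeat(values, sizes)

def central_isotonic(summaries):
    """On the central server: pool the smoothed summaries from all servers
    and fit a single isotonic regression to the pooled points."""
    x_all = np.concatenate([s[0] for s in summaries])
    y_all = np.concatenate([s[1] for s in summaries])
    order = np.argsort(x_all)
    fitted = pava(y_all[order], np.ones_like(y_all[order]))
    return x_all[order], fitted

# Toy usage: 4 servers, each compressing its local data to ~n^{1/3} summaries.
rng = np.random.default_rng(0)
summaries = []
for _ in range(4):
    n = 3000
    x = rng.uniform(0, 1, n)
    y = np.sqrt(x) + rng.normal(0, 0.1, n)   # monotone signal plus noise
    summaries.append(local_smooth(x, y, n_bins=int(n ** (1 / 3))))
x_grid, iso_fit = central_isotonic(summaries)
```

The contrast with the earlier, super-efficient approach is that nothing isotonic is fitted locally: each server only transmits a small number of smoothed points, and the single monotone fit happens once, at the center, on the pooled summaries.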
