
Serverless Computing: One Step Forward, Two Steps Back (1812.03651v1)

Published 10 Dec 2018 in cs.DC and cs.DB

Abstract: Serverless computing offers the potential to program the cloud in an autoscaling, pay-as-you-go manner. In this paper we address critical gaps in first-generation serverless computing, which place its autoscaling potential at odds with dominant trends in modern computing: notably data-centric and distributed computing, but also open source and custom hardware. Put together, these gaps make current serverless offerings a bad fit for cloud innovation and particularly bad for data systems innovation. In addition to pinpointing some of the main shortfalls of current serverless architectures, we raise a set of challenges we believe must be met to unlock the radical potential that the cloud---with its exabytes of storage and millions of cores---should offer to innovative developers.

Citations (374)

Summary

  • The paper presents a detailed analysis of AWS Lambda, revealing cost inefficiencies, data movement penalties, and limited support for stateful and distributed computing.
  • The paper demonstrates how current data-shipping architectures in FaaS impose latency issues and hinder optimal performance for cloud-scale applications.
  • The paper advocates a shift toward liquid computing, emphasizing fluid code-data management and heterogeneous hardware to address existing operational constraints.

Serverless Computing: An Assessment of Its Current State and Opportunities for Evolution

The paper "Serverless Computing: One Step Forward, Two Steps Back" presents a critical examination of current serverless computing paradigms, particularly Functions-as-a-Service (FaaS) offerings, using Amazon Web Services (AWS) Lambda as its primary case study. The authors bring to light significant limitations of serverless computing in its present form while acknowledging its potential as a pivotal component of cloud infrastructure.

Summary and Evaluation of Serverless Computing

Serverless computing emerged as a promising paradigm, purportedly alleviating the burden of server management from developers while enabling autoscaling and pay-as-you-go economics. The paper meticulously scrutinizes these promises, highlighting inherent gaps in FaaS that impede its efficacy as a comprehensive solution for cloud-scale, data-intensive applications. The core limitations identified include:

  1. Data-Shipping Architecture: Current FaaS models require data to be moved to the functions, resulting in inefficient data handling and increased latency. This goes against the well-understood principle of co-locating computation close to data to optimize performance.
  2. Inhibited Distributed Computing: With no provision for direct function-to-function communication, current FaaS offerings severely limit distributed systems and parallel computing capabilities, which rely heavily on inter-process communication.
  3. Absence of Specialized Hardware: The lack of support for hardware acceleration mechanisms such as GPUs in serverless environments restricts their utility for computation-intensive workloads like machine learning training, where hardware acceleration is vital.
  4. Limitation on Stateful Operations: The transient nature of function invocations makes maintaining state across calls cumbersome, and current serverless architectures do not provide a seamless mechanism to overcome this challenge.
  5. Cost and Performance Concerns: Empirical analyses of AWS Lambda for tasks like model training and prediction serving demonstrate severe performance drawbacks and cost inefficiencies when compared to traditional serverful options such as EC2 instances.
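The interaction between points 1 and 2 can be illustrated with a small sketch. Because FaaS functions are stateless and cannot address one another directly, even a two-stage pipeline must round-trip every intermediate result through shared storage. The names here (`BLOB_STORE`, `mapper`, `reducer`) are illustrative, not AWS APIs; in a real deployment the store would be a service such as S3, and every read and write below would be a network hop.

```python
# Sketch: stateless FaaS functions communicating only through shared storage.
# `BLOB_STORE` stands in for a slow object store such as S3; each access
# here models a network round-trip in a real deployment.

BLOB_STORE = {}  # key -> bytes


def mapper(event):
    """A stateless function: reads its input from storage, writes output back."""
    data = BLOB_STORE[event["input_key"]]      # data shipped TO the code
    result = bytes(b + 1 for b in data)        # trivial per-byte compute
    BLOB_STORE[event["output_key"]] = result   # result shipped back out
    # There is no way to hand `result` to the next function in-memory.


def reducer(event):
    """A second function that can only see the mapper's output via storage."""
    parts = [BLOB_STORE[k] for k in event["input_keys"]]
    BLOB_STORE[event["output_key"]] = b"".join(parts)


# Orchestration: two storage writes and two reads for one logical pipeline.
BLOB_STORE["in/0"] = b"\x00\x01\x02"
mapper({"input_key": "in/0", "output_key": "mid/0"})
reducer({"input_keys": ["mid/0"], "output_key": "out/0"})
print(BLOB_STORE["out/0"])  # b'\x01\x02\x03'
```

A serverful design could pass the intermediate bytes between processes over a socket or in memory; the storage-mediated pattern above is what drives the data-movement penalties and latency the paper measures.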

Implications and Future Directions

The authors propose a vision that extends the promise of serverless computing to what they term "liquid computing." This concept advocates a fluid architecture where both data and computation can be dynamically placed and optimized across the cloud infrastructure. Key components of this forward-looking approach include:

  • Fluid Code and Data Management: Emphasizing the seamless movement of code to data as opposed to the current data-to-code model. This approach would leverage cloud elasticity more effectively while preserving performance.
  • Support for Heterogeneous Hardware: Integrating heterogeneous processors in the cloud environment could unlock unprecedented computational performance, especially for tasks demanding high computational throughput like AI model training.
  • Programming Model Evolution: Encouraging the development of new asynchronous programming languages and DSLs that inherently address cloud-native application design challenges.
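The asymmetry motivating fluid code-and-data placement can be made concrete with a rough sketch. The partition size and task descriptor below are illustrative assumptions, not measurements from the paper; the point is only that a serialized task and its aggregate result are orders of magnitude smaller than the data partition they summarize.

```python
# Sketch: shipping small code to large data versus shipping the data to a
# fixed function. All sizes here are illustrative.
import pickle

DATA_PARTITION = list(range(1_000_000))  # imagine this lives on a storage node

# Data-to-code (today's FaaS): the whole partition crosses the network
# to reach the stateless function.
data_bytes_moved = len(pickle.dumps(DATA_PARTITION))

# Code-to-data (the fluid direction): only a task descriptor travels out,
# and only the aggregate travels back.
task = {"op": "sum"}            # hypothetical serialized task
result = sum(DATA_PARTITION)    # computed where the data lives
code_bytes_moved = len(pickle.dumps(task)) + len(pickle.dumps(result))

print(data_bytes_moved > 1000 * code_bytes_moved)  # True
```

The ratio grows with partition size, which is why the authors argue that a platform unable to move computation toward data forfeits most of the cloud's locality advantages.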

Conclusion

The paper provides a thorough critique of serverless computing, exposing significant operational inefficiencies with strong quantitative support. The discussion lays a foundation for re-evaluating and redesigning current serverless models so that they can fulfill their originally envisaged purposes, and it urges both researchers and industry practitioners to push toward true cloud-scale programming and a more capable infrastructure paradigm. This evolution is essential not only to harness the cloud's full potential but also to foster innovation that transcends current proprietary limitations and vendor lock-in. While tangible progress will require considerable effort and multidisciplinary engagement, the pursuit stands to transform serverless computing into a fundamentally more resilient and adaptive component of modern cloud architectures.