- The paper analyzes 10 FaaS platforms to reveal variations in supported languages, resource configurations, and invocation models.
- It employs a dual methodology combining cloud provider inspection with industry insights to ensure comprehensive coverage.
- The study highlights the growing role of edge computing and custom runtimes in shaping future serverless innovations.
Analysis of Public Functions-as-a-Service Providers
The paper "The State of FaaS: An analysis of public Functions-as-a-Service providers" by Nnamdi Ekwe-Ekwe and Lucas Amos presents a detailed review and analysis of ten publicly available FaaS platforms. The paper contextualizes the current serverless computing landscape by extending beyond the conventionally scrutinized major providers: AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions.
Summary and Findings
Methodology and Scope
The authors employed a two-pronged methodology to compile a list of FaaS providers: (1) surveying public cloud providers for FaaS offerings and (2) leveraging insights from industry practitioners and developers. This resulted in the selection of ten FaaS platforms for review: Alibaba Function Compute, AWS Lambda, Cloudflare Workers, DigitalOcean Functions, Google Cloud Functions, IBM Cloud Code Engine, Microsoft Azure Functions, Netlify Functions, Oracle Cloud Functions, and Vercel Functions.
The analysis is structured around evaluating several dimensions of these FaaS platforms, including their history, supported languages, invocation types, resource configurations, regional availability, and pricing models.
Evaluation and Features
- Languages Supported: The authors found that JavaScript, Python, and Go were the most commonly supported languages across the providers. Four of the ten providers supported custom runtimes, enabling greater flexibility.
- Resource Configurations: There was a wide variance in configuration options. For instance, Alibaba and Google offered the highest memory caps (32GB and 32GiB, respectively), while Cloudflare Workers and Vercel Edge Functions offered the least (128MB).
- Invocation Types: AWS Lambda stood out for allowing multiple, diverse triggers on a single function. In contrast, platforms like Netlify and Cloudflare offered more restrictive invocation options, primarily limited to HTTP.
- Regions Supported: Cloudflare Workers had the widest global coverage with 323 locations due to its edge-first architecture, while IBM and DigitalOcean offered more limited regional support.
- Pricing Models: A standard pricing model dominated, charging for function invocations and resource usage. The specifics differed, however, with Cloudflare uniquely billing on CPU time rather than wall-clock duration.
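The dominant pricing model described above can be sketched as a simple cost estimate. The per-request and per-GB-second rates below are illustrative assumptions for the sketch, not any provider's published prices:

```javascript
// Sketch of the dominant FaaS pricing model: a per-invocation charge
// plus a charge for memory-time (GB-seconds).
// NOTE: the rates are assumed placeholder values, not real provider prices.
const PER_MILLION_REQUESTS = 0.20; // assumed rate, USD
const PER_GB_SECOND = 0.0000167;   // assumed rate, USD

function monthlyCost({ invocations, avgDurationMs, memoryGb }) {
  const requestCharge = (invocations / 1e6) * PER_MILLION_REQUESTS;
  // A Cloudflare-style model would substitute CPU time for
  // wall-clock duration in the line below.
  const gbSeconds = invocations * (avgDurationMs / 1000) * memoryGb;
  const durationCharge = gbSeconds * PER_GB_SECOND;
  return requestCharge + durationCharge;
}

// Example: 10M invocations/month, 120 ms average, 0.5 GB memory.
const cost = monthlyCost({ invocations: 10e6, avgDurationMs: 120, memoryGb: 0.5 });
// ≈ 12.02 USD under these assumed rates
```

Because duration is multiplied by configured memory, the same workload can price very differently across the resource configurations the paper surveys.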
Observations
- Maturation of New Providers: Newer FaaS providers offer limited features, resources, and geographical reach compared to the established platforms. This curtails their current competitiveness.
- Differentiation in Functionality: While core features remain consistent, significant differences exist in invocation types, supported languages, and resource configurations. These differences can impact application deployment and operation significantly.
- Edge Computing: The use of edge locations for deploying serverless functions is emerging, especially visible with platforms like Cloudflare, Vercel, and Netlify. However, resource limits at the edge necessitate careful function selection.
- Language Popularity: JavaScript, Python, and Go are the most supported languages, indicating their key role in serverless ecosystems.
- Generosity in Resources: Established providers generally offered more generous resource configurations compared to newer entrants.
- Core Pricing Elements: Despite varied pricing methodologies, the fundamental model remains invocation count and resource duration-based billing.
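The edge-first, HTTP-only invocation model noted above can be illustrated with a minimal handler. This is a generic sketch in the style of Cloudflare Workers' fetch handler, where the function is simply a request-to-response mapping; the route and response bodies are invented for the example:

```javascript
// Minimal sketch of a Workers-style, HTTP-triggered edge function.
// On HTTP-only platforms, the function receives a Request and must
// return a Response; there are no queue, timer, or storage triggers.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response("Hello from the edge", { status: 200 });
    }
    return new Response("Not found", { status: 404 });
  },
};
// In a real Worker this object would be the module's default export.
```

The narrow trigger surface is part of what keeps such functions deployable across hundreds of edge locations, but it also means the tight memory limits and HTTP-only model constrain which workloads fit at the edge.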
Implications and Future Directions
The insights from this paper provide a nuanced understanding of the current state of public FaaS platforms. For practitioners, this can inform better decision-making regarding the selection of a suitable FaaS provider based on specific application requirements and constraints. Theoretically, these findings outline avenues for future research, particularly in exploring the performance and cost-efficiency of newly emerging providers.
Empirical studies focusing on execution performance, cold-start behaviors, and scalability in newly established platforms could yield valuable benchmarking data. Moreover, extending the examination to niche functionalities and their impact on evolving workloads could present a more comprehensive view of the serverless ecosystem's potential and limitations.
Conclusion
The paper presents a significant contribution by broadening the scope of analysis beyond well-established FaaS providers. The observations lay a foundation for future research in serverless computing, suggesting paths for both academic and practical exploration. Understanding the diverse capabilities and limitations of emerging FaaS offerings can drive more informed usage and innovation in serverless computing applications.