Large-Scale Intelligent Microservices (2009.08044v3)

Published 17 Sep 2020 in cs.AI, cs.DB, cs.DC, cs.LG, and cs.NI

Abstract: Deploying Machine Learning (ML) algorithms within databases is a challenge due to the varied computational footprints of modern ML algorithms and the myriad of database technologies, each with its own restrictive syntax. We introduce an Apache Spark-based micro-service orchestration framework that extends database operations to include web service primitives. Our system can orchestrate web services across hundreds of machines and takes full advantage of cluster, thread, and asynchronous parallelism. Using this framework, we provide large-scale clients for intelligent services such as speech, vision, search, anomaly detection, and text analysis. This allows users to integrate ready-to-use intelligence into any datastore with an Apache Spark connector. To eliminate the majority of overhead from network communication, we also introduce a low-latency containerized version of our architecture. Finally, we demonstrate that the services we investigate are competitive on a variety of benchmarks, and present two applications of this framework to create intelligent search engines and real-time auto race analytics systems.
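
The abstract describes distributing web service calls over Spark's data-parallel primitives. As a rough illustration of that idea (not the paper's actual API), the sketch below wraps a generic HTTP call in a PySpark mapPartitions pass; the endpoint URL and the request/response shape are hypothetical, and the paper's framework additionally layers thread-level and asynchronous parallelism, batching, and low-latency containerized deployment on top of this basic pattern.

```python
# Minimal sketch: fanning out web service requests across Spark partitions.
# Assumptions (not from the paper): a JSON-over-HTTP service at SERVICE_URL
# that accepts {"text": ...} and returns a JSON body, and an input DataFrame
# with a single "text" column.
import json
import urllib.request

from pyspark.sql import SparkSession

SERVICE_URL = "https://example.com/analyze"  # hypothetical endpoint


def call_service(rows):
    # One HTTP request per row within a partition. Each Spark executor runs
    # this function on its own partitions, giving cluster-level parallelism;
    # the paper's framework adds thread and async parallelism beyond this.
    for row in rows:
        request = urllib.request.Request(
            SERVICE_URL,
            data=json.dumps({"text": row["text"]}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            yield (row["text"], response.read().decode("utf-8"))


spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("hello world",)], ["text"])

# Map the service over every partition and collect raw JSON responses
# alongside the original text.
results = df.rdd.mapPartitions(call_service).toDF(["text", "response"])
results.show(truncate=False)
```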

Authors (11)
  1. Mark Hamilton (20 papers)
  2. Nick Gonsalves (1 paper)
  3. Christina Lee (4 papers)
  4. Anand Raman (2 papers)
  5. Brendan Walsh (2 papers)
  6. Siddhartha Prasad (4 papers)
  7. Dalitso Banda (7 papers)
  8. Lucy Zhang (2 papers)
  9. Mei Gao (8 papers)
  10. Lei Zhang (1689 papers)
  11. William T. Freeman (114 papers)
Citations (5)
