Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping (1502.01063v2)

Published 3 Feb 2015 in cs.CC and cs.DS

Abstract: Classic similarity measures of strings are longest common subsequence and Levenshtein distance (i.e., the classic edit distance). A classic similarity measure of curves is dynamic time warping. These measures can be computed by simple $O(n^2)$ dynamic programming algorithms, and despite much effort no algorithms with significantly better running time are known. We prove that, even restricted to binary strings or one-dimensional curves, respectively, these measures do not have strongly subquadratic time algorithms, i.e., no algorithms with running time $O(n^{2-\varepsilon})$ for any $\varepsilon > 0$, unless the Strong Exponential Time Hypothesis fails. We generalize the result to edit distance for arbitrary fixed costs of the four operations (deletion in one of the two strings, matching, substitution), by identifying trivial cases that can be solved in constant time, and proving quadratic-time hardness on binary strings for all other cost choices. This improves and generalizes the known hardness result for Levenshtein distance [Backurs, Indyk STOC'15] by the restriction to binary strings and the generalization to arbitrary costs, and adds important problems to a recent line of research showing conditional lower bounds for a growing number of quadratic time problems. As our main technical contribution, we introduce a framework for proving quadratic-time hardness of similarity measures. To apply the framework it suffices to construct a single gadget, which encapsulates all the expressive power necessary to emulate a reduction from satisfiability. Finally, we prove quadratic-time hardness for longest palindromic subsequence and longest tandem subsequence via reductions from longest common subsequence, showing that conditional lower bounds based on the Strong Exponential Time Hypothesis also apply to string problems that are not necessarily similarity measures.

Citations (238)

Summary

  • The paper establishes quadratic hardness for computing string similarity measures like LCS and edit distance under SETH.
  • It generalizes the results to edit distances with fixed costs and extends the analysis to dynamic time warping through alignment gadgets.
  • The paper offers a systematic framework that reinforces the computational difficulty of various string problems via reductions from satisfiability.

Conditional Lower Bounds for String Problems and Dynamic Time Warping: A Summary

The paper by Bringmann and Künnemann addresses a significant question in theoretical computer science: whether the running time for computing classic string and curve similarity measures can be improved. These measures have long been computed by simple quadratic-time dynamic programs, and despite much effort no significantly faster algorithms have been found. The authors use the Strong Exponential Time Hypothesis (SETH) to establish conditional lower bounds that substantiate the perceived computational difficulty of these problems.
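For reference, the $O(n^2)$ barrier the paper studies is exactly the cost of the textbook dynamic program; a minimal sketch for LCS (function and variable names are illustrative, not from the paper):

```python
def lcs_length(x: str, y: str) -> int:
    """Classic O(n*m) dynamic program for longest common subsequence."""
    n, m = len(x), len(y)
    # dp[i][j] = LCS length of the prefixes x[:i] and y[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[n][m]

print(lcs_length("10110", "01101"))  # 4, e.g. via the common subsequence "0110"
```

The paper's lower bound says that, assuming SETH, no algorithm can beat this quadratic running time by a polynomial factor, even for binary strings such as the ones above.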

Main Contributions

  1. Quadratic Hardness for String Similarity Measures: The paper establishes that strongly subquadratic algorithms do not exist for computing classic string similarities like longest common subsequence (LCS) and Levenshtein distance (edit distance), unless SETH fails. This assertion holds even when restricted to binary strings. The analysis extends to dynamic time warping (DTW) on curves, restricted to one-dimensional curves taking values in $\{0,1,2,4,8\}$.
  2. Generalization for Edit Distance: The results extend to edit distances with arbitrary fixed costs for the operations of deletion (in either string), matching, and substitution. Apart from trivial cost configurations that are solvable in constant time, every choice of costs is shown to be quadratically hard on binary strings.
  3. Framework for Quadratic Hardness: A major technical offering of the paper is a systematic framework leveraging alignment gadgets that can transform an input instance into a constrained type to simulate reductions from satisfiability problems, yielding quadratic lower bounds. This generalized approach simplifies and abstracts proof processes for SETH-based conditional lower bounds.
  4. Additional Hardness Results for Different Problems: Additional reductions show that problems like the longest palindromic subsequence and longest tandem subsequence are also quadratically hard under SETH. These results highlight that such complexity is not limited to similarity measures but extends to a broader class of string problems.
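To make the setting of contribution 2 concrete, here is the quadratic DP for edit distance with arbitrary fixed operation costs. The parameter names are illustrative (not the paper's notation); with the default values it reduces to the classic Levenshtein distance:

```python
def edit_distance(x: str, y: str,
                  c_del_x: float = 1.0,   # cost of deleting a character of x
                  c_del_y: float = 1.0,   # cost of deleting a character of y
                  c_match: float = 0.0,   # cost of aligning two equal characters
                  c_subst: float = 1.0) -> float:
    """O(n*m) dynamic program for edit distance with arbitrary fixed costs."""
    n, m = len(x), len(y)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:  # delete x[i-1]
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + c_del_x)
            if j > 0:  # delete y[j-1]
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + c_del_y)
            if i > 0 and j > 0:  # align x[i-1] with y[j-1]
                pair = c_match if x[i - 1] == y[j - 1] else c_subst
                dp[i][j] = min(dp[i][j], dp[i - 1][j - 1] + pair)
    return dp[n][m]

print(edit_distance("kitten", "sitting"))  # 3.0 (classic Levenshtein distance)
```

The paper's result says that for every non-trivial choice of these four cost parameters, this quadratic running time cannot be beaten by a polynomial factor on binary strings unless SETH fails.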

Methodological Highlights

  • Alignment Gadgets: These are the central constructs of the framework; a single such gadget encapsulates all the expressive power needed to emulate a reduction from satisfiability, so proving hardness for a given similarity measure reduces to constructing one gadget for it.
  • Planarity and Graph Structures: A graph-theoretical view of how blocks of the constructed curves can be aligned in the dynamic time warping reduction ensures that warping paths behave as intended, so that the minimal alignment cost can be analyzed.
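The DTW recurrence these alignment arguments reason about is itself a simple quadratic DP; a minimal sketch for one-dimensional curves, using $|a_i - b_j|$ as the local distance (names are illustrative):

```python
def dtw(a: list, b: list) -> float:
    """O(n*m) dynamic program for dynamic time warping of 1-D curves."""
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local distance of the aligned points
            # a warping step may advance in a, in b, or in both
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[n][m]

# Repeated values can be absorbed at zero cost by the warping path:
print(dtw([0, 2, 4], [0, 2, 2, 4]))  # 0.0
```

The paper shows this quadratic time is essentially optimal under SETH, even for curves taking only the five values $\{0,1,2,4,8\}$.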

Implications and Future Directions

The implications of the paper span both theoretical and applied computational domains. Practically, it delineates the boundary of efficient algorithm development for critical similarity measures used in bioinformatics, natural language processing, and other domains reliant on string comparison. Theoretically, it encourages deeper exploration of new paradigms and assumptions that could potentially weaken or bypass the current limitations imposed by SETH.

While the current framework addresses quadratic lower bounds effectively, the paper invites exploration of conditional bounds at other complexity levels (e.g., cubic-time problems) and of whether similar hypotheses rule out strongly subquadratic approximation algorithms. Further work could examine alternative computing models or assumptions that provide more nuanced insight into these computational boundaries.

In conclusion, the alignment gadget-based framework and its application to ruling out strongly subquadratic solutions offer a powerful toolkit for exploring complexity across polynomial-time solvable string problems. As these techniques develop, they are poised to enhance understanding of computational limits and inspire innovation in algorithmic design across diverse areas.