Stream-DiffVSR: Low-Latency Streamable Video Super-Resolution via Auto-Regressive Diffusion (2512.23709v1)

Published 29 Dec 2025 in cs.CV

Abstract: Diffusion-based video super-resolution (VSR) methods achieve strong perceptual quality but remain impractical for latency-sensitive settings due to reliance on future frames and expensive multi-step denoising. We propose Stream-DiffVSR, a causally conditioned diffusion framework for efficient online VSR. Operating strictly on past frames, it combines a four-step distilled denoiser for fast inference, an Auto-regressive Temporal Guidance (ARTG) module that injects motion-aligned cues during latent denoising, and a lightweight temporal-aware decoder with a Temporal Processor Module (TPM) that enhances detail and temporal coherence. Stream-DiffVSR processes 720p frames in 0.328 seconds on an RTX4090 GPU and significantly outperforms prior diffusion-based methods. Compared with the online SOTA TMP, it boosts perceptual quality (LPIPS +0.095) while reducing latency by over 130x. Stream-DiffVSR achieves the lowest latency reported for diffusion-based VSR, reducing initial delay from over 4600 seconds to 0.328 seconds, thereby making it the first diffusion VSR method suitable for low-latency online deployment. Project page: https://jamichss.github.io/stream-diffvsr-project-page/

Summary

  • The paper introduces an auto-regressive diffusion model for streamable video super-resolution that reduces latency and memory usage while maintaining high image quality.
  • It leverages conditional denoising on past high-resolution frames to improve fidelity metrics (PSNR, SSIM) and the perceptual metric LPIPS, while improving temporal consistency.
  • Experimental results demonstrate its practical deployment in real-time streaming applications such as live broadcasting, AR/VR, and mobile video enhancement.

Stream-DiffVSR: Low-Latency Streamable Video Super-Resolution via Auto-Regressive Diffusion

Introduction

"Stream-DiffVSR: Low-Latency Streamable Video Super-Resolution via Auto-Regressive Diffusion" (2512.23709) introduces a novel paradigm for video super-resolution (VSR) that seeks to circumvent the intrinsic latency constraints of conventional diffusion-based approaches by employing an auto-regressive, streamable framework. The work is motivated by the need for high-fidelity, temporally consistent VSR in real-time and streaming scenarios—a challenge mostly unaddressed by prior methods that focus on offline, full-clip processing and are unsuited for applications with strict latency and memory requirements.

Methodology

The core technical contribution is the formulation of an auto-regressive diffusion model tailored for video processing under stringent latency demands. Key features include:

  • Auto-Regressive Frame Generation: Unlike standard diffusion models that restore whole video clips in batches, Stream-DiffVSR generates high-resolution frames sequentially. Each frame is predicted based on historical context, decomposing temporal dependencies in a manner directly compatible with streaming input.
  • Conditional Denoising and Fast Sampling: The model is conditioned on both low-resolution input frames and, crucially, the preceding generated high-resolution frames. This temporal recursion preserves continuity across frames while keeping inference efficient and tractable (a minimal sketch of the resulting causal loop follows this list).
  • Optimized Network Architecture: The framework utilizes efficient U-Net-based architectures enhanced for temporal modeling, along with specialized attention and context modules to improve information propagation across frames while maintaining manageable computational overhead.
  • Latency-Aware Inference: The design explicitly addresses real-time constraints, enabling inference pipelines that avoid full-sequence lookahead or global temporal aggregation, thus reducing both latency and memory footprint to suit deployment in streaming or online scenarios.
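
To make the causal dependency structure concrete, the sketch below shows a minimal streaming loop consistent with the description above. All function names (denoise_step, motion_guidance, decode), the latent shape, and the dummy stand-ins are illustrative assumptions, not the authors' released implementation; they only mark where the distilled denoiser, the ARTG module, and the temporal-aware decoder would plug in.

```python
import torch

def stream_super_resolve(lr_frames, denoise_step, motion_guidance, decode,
                         latent_shape, num_steps=4):
    """Causal, frame-by-frame VSR loop in the spirit of Stream-DiffVSR:
    each output depends only on the current LR frame and the previously
    generated HR frame, so no future frames are ever required."""
    prev_hr = None
    for lr in lr_frames:                         # frames arrive one at a time
        latent = torch.randn(latent_shape)       # start from noise in latent space
        guidance = motion_guidance(prev_hr, lr)  # motion-aligned cues from the past HR frame (ARTG-like)
        for step in range(num_steps):            # few-step (distilled) denoising
            latent = denoise_step(latent, lr, guidance, step)
        hr = decode(latent, prev_hr)             # temporal-aware decoding
        prev_hr = hr                             # recurse on our own output
        yield hr

# Dummy stand-ins so the sketch runs end to end; they are NOT the paper's networks.
latent_shape = (1, 4, 90, 160)
denoise = lambda z, lr, g, t: 0.5 * z + 0.5 * g
guide = lambda prev, lr: (torch.zeros(latent_shape) if prev is None
                          else prev.mean() * torch.ones(latent_shape))
decode = lambda z, prev: torch.nn.functional.interpolate(z[:, :3], scale_factor=8.0, mode="nearest")

lr_stream = (torch.rand(1, 3, 180, 320) for _ in range(5))  # fake low-resolution inputs
for i, hr in enumerate(stream_super_resolve(lr_stream, denoise, guide, decode, latent_shape)):
    print(f"frame {i}: HR output shape {tuple(hr.shape)}")
```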

Experimental Results

The model is evaluated on public VSR benchmarks and compared against both conventional deterministic VSR methods (e.g., BasicVSR++ [chan2022basicvsr++]) and recent diffusion-based VSR approaches (e.g., DiffVSR [li2025diffvsr], FlashVSR [zhuang2025flashvsr]). The analysis encompasses not only fidelity metrics—such as PSNR and SSIM—but also perceptual measures and real-world latency benchmarks.
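
As a point of reference for how per-frame latency figures of this kind are typically obtained, the snippet below is a minimal timing harness. The dummy bicubic-upsampling step, warmup/iteration counts, and input size are assumptions for illustration and are unrelated to the authors' measurement setup or their reported 0.328 s/720p figure.

```python
import time
import torch

def per_frame_latency(model_step, lr_frame, warmup=3, iters=20):
    """Rough wall-clock latency of one streaming step, in seconds per frame."""
    for _ in range(warmup):                      # warm up kernels and caches
        model_step(lr_frame)
    if lr_frame.device.type == "cuda":
        torch.cuda.synchronize()                 # flush queued GPU work before timing
    start = time.perf_counter()
    for _ in range(iters):
        model_step(lr_frame)
    if lr_frame.device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

# Dummy stand-in step so the sketch runs anywhere: bicubic 4x upsampling.
step = lambda x: torch.nn.functional.interpolate(x, scale_factor=4.0, mode="bicubic", align_corners=False)
frame = torch.rand(1, 3, 180, 320)
print(f"~{per_frame_latency(step, frame) * 1000:.1f} ms per frame")
```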

Key reported results include:

  • Latency and Memory: Stream-DiffVSR achieves a substantial reduction in per-frame latency and peak memory usage relative to batch-mode diffusion models, facilitating operation within online processing requirements without sacrificing visual quality.
  • Perceptual Quality and Fidelity: When compared with recent state-of-the-art diffusion-based VSR systems, Stream-DiffVSR attains competitive or superior results on perceptual quality (as measured by LPIPS and subjective VQA metrics) while maintaining high PSNR and SSIM scores.
  • Temporal Consistency: The auto-regressive design mitigates temporal flickering by leveraging past frame reconstructions as conditional input, yielding improvements in temporal consistency metrics (see the metric sketch after this list).
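
For readers unfamiliar with these metrics, the sketch below computes per-frame PSNR and a crude flicker proxy on dummy data. It is illustrative only: the paper's evaluation also uses SSIM, LPIPS, and learned video-quality scores, none of which are reproduced here.

```python
import numpy as np

def psnr(ref, out, peak=1.0):
    """Peak Signal-to-Noise Ratio between two float arrays in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def flicker_proxy(frames):
    """Mean absolute change between consecutive frames; lower is steadier.
    A crude stand-in for the warped-frame consistency metrics used in VSR work."""
    return float(np.mean([np.mean(np.abs(frames[t + 1] - frames[t]))
                          for t in range(len(frames) - 1)]))

# Dummy data: 10 "ground-truth" 720p frames and slightly perturbed "outputs".
gt = [np.random.rand(720, 1280, 3) for _ in range(10)]
out = [np.clip(f + 0.01 * np.random.randn(*f.shape), 0.0, 1.0) for f in gt]
print(f"mean PSNR: {np.mean([psnr(g, o) for g, o in zip(gt, out)]):.2f} dB")
print(f"flicker proxy: {flicker_proxy(out):.4f}")
```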

The central claim is that Stream-DiffVSR is, to the best of the authors' knowledge, the first diffusion-based VSR model able to operate in a truly streamable manner with minimal quality compromise relative to full-video inference models.

Implications and Future Directions

By successfully adapting diffusion models to the auto-regressive, streamable regime, Stream-DiffVSR significantly widens the deployment envelope for diffusion-based VSR, bridging the gap between offline high-fidelity methods and the practical demands of low-latency applications such as live video enhancement and edge deployment.

The approach underscores that autoregressive diffusion is not only feasible but also effective for challenging structured prediction tasks over long temporal horizons. This has several practical implications:

  • Deployment on Streaming and Edge Platforms: The strict streaming compatibility of Stream-DiffVSR makes it suitable for integration into live broadcasting, AR/VR, and mobile video streaming pipelines.
  • Foundation for Efficient Diffusion Video Models: It opens avenues for distillation, compression, and hybrid strategies that exploit both strong generative priors and practical inference constraints.
  • Extension to Other Video Restoration Tasks: The methodology may generalize to frame interpolation, restoration, or inpainting tasks where online, causal processing is paramount.

From a theoretical perspective, the work invites further exploration into the expressivity of autoregressive conditioning within diffusion frameworks for structured sequence modeling. Moreover, it prompts follow-up research on enhancing error resilience across long autoregressive chains and exploring trade-offs between autoregressive order, sampling speed, and fidelity.

Conclusion

Stream-DiffVSR proposes an auto-regressive diffusion architecture and inference pipeline that together enable low-latency, streamable video super-resolution. Through architectural and methodological innovation, the model demonstrates that high perceptual quality and temporal consistency can be achieved in the real-time settings typical of modern video delivery. This work not only sets a new benchmark for streamable VSR in both practicality and fidelity but also establishes methodological foundations likely to influence future research in efficient generative video modeling, with direct relevance to both academic inquiry and commercial deployment.

Explain it Like I'm 14

What is this paper about?

This paper is a simple rulebook. It tells authors how to write a short, one-page “rebuttal” (a reply) to answer questions from reviewers after they’ve read the author’s original research paper. It explains what the rebuttal should include, what it should not include, and exactly how to format it so everyone follows the same fair rules.

What questions is it trying to answer?

In easy terms, the guide answers:

  • What is a rebuttal for, and what should I put in it?
  • How long can it be, and what are the formatting rules?
  • Can I add new experiments or results?
  • How do I include figures and references correctly?
  • How do I keep my identity hidden (stay anonymous)?

How does the guide approach this?

Think of this like instructions for a school assignment where the teacher says: “Use this template, this font, these margins, and only one page.” The guide uses LaTeX (a common tool scientists use to write papers) and gives a ready-made template to:

  • Keep everything in two columns (like a newspaper).
  • Use standard font sizes (10-point for main text).
  • Keep strict page limits (maximum one page, including any pictures and references).
  • Number equations and keep figures easy to read when printed.
  • Center all images and make sure text inside figures is big enough.
  • List references with numbers in square brackets (like [12]) and use a small font for them.

It also explains anonymity: don’t include links or details that reveal who you are, and don’t change the margins or formatting to cram in extra words.

What are the main takeaways and why do they matter?

Here are the big points the guide emphasizes:

  • Purpose of the rebuttal: It’s to correct factual mistakes, answer reviewers’ questions, or clarify parts of your original paper. It is not the place to add brand-new ideas, big new experiments, or extra sections unless reviewers specifically asked for them. This keeps the process fair and focused.
  • Length limit: One page total. If you go over, it won’t be reviewed. This forces authors to be clear and concise.
  • No new experiments (unless asked): Reviewers shouldn’t demand big new experiments for a rebuttal, and authors shouldn’t include them. This stops an “arms race” and keeps the review timeline realistic.
  • Strict formatting: Use the provided template, two columns, standard fonts, and proper margins. Don’t tweak the layout to sneak in more text. Consistent formatting makes responses easier to read and compare.
  • Figures and equations: You can include a small figure, graph, or a short proof to explain your point. Center graphics, make labels readable on paper, number equations, and use clear captions.
  • References: Use numbered references in a small font at the end. Cite them in the text with square brackets like [5].
  • Anonymity: Don’t include anything (like external links or identifying info) that reveals who you are. This protects a fair, blind review.
  • Avoid confusion with numbering: If you refer to Figure 1 in your rebuttal, make sure it’s not confused with Figure 1 in your main paper. Keep numbering separate or clearly marked.

These rules matter because they:

  • Make the process fair: Every author has the same space and style.
  • Save time: Reviewers can quickly find and read what they need.
  • Keep focus: The rebuttal clarifies the original work instead of turning into a new paper.
  • Protect anonymity: Reviews are unbiased when identities are hidden.

What’s the bigger impact?

By following these guidelines, authors give clear, short, and fair replies to reviewers. Reviewers can judge papers more accurately and efficiently. Overall, this improves the quality of peer review, reduces pressure to produce rushed extra experiments, and helps good research get a fair chance. In short, it makes the scientific review process more organized, fair, and effective.

Knowledge Gaps

Knowledge gaps, limitations, and open questions

Below is a concise list of what remains missing, ambiguous, or unexplored in the provided author response guidelines, framed to enable actionable follow-up by future organizers or researchers.

  • Lack of empirical justification for the one-page rebuttal limit (e.g., effects on review outcomes, fairness across papers with differing complexity).
  • Ambiguous definition of “significant additional experiments” and “new experimental results” (no concrete thresholds, examples, or edge-case guidance).
  • No guidance for handling reviewer requests that explicitly ask for new experiments or results within the rebuttal constraints (process, scope, format).
  • No clarity on how rebuttals are used by reviewers and area chairs (e.g., weighting, typical impact on final decisions, triaging when multiple reviewers have conflicting critiques).
  • Insufficient instructions on prioritizing content within one page (best practices, recommended structure beyond optional sections, example templates).
  • Anonymity rules are underspecified for links to resources (e.g., whether anonymized code/data repositories, DOIs, or persistent identifiers are allowed).
  • No policy for referencing or pointing to existing supplemental materials from the original submission while staying within rebuttal constraints.
  • Ambiguous enforcement for margin/compliance tampering (how violations are detected, tolerance ranges, automated checks, appeal process).
  • No accessible-compliance guidance (e.g., colorblind-safe palettes, minimum contrast ratios, alt-text for figures, screen-reader compatibility of PDFs).
  • No concrete guidance for figure quality standards (recommended DPI, minimum font sizes in figures for print readability, vector vs raster preferences).
  • No examples or prescribed scheme for avoiding numbering collisions with the main paper (e.g., “R-Fig 1,” “R-Table 1,” “R-Eq 1”).
  • The “workaround” referenced for numbering overlap is not described (missing instructions or code snippet).
  • Referencing rules are underspecified relative to the main paper’s bibliography (should items be re-cited and re-numbered, can authors reuse labels, how to handle overlap).
  • Unclear whether footnotes, line numbers, or specific LaTeX packages (e.g., minted, hyperref) are permitted and how they affect compliance/length.
  • No guidance for authors who do not use LaTeX (Word/Markdown templates, PDF generation instructions, exact margin specifications for non-LaTeX workflows).
  • Font guidance is narrow (Times/Times Roman) without alternatives for languages/scripts requiring different typefaces (and compatibility with math fonts).
  • No guidance on language/style (e.g., whether non-English rebuttals are allowed, expected tone, recommended clarity practices for non-native authors).
  • No instructions on file-size limits, font embedding, PDF/A compliance, or compatibility with submission systems (and how these affect readability and printing).
  • No recommendation for handling contradictory or multi-reviewer requests within a single page (prioritization strategies, synthesis templates).
  • No timeline/process guidance (deadlines, resubmission policies, visibility of rebuttal to meta-reviewers, whether authors can respond to post-rebuttal comments).
  • No explicit policy on including “proofs” in rebuttals in non-theoretical papers (scope, acceptable length, formatting for mathematical content).
  • The prohibition on external links to avoid bypassing length constraints is not operationalized (criteria for what counts as circumvention; acceptable citations to external datasets or prior work).
  • No guidance for referencing and comparing results from other papers without adding new experimental content (recommended formats, pitfalls to avoid).
  • No specific instructions for handling equations in rebuttals (numbering, referencing equations from the main paper, consistent math font usage in captions).
  • No instructions on image compression and color profiles (to ensure consistent print quality and avoid artifacts when fitting within one page).
  • No fair-use guidance for reusing figures/tables from the original submission or published works (permissions, citation, anonymization).
  • No clarity on what constitutes “significantly altered formatting” beyond margins (e.g., micro-adjustments to spacing, caption font scaling, column widths).
  • Mention of “Make sure to update the paper title and paper ID” lacks explicit directions (where in the template, required fields, validation steps).
  • No explicit criteria or examples that distinguish acceptable “illustrations” from disallowed “new experimental results” when adding figures to rebuttals.
  • No recommended rubrics or checklists for authors to self-audit rebuttal compliance (length, formatting, anonymity, accessibility, numbering).

Glossary

  • Adam: An adaptive optimization algorithm for training neural networks using estimates of first and second moments of gradients. "Adam: A method for stochastic optimization"
  • Adaptive separable convolution: A convolution technique using separable filters whose parameters adapt per input, often for video frame interpolation. "Video frame interpolation via adaptive separable convolution"
  • Auto-encoding variational bayes (VAE): A probabilistic generative modeling framework combining variational inference with autoencoders. "Auto-encoding variational bayes"
  • Bayesian hypernetworks: Hypernetworks that use Bayesian inference to model uncertainty over generated network weights. "Bayesian hypernetworks"
  • Blind super-resolution: Super-resolution where the degradation model is unknown and must be inferred from data. "Deep blind video super-resolution"
  • Burst denoising: Reducing noise by leveraging multiple rapidly captured images (a burst) of the same scene. "Burst denoising with kernel prediction networks"
  • Deformable convolutional networks: Convolutional layers augmented with learnable offsets for sampling flexible spatial locations. "Deformable convolutional networks"
  • Denoising diffusion implicit models (DDIM): A class of diffusion models enabling faster, deterministic sampling while preserving quality. "Denoising diffusion implicit models"
  • Edge-directed image interpolation: Interpolation methods that preserve edges by guiding reconstruction along estimated edge directions. "Edge-directed image interpolation"
  • Generative adversarial networks (GANs): Generative models trained by an adversarial game between a generator and a discriminator. "Generative adversarial networks"
  • Hypernetworks: Neural networks that generate the weights or parameters of another network. "Hypernetworks"
  • Image registration: Aligning images to a common coordinate system to improve resolution or combine information. "Improving resolution by image registration"
  • Includegraphics (LaTeX): LaTeX command used to include external graphics files into a document. "use \includegraphics"
  • Kernel estimation: Estimating blur or degradation kernels from data for restoration tasks. "Real-world super-resolution via kernel estimation and noise injection"
  • Laplacian pyramid networks: Multi-scale architectures using Laplacian pyramids for efficient high-frequency reconstruction. "Fast and accurate image super-resolution with deep laplacian pyramid networks"
  • Latent diffusion models: Diffusion-based generative models that operate in a compressed latent space for efficiency. "High-Resolution Image Synthesis With Latent Diffusion Models"
  • LoRA (Low-Rank Adaptation): A parameter-efficient fine-tuning method that adds trainable low-rank adapters to large models. "Pixel-level and Semantic-level Adjustable Super-resolution: A Dual-LoRA Approach"
  • Markov Random Field (MRF): A probabilistic graphical model representing spatial dependencies via undirected graphs. "Markov Random Field Model-Based Edge-Directed Image Interpolation"
  • Motion compensation: Accounting for inter-frame motion to improve video processing or reconstruction. "Real-time video super-resolution with spatio-temporal networks and motion compensation"
  • No-reference image quality assessment (NR-IQA): Assessing image quality without access to a reference image. "No-reference image quality assessment in the spatial domain"
  • Optical flow: Estimating per-pixel motion between consecutive frames in a video. "Optical flow estimation using a spatial pyramid network"
  • PAMI-TC: The IEEE Pattern Analysis and Machine Intelligence Technical Committee that sets policies for the field. "Per a passed 2018 PAMI-TC motion"
  • Perceptual losses: Loss functions based on feature-space distances that correlate better with human perception than pixel-wise losses. "Perceptual losses for real-time style transfer and super-resolution"
  • Pica: A typesetting unit (≈1/6 inch) used for layout measurements such as indentation. "All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm)."
  • PSNR (Peak Signal-to-Noise Ratio): A quantitative metric for reconstruction fidelity based on logarithmic ratio of signal to noise. "PSNR"
  • SGDR (Stochastic Gradient Descent with Warm Restarts): A learning rate schedule that periodically restarts to escape local minima. "Sgdr: Stochastic gradient descent with warm restarts"
  • Simulated annealing: A stochastic optimization technique inspired by the physical annealing process. "simulated annealing methods"
  • Structural similarity (SSIM): A perceptual image quality metric comparing luminance, contrast, and structure. "Image quality assessment: from error visibility to structural similarity"
  • Sub-pixel convolutional neural network: An upsampling method that rearranges features to produce higher-resolution outputs (pixel shuffle). "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network"
  • Squeeze-and-Excitation (SE) networks: Architectures that adaptively recalibrate channel-wise feature responses. "Squeeze-and-excitation networks"
  • Temporal motion propagation (TMP): Propagating motion information across time to improve video super-resolution. "Tmp: Temporal motion propagation for online video super-resolution"
  • Transformer (Vision Transformer): Attention-based neural network architecture adapted for visual tasks. "Swin transformer: Hierarchical vision transformer using shifted windows"
  • Variational degradations: Modeling diverse degradation processes via variational distributions in super-resolution. "Super-Resolution with Variational Degradations"
  • Video super-resolution (VSR): Enhancing the spatial resolution of video frames by exploiting temporal and spatial correlations. "Basicvsr++: Improving video super-resolution with enhanced propagation and alignment"
  • Weight decay regularization: A regularization technique adding an L2 penalty to weights to reduce overfitting. "Decoupled weight decay regularization"

Practical Applications

Immediate Applications

The following list summarizes concrete, deployable applications that can be implemented now based on the paper’s guidelines on author rebuttals (formatting, content constraints, anonymity, and figure/citation practices).

Each entry names an application and lists sectors, potential tools/products/workflows, and assumptions/dependencies.
  • Rebuttal Compliance Validator in Submission Portals
    • Sectors: software, academia, publishing
    • Tools/workflows: Automated checks in CMT/EasyChair/OpenReview for page length, margins, two-column layout, font sizes (10pt main, 9pt captions), column widths, spacing, equation numbering, and reference style (9pt, numbered); a minimal page-count and page-size check is sketched after this list.
    • Assumptions/dependencies: Access to PDFs and metadata at upload; reliable PDF parsing; conferences allow automated “block” or “flag” on noncompliant responses.
  • Anonymity and Identity-Leakage Scanner
    • Sectors: software, academia, policy
    • Tools/workflows: PDF metadata scrubber; link checker that flags external links revealing identity or bypassing length limits; linter for author names, affiliations, ORCID, acknowledgments; Overleaf plugin to sanitize metadata.
    • Assumptions/dependencies: Robust detection of identity-bearing content and URLs; author adoption of tools; conference policy enforcement.
  • Content Policy Guard (No New Contributions)
    • Sectors: software, academia, publishing
    • Tools/workflows: NLP/LLM-based classifier to detect inclusion of new theorems, algorithms, or experiments vs. allowed clarifications; flags content not explicitly requested by reviewers.
    • Assumptions/dependencies: Access to the submission and review text for context; acceptable false-positive rates; clear policy definitions for “new contributions.”
  • Equation and Cross-Reference Numbering Linter
    • Sectors: software, academia
    • Tools/workflows: LaTeX package/macro to enforce equation numbering; automatic prefixing (e.g., “R1, R2”) to avoid overlap with main paper numbering; build-time warnings for unnumbered displays and ambiguous refs.
    • Assumptions/dependencies: Authors use LaTeX; minimal friction to integrate into existing class files; cooperation from conference template maintainers.
  • Figure Readability and Print-Friendliness Checker
    • Sectors: software, academia, publishing
    • Tools/workflows: Analyzer that verifies font sizes in figures match body text; assesses line widths and resolvability at common print DPIs; enforces centering and width constraints (e.g., use of includegraphics with relative linewidth).
    • Assumptions/dependencies: Reliable figure extraction from PDFs; agreed thresholds for readability; support in Overleaf/CI pipelines.
  • Reference Style Conformance and Bib Hygiene
    • Sectors: software, academia
    • Tools/workflows: Style-driven BibTeX auditors enforcing 9pt fonts, numbering, and citation format (square brackets); deduplication of repeated entries; detection of broken BibTeX strings.
    • Assumptions/dependencies: Use of BibTeX/Biber; access to bibliography files; acceptance of standardized style files by conferences.
  • Reviewer Training Micro-modules (Policy Awareness)
    • Sectors: academia, policy, education
    • Tools/workflows: Short modules embedded in reviewer onboarding to reinforce the 2018 PAMI-TC motion (avoid requesting significant new experiments; avoid penalizing rebuttals for lack of new results).
    • Assumptions/dependencies: Conference compliance and buy-in; tracking completion; alignment with program chairs’ guidance.
  • Department/Group Rebuttal SOPs
    • Sectors: academia, education
    • Tools/workflows: Internal standard operating procedures and templates ensuring structural clarity, optional figures/tables for illustration, and strict adherence to length/layout constraints.
    • Assumptions/dependencies: Local adoption; access to common tooling (Overleaf, Git); culture of documentation and training.
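
As one illustration of the compliance-validator idea above, here is a minimal sketch that checks only page count and page dimensions. It assumes the pypdf library, a US-letter page target, and a hypothetical file name, and it does not attempt margin, font, or anonymity checks.

```python
from pypdf import PdfReader  # assumption: pypdf is installed

MAX_PAGES = 1
EXPECTED_SIZE_PT = (612.0, 792.0)  # US letter: 8.5 x 11 in = 612 x 792 pt
TOLERANCE_PT = 2.0                 # allow small rounding differences between PDF producers

def check_rebuttal(path):
    """Flag basic layout violations: too many pages or a non-letter page size."""
    reader = PdfReader(path)
    problems = []
    if len(reader.pages) > MAX_PAGES:
        problems.append(f"{len(reader.pages)} pages (limit is {MAX_PAGES})")
    for i, page in enumerate(reader.pages):
        w, h = float(page.mediabox.width), float(page.mediabox.height)
        if (abs(w - EXPECTED_SIZE_PT[0]) > TOLERANCE_PT
                or abs(h - EXPECTED_SIZE_PT[1]) > TOLERANCE_PT):
            problems.append(f"page {i + 1} is {w:.0f} x {h:.0f} pt, expected 612 x 792 pt")
    return problems

if __name__ == "__main__":
    for issue in check_rebuttal("rebuttal.pdf"):  # hypothetical file name
        print("FLAG:", issue)
```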

Long-Term Applications

These applications will benefit from additional research, scaling, and policy alignment before broad deployment.

  • Integrated Author Assistant for Rebuttal Crafting
    • Sectors: software, academia
    • Tools/workflows: LLM-based assistant that ingests reviews and original submissions, drafts structured, policy-compliant rebuttals focused on factual corrections and requested clarifications; proactively removes identity risks and formatting violations.
    • Assumptions/dependencies: High-quality review parsing; interpretability safeguards; alignment with conference rules; human-in-the-loop editing.
  • Cross-Platform Formatting Equivalence and Validators (LaTeX/Word)
    • Sectors: software, publishing
    • Tools/workflows: Unified validators that ensure equivalent compliance across LaTeX and Word (columns, margins, fonts, caption styles); standardized templates and CI-like checks across authoring platforms.
    • Assumptions/dependencies: Consistent rendering across engines; cooperation from template maintainers; reliable PDF normalization.
  • Policy Harmonization Across Conferences
    • Sectors: policy, academia, publishing
    • Tools/workflows: Shared, machine-readable policy schemas (length limits, anonymity rules, allowed content) enabling interoperable validators and clearer author expectations.
    • Assumptions/dependencies: Consensus-building across societies; stable policy versioning; governance for updates.
  • Reviewer Decision Support to Enforce Policy
    • Sectors: software, academia, policy
    • Tools/workflows: Systems that detect when reviews request significant new experiments or penalize authors for not including them in rebuttals; soft nudges or blocks to align reviewer behavior with policy.
    • Assumptions/dependencies: Access to review text; willingness to instrument reviewer workflows; careful design to respect reviewer autonomy.
  • Print Readability Predictors for Graphics
    • Sectors: software, publishing
    • Tools/workflows: ML models trained on large corpora to predict figure readability and resolution under common print settings; auto-suggestions for font/line adjustments and layout revisions.
    • Assumptions/dependencies: Labeled datasets for readability; generalization across figure types; integration with graphics tools.
  • Document-Compliant Workflows for Industry and Legal
    • Sectors: software, legal, corporate communications
    • Tools/workflows: Adaptation of strict academic formatting/checking pipelines to corporate and legal briefs (length, layout, anonymization, citation consistency), with audit trails for compliance.
    • Assumptions/dependencies: Domain-specific policy mapping; data privacy requirements; stakeholder acceptance.
  • Analytics on Compliance and Bias Reduction
    • Sectors: academia, policy
    • Tools/workflows: Longitudinal analysis of how anonymity enforcement and formatting compliance affect review outcomes and bias; dashboards for program chairs; evidence-driven policy refinement.
    • Assumptions/dependencies: Ethical data collection; IRB approvals where applicable; robust statistical methods; buy-in from conferences and reviewers.

Open Problems

We found no open problems mentioned in this paper.
