
Identifying Bottlenecks of NISQ-friendly HHL algorithms (2406.06288v2)

Published 10 Jun 2024 in quant-ph and cs.ET

Abstract: Quantum computing promises to enable the solution of large problem instances, e.g. large linear systems of equations with the HHL algorithm, once the hardware stack matures. For the foreseeable future, quantum computing will remain in the so-called NISQ era, in which algorithms need to account for the flaws of the hardware, such as noise. In this work, we perform an empirical study to test the scaling properties, and the directly related noise resilience, of the most resource-intensive component of the HHL algorithm, namely QPE and its NISQ adaptation, iterative QPE. We explore the effectiveness of noise mitigation techniques for these algorithms and investigate whether we can keep the gate count low by enforcing sparsity constraints on the input or by using circuit optimization techniques provided by the Qiskit package. Our results indicate that currently available noise mitigation techniques, such as the Qiskit readout and Mthree readout packages, are insufficient to enable the recovery of results even in the small instances tested here. Moreover, our results indicate that the scaling of these algorithms with increasing precision appears to be the most substantial obstacle. These insights allowed us to deduce an approximate bottleneck for algorithms that rely on a time evolution similar to the one in QPE. Such observations provide evidence of the weaknesses of such algorithms on NISQ devices and help us formulate meaningful future research directions.
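
The abstract's central object, QPE, and its circuit-optimization experiment can be illustrated with a short, self-contained sketch. This is not the authors' code: the counting-register size, the choice of U as a single-qubit phase gate, and the basis-gate set are illustrative assumptions; the paper studies QPE as a subroutine of HHL on larger inputs.

```python
# Minimal textbook QPE sketch in Qiskit (illustrative, not the paper's code):
# estimate the eigenphase of U = P(theta) acting on its eigenstate |1>.
from math import pi

from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT

n_count = 3             # counting (precision) qubits; resolution scales as 2**n_count
theta = 2 * pi * 5 / 8  # eigenphase to estimate (exactly representable in 3 bits)

qc = QuantumCircuit(n_count + 1, n_count)
qc.x(n_count)                 # prepare the eigenstate |1> of P(theta)
qc.h(range(n_count))          # uniform superposition on the counting register

# Controlled-U^(2^k) blocks; for U = P(theta) each power is itself a phase gate.
for k in range(n_count):
    qc.cp(theta * 2**k, k, n_count)

# Inverse QFT on the counting register, then read out the phase bits.
qc.compose(QFT(n_count, inverse=True), range(n_count), inplace=True)
qc.measure(range(n_count), range(n_count))

# Circuit optimization as discussed in the abstract: compare Qiskit's
# transpiler at different optimization levels (basis gates are illustrative).
for level in (0, 3):
    tqc = transpile(qc, basis_gates=["cx", "rz", "sx", "x"], optimization_level=level)
    print(f"optimization_level={level}: depth={tqc.depth()}, ops={tqc.count_ops()}")
```

Increasing n_count by one doubles the achievable precision but adds another controlled-U^(2^k) block, so the gate count and depth grow with the requested precision; this is the scaling-with-precision bottleneck the abstract highlights. For readout mitigation, the Mthree package mentioned in the abstract is typically used by calibrating a mitigator on the measured qubits and then applying the correction to raw counts (roughly, mthree.M3Mitigation(backend), cals_from_system, apply_correction); per the abstract, this class of techniques was insufficient to recover results even for small instances.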

Citations (1)

