Scalable Simulation of Fermionic Encoding Performance on Noisy Quantum Computers (2506.06425v1)

Published 6 Jun 2025 in quant-ph

Abstract: A compelling application of quantum computers with thousands of qubits is quantum simulation. Simulating fermionic systems is both a problem with clear real-world applications and a computationally challenging task. To simulate a system of fermions on a quantum computer, one must first map the fermionic Hamiltonian to a qubit Hamiltonian. The most popular such mapping is the Jordan-Wigner encoding, which suffers from inefficiencies caused by the high weight of some encoded operators. As a result, alternative local encodings have been proposed that solve this problem at the expense of a constant-factor increase in the number of qubits required. Some such encodings possess local stabilizers, i.e., Pauli operators that act as the logical identity on the encoded fermionic modes. A natural error mitigation approach in these cases is to measure the stabilizers and discard any run in which a measurement returns a -1 outcome. Using a high-performance stabilizer simulator, we classically simulate the performance of a local encoding known as the Derby-Klassen encoding and compare it with the Jordan-Wigner encoding and the ternary tree encoding. Our simulations use more complex error models and significantly larger system sizes (up to $18\times18$) than in previous work. We find that the high sampling requirements of postselection methods with the Derby-Klassen encoding limit its applicability on near-term devices and call for more encoding-specific circuit optimizations.
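
To make the "high weight" issue concrete: under the Jordan-Wigner encoding, a hopping term $a_i^\dagger a_j + \mathrm{h.c.}$ between fermionic modes $i$ and $j$ (with $i < j$) maps to the pair of Pauli strings $\tfrac{1}{2}(X_i Z_{i+1}\cdots Z_{j-1} X_j + Y_i Z_{i+1}\cdots Z_{j-1} Y_j)$, whose weight $j - i + 1$ grows linearly with the distance between the modes. The Python sketch below (not taken from the paper; the function name and example indices are purely illustrative) constructs these strings and prints their weights.

```python
# Minimal sketch of the Jordan-Wigner image of a hopping term
# a_i^dagger a_j + h.c. (i < j): the encoded operator is
# (1/2)(X_i Z...Z X_j + Y_i Z...Z Y_j), so its Pauli weight is j - i + 1.
# Illustrative only; not code from the paper.

def jordan_wigner_hopping(i: int, j: int, n_modes: int):
    """Return the two Pauli strings for the JW-encoded hopping term."""
    assert 0 <= i < j < n_modes
    strings = []
    for end_pauli in ("X", "Y"):
        pauli = ["I"] * n_modes
        pauli[i] = end_pauli            # endpoint operators on modes i and j
        pauli[j] = end_pauli
        for k in range(i + 1, j):       # Z-string from the Jordan-Wigner tail
            pauli[k] = "Z"
        strings.append("".join(pauli))
    return strings

if __name__ == "__main__":
    for term in jordan_wigner_hopping(1, 6, 8):
        weight = sum(p != "I" for p in term)
        print(term, "weight =", weight)  # weight = j - i + 1 = 6 here
```

For a 2D lattice whose modes are flattened into a 1D ordering, vertical hopping terms acquire Z-strings spanning an entire row; avoiding exactly this growth is what local encodings such as the Derby-Klassen encoding trade a constant-factor qubit overhead for.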
