Understanding the Mapping of Encode Data Through An Implementation of Quantum Topological Analysis (2209.10596v4)

Published 21 Sep 2022 in quant-ph

Abstract: A potential advantage of quantum machine learning stems from the ability to encode classical data into a high-dimensional complex Hilbert space using quantum circuits. Recent studies show that not all encoding methods are equivalent when representing classical data, since certain parameterized circuit structures are more expressive than others. In this study, we show that differences between encoding techniques can be visualized by investigating the topology of the data embedded in complex Hilbert space. The visualization technique is a hybrid quantum-based topological analysis that uses simple diagonalization of the boundary operators to compute the persistent Betti numbers and the persistent homology graph. To support the computation of Betti numbers within a NISQ framework, we suggest a simple hybrid algorithm. Through an illuminating example based on a synthetic data set and the methods of angle encoding, amplitude encoding, and IQP encoding, we reveal topological differences among the encoding methods, as well as with the original data. Consequently, our results suggest that the encoding method needs to be chosen carefully for different quantum machine learning models, since it can strongly affect downstream analysis such as clustering or classification.
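
The classical step the abstract describes, computing Betti numbers from boundary operators, can be illustrated with a small NumPy sketch. This is a hypothetical example, not the authors' implementation or their hybrid quantum subroutine: it simply applies the rank-nullity identity beta_k = dim ker(d_k) - rank(d_{k+1}) to a hollow triangle, the kind of quantity the paper's persistent-homology analysis tracks across filtration scales.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): Betti numbers of a small
# simplicial complex from its boundary operators, using the identity
#   beta_k = dim ker(d_k) - rank(d_{k+1}).
# Example complex: a hollow triangle with vertices {a, b, c} and edges
# {ab, bc, ca} but no filled 2-simplex, so beta_0 = 1 and beta_1 = 1.

# Boundary operator d_1: C_1 (edges) -> C_0 (vertices); columns = ab, bc, ca.
d1 = np.array([
    [-1,  0,  1],   # vertex a
    [ 1, -1,  0],   # vertex b
    [ 0,  1, -1],   # vertex c
], dtype=float)

# No 2-simplices, so d_2 is the zero map from an empty chain space.
d2 = np.zeros((3, 0))

def rank(m):
    """Rank of a boundary matrix, treating an empty matrix as rank 0."""
    return np.linalg.matrix_rank(m) if m.size else 0

def betti(n_k, d_k, d_k_plus_1):
    """beta_k = dim C_k - rank(d_k) - rank(d_{k+1})."""
    return n_k - rank(d_k) - rank(d_k_plus_1)

beta0 = betti(3, np.zeros((0, 3)), d1)  # d_0 is the zero map on vertices
beta1 = betti(3, d1, d2)
print(beta0, beta1)  # -> 1 1 : one connected component, one loop
```

Running this over a sequence of complexes built at increasing distance thresholds, and tracking how beta_0 and beta_1 change, gives the persistent Betti numbers that the paper compares across the angle-, amplitude-, and IQP-encoded embeddings.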

Citations (5)