
Generalized $n$-locality inequalities in linear-chain network for arbitrary inputs scenario and their quantum violations (2212.14326v1)

Published 29 Dec 2022 in quant-ph

Abstract: Multipartite nonlocality in a network is conceptually different from standard multipartite Bell nonlocality. In recent times, network nonlocality has been studied for various topologies. We consider a linear-chain topology of the network and demonstrate quantum nonlocality (non-$n$-locality). Such a network scenario involves $n$ independent sources and $n+1$ parties: two edge parties (Alice and Charlie) and $n-1$ central parties (Bobs). It is commonly assumed that each party receives only two inputs. In this work, we consider a generalized scenario where the edge parties receive an arbitrary number $n$ of inputs (equal to the number of independent sources), and each of the central parties receives two inputs. We derive a family of generalized $n$-locality inequalities for a linear-chain network for arbitrary $n$ and demonstrate the optimal quantum violation of these inequalities. We introduce an elegant sum-of-squares approach that enables the derivation of the optimal quantum violation of the aforesaid inequalities \emph{without} assuming the dimension of the system. We show that the optimal quantum violation requires the observables of the edge parties to be mutually anticommuting. For $n=2$ and $3$, the optimal quantum violation can be obtained when each edge party shares a two-qubit entangled state with the central parties. We further argue that for $n\geq 4$, a single copy of a two-qubit entangled state may not be enough to exhibit the violation of the $n$-locality inequality, but multiple copies of it can activate the quantum violation.
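
For context (this illustration is not taken from the paper): for $n=2$ the linear-chain scenario reduces to the standard bilocality setup, whose well-known inequality reads $\sqrt{|I|}+\sqrt{|J|}\leq 1$ with $I=\frac{1}{4}\sum_{x,z}\langle A_x B_0 C_z\rangle$ and $J=\frac{1}{4}\sum_{x,z}(-1)^{x+z}\langle A_x B_1 C_z\rangle$, and whose maximal quantum value is $\sqrt{2}$. A sum-of-squares certificate of such a bound amounts to exhibiting operators $M_i$ and a constant $\omega$ with $\omega\mathbb{1}-\mathcal{B}=\sum_i M_i^\dagger M_i\succeq 0$ for a suitable Bell operator $\mathcal{B}$, which upper-bounds $\langle\mathcal{B}\rangle$ by $\omega$ in any dimension. The sketch below numerically reproduces the optimal value $\sqrt{2}$ of the bilocality expression using two singlet states and anticommuting edge observables; the specific operator choices ($A_x=(X\pm Z)/\sqrt{2}$, $B_y\in\{X\otimes X, Z\otimes Z\}$) are one standard realization assumed here, not quoted from the paper.

```python
# Illustrative sanity check (assumed realization, not code from the paper):
# verifies the optimal quantum value sqrt(2) of the n = 2 (bilocality) expression
# sqrt(|I|) + sqrt(|J|) using two singlets and anticommuting edge observables.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet |psi^-> = (|01> - |10>)/sqrt(2) as a density matrix
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(psi, psi.conj())

# Qubit order: Alice | Bob's first qubit | Bob's second qubit | Charlie.
# Source 1 links Alice-Bob, source 2 links Bob-Charlie (independent sources).
rho = np.kron(singlet, singlet)

# Edge observables: anticommuting pair (X +/- Z)/sqrt(2) for Alice and Charlie
A = [(X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)]
C = [(X + Z) / np.sqrt(2), (X - Z) / np.sqrt(2)]
# Central party (Bob) measures dichotomic observables on his two qubits
B = [np.kron(X, X), np.kron(Z, Z)]

def corr(Ax, By, Cz):
    """Tripartite correlator <A_x B_y C_z> on the two-singlet state."""
    obs = np.kron(np.kron(Ax, By), Cz)
    return np.real(np.trace(rho @ obs))

I_val = 0.25 * sum(corr(A[x], B[0], C[z]) for x in range(2) for z in range(2))
J_val = 0.25 * sum((-1) ** (x + z) * corr(A[x], B[1], C[z])
                   for x in range(2) for z in range(2))

value = np.sqrt(abs(I_val)) + np.sqrt(abs(J_val))
print(f"I = {I_val:.4f}, J = {J_val:.4f}")
print(f"sqrt|I| + sqrt|J| = {value:.4f} (bilocal bound 1, quantum optimum ~1.4142)")
```

With these choices the script gives $I=J=1/2$, so $\sqrt{|I|}+\sqrt{|J|}=\sqrt{2}\approx 1.414$, above the bilocal bound of 1, consistent with the abstract's claim that two-qubit entangled states and anticommuting edge observables suffice for small $n$.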

Citations (4)
