Generalizing parking functions with randomness (2111.12850v1)

Published 25 Nov 2021 in math.CO and math.PR

Abstract: Consider $n$ cars $C_1, C_2, \ldots, C_n$ that want to park in a parking lot with parking spaces $1, 2, \ldots, n$ appearing in order. Each car $C_i$ has a parking preference $\alpha_i \in \{1, 2, \ldots, n\}$. The cars arrive in order; if a car's preferred spot is empty, it parks there, and if the spot is taken, it moves forward until it finds an empty spot. If it does not find an empty spot, it does not park. An $n$-tuple $(\alpha_1, \alpha_2, \ldots, \alpha_n)$ is said to be a parking function if this list of preferences allows every car to park under this algorithm. For an integer $k$, we say that an $n$-tuple is a $k$-Naples parking function if the cars can park under the modified algorithm in which car $C_i$ backs up $k$ spaces (one by one) when its spot is taken, before trying to find a parking spot in front of it. We introduce randomness to this problem in two ways: 1) for the original parking function definition, for each car $C_i$ whose preference is taken, we decide with probability $p$ whether $C_i$ moves forward or backward when its preferred spot is taken; 2) for the $k$-Naples definition, for each car $C_i$ whose preference is taken, we decide with probability $p$ whether $C_i$ backs up $k$ spaces or not before moving forward. In each of these models, for an $n$-tuple $\alpha \in \{1, 2, \ldots, n\}^n$, there is now a probability that all cars park. For each of these random models, we find a formula for the expected value. Furthermore, for the second random model, in the case $k = 1$, $p = 1/2$, we prove that for any $1 \le t \le 2^{n-2}$, there is exactly one $\alpha \in \{1, 2, \ldots, n\}^n$ such that the probability that $\alpha$ parks is $(2t-1)/2^{n-1}$.
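
The parking rules described in the abstract are straightforward to simulate. Below is a minimal Python sketch, not taken from the paper: the helper names (`parks_classical`, `parks_random_naples_trial`, `estimate_parking_probability`) and the example tuples are illustrative. It assumes that in the $k$-Naples back-up step a car checks spots $\alpha_i-1, \alpha_i-2, \ldots$ in order and takes the first empty one, and that the coin flips of different cars are independent.

```python
import random

def parks_classical(prefs):
    """Return True iff prefs (1-indexed preferences) is a classical parking function."""
    n = len(prefs)
    occupied = [False] * (n + 1)   # spots 1..n; index 0 unused
    for a in prefs:
        spot = a
        while spot <= n and occupied[spot]:
            spot += 1              # move forward past taken spots
        if spot > n:
            return False           # car leaves without parking
        occupied[spot] = True
    return True

def parks_random_naples_trial(prefs, k=1, p=0.5, rng=random):
    """One random trial of the probabilistic k-Naples rule (second model):
    if a car's preferred spot is taken, with probability p it first backs up
    at most k spots (one by one, taking the first empty spot it finds);
    otherwise, or if backing up fails, it searches forward as usual."""
    n = len(prefs)
    occupied = [False] * (n + 1)
    for a in prefs:
        spot = None
        if not occupied[a]:
            spot = a
        else:
            if rng.random() < p:   # coin flip: attempt the back-up step
                for b in range(a - 1, max(a - k, 1) - 1, -1):
                    if not occupied[b]:
                        spot = b
                        break
            if spot is None:       # back-up skipped or unsuccessful
                s = a + 1
                while s <= n and occupied[s]:
                    s += 1
                spot = s if s <= n else None
        if spot is None:
            return False
        occupied[spot] = True
    return True

def estimate_parking_probability(prefs, k=1, p=0.5, trials=20000):
    """Monte Carlo estimate of P(all cars park) under the random k-Naples model."""
    hits = sum(parks_random_naples_trial(prefs, k, p) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    print(parks_classical((2, 2, 1, 4)))   # True: a classical parking function
    # For (2, 2, 3, 4) with k=1, p=1/2, car 2's single coin flip decides the
    # outcome, so the estimate should be close to 0.5.
    print(estimate_parking_probability((2, 2, 3, 4), k=1, p=0.5))
```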
