Conditioned real self-similar Markov processes (1510.01781v1)

Published 6 Oct 2015 in math.PR

Abstract: In recent work, Chaumont et al. [9] showed that it is possible to condition a stable process with index ${\alpha} \in (1,2)$ to avoid the origin. Specifically, they describe a new Markov process which is the Doob h-transform of a stable process and which arises from a limiting procedure in which the stable process is conditioned to have avoided the origin at later and later times. A stable process is a particular example of a real self-similar Markov process (rssMp), and we develop such conditionings further for the class of rssMps. Under appropriate conditions, we show that conditioning to avoid the origin corresponds to a classical Cramér-Esscher-type transform applied to the Markov Additive Process (MAP) that underlies the Lamperti-Kiu representation of an rssMp. In the same spirit, we show that the notion of conditioning an rssMp to be continuously absorbed at the origin fits the same mathematical framework. In particular, we characterise the stable process conditioned to be continuously absorbed at the origin when ${\alpha} \in (0,1)$. Our results complement related work on positive self-similar Markov processes in [10].
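For orientation, the two changes of measure mentioned in the abstract can be sketched in generic form. The notation below (the harmonic function $h$, the hitting time $T_0$, and the leading eigenvalue $\chi$ and right eigenvector $v$ of the MAP's matrix exponent) is standard but is assumed here for illustration; the paper's precise objects may differ.

$$\left.\frac{\mathrm{d}\mathbb{P}^{h}_{x}}{\mathrm{d}\mathbb{P}_{x}}\right|_{\mathcal{F}_t} = \frac{h(X_t)}{h(x)}\,\mathbf{1}_{\{t < T_0\}}, \qquad \left.\frac{\mathrm{d}\mathbb{P}^{\gamma}}{\mathrm{d}\mathbb{P}}\right|_{\mathcal{G}_t} = e^{\gamma \xi_t - \chi(\gamma) t}\,\frac{v_{J_t}(\gamma)}{v_{J_0}(\gamma)}.$$

The first display is a generic Doob h-transform: $h > 0$ is invariant (harmonic) for the rssMp $X$ killed at its first hitting time $T_0$ of the origin, so the right-hand side is a unit-mean martingale and defines the conditioned law. The second is a Cramér-Esscher-type change of measure for a MAP $(\xi, J)$: when the matrix exponent has leading eigenvalue $\chi(\gamma)$ with right eigenvector $v(\gamma)$, the density $e^{\gamma \xi_t - \chi(\gamma) t}\, v_{J_t}(\gamma)/v_{J_0}(\gamma)$ is again a unit-mean martingale and exponentially tilts the law of the MAP.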
