# Limit Synchronization in Markov Decision Processes

Markov decision processes (MDPs) are finite-state systems with both strategic and probabilistic choices. After fixing a strategy, an MDP produces a sequence of probability distributions over states. The sequence is eventually synchronizing if the probability mass accumulates in a single state, possibly in the limit. Precisely, for 0 ≤ p ≤ 1, the sequence is p-synchronizing if some probability distribution in the sequence assigns probability at least p to some state. We distinguish three synchronization modes:

(i) *sure* winning, if there exists a strategy that produces a 1-synchronizing sequence;
(ii) *almost-sure* winning, if there exists a strategy that produces a sequence that is (1-ε)-synchronizing for all ε > 0;
(iii) *limit-sure* winning, if for all ε > 0 there exists a strategy that produces a (1-ε)-synchronizing sequence.

Note that the modes differ in the order of quantifiers: in almost-sure winning a single strategy works for every ε, whereas in limit-sure winning the strategy may depend on ε. In this talk, we consider the problem of deciding whether an MDP is sure, almost-sure, or limit-sure winning, and we present results on the computational complexity of these decision problems and on the memory requirements of optimal winning strategies.
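The definitions above can be illustrated concretely. As a hedged sketch (an assumed toy example, not taken from the talk): once a memoryless strategy is fixed, an MDP induces a Markov chain given by a row-stochastic matrix P, and the sequence of distributions evolves as d_{n+1} = d_n · P. The helper `max_sync_mass` below (a hypothetical name) computes the largest probability that any distribution in a finite prefix of the sequence assigns to a single state, i.e., the best p for which the prefix witnesses p-synchronization.

```python
import numpy as np

def max_sync_mass(P, d0, steps):
    """Largest probability assigned to a single state by any distribution
    among the first `steps` elements of the sequence d0, d0@P, d0@P^2, ...
    The sequence is p-synchronizing iff this value reaches p (possibly
    only in the limit as steps grows)."""
    d = np.asarray(d0, dtype=float)
    best = d.max()
    for _ in range(steps):
        d = d @ P          # one step of the induced Markov chain
        best = max(best, d.max())
    return best

# Toy chain: state 1 is absorbing and state 0 leaks half its mass to it
# each step, so d_n = (0.5^(n+1), 1 - 0.5^(n+1)) from the uniform start.
# The sequence is (1-ε)-synchronizing for every ε > 0 (almost-sure
# winning) but never exactly 1-synchronizing (not sure winning).
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])
d0 = np.array([0.5, 0.5])
best = max_sync_mass(P, d0, steps=20)
print(best)  # approaches 1 as steps grows, but stays strictly below 1
```

The example makes the quantifier distinction tangible: the mass in the absorbing state converges to 1 in the limit, so every threshold 1-ε is eventually exceeded, yet no finite distribution in the sequence puts probability exactly 1 on a single state.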