# “The World is Markov”

## How an impossible assumption revolutionized robotics.

Robotic capability has been growing by leaps and bounds in the 21st century, redefining expectations of what is possible as concepts from science fiction rapidly become reality. This rapid progress is due in large part to the adoption of probabilistic techniques that model the state of a robot and its environment together with the uncertainty in that state, an approach referred to as *Probabilistic Robotics,* explored in detail in the seminal 2005 text by Sebastian Thrun, Wolfram Burgard, and Dieter Fox. By explicitly modeling the uncertainty inherent in robotic perception (due to sensor noise) and action (due to process noise), the robot maintains a far more informed belief over its state, and its control choices become more robust: they can account for how uncertain the current state estimate is and actively seek to reduce that uncertainty. The probabilistic approach has expanded the operational domain of robots from highly controlled environments out into the real world, with all of its innumerable sources of uncertainty. However, even in light of the explosive growth of computational capability in recent decades, one severe assumption must be made to keep the modeling of this uncertainty computationally tractable: the *Markov assumption*.

As Thrun, Burgard, and Fox acknowledge, all of the algorithms in their book are based on the *Bayes filter*, a general framework for recursively producing a posterior belief over a robot’s state given the prior belief, a control input, and sensor measurements. Put simply, a prediction about the new state is made using the previous state belief and the control input, and this prediction is then corrected using the incoming sensor measurements, weighted by how likely those measurements would be if the prediction were true. A superb explanation and derivation of the Bayes filter is given in a YouTube video by Cyrill Stachniss. As Prof. Stachniss explains in the video, the Markov assumption is made not just once but *twice* in the Bayes filter algorithm: once in the motion model, and again in the measurement model. But just what is this assumption, and why is it a big deal?
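The predict-correct cycle is easiest to see in the simplest concrete instance of the Bayes filter: a discrete histogram filter. The sketch below is illustrative only; the one-dimensional grid world, the door map, and the noise probabilities are all assumptions made up for this example, not values from the text.

```python
# A minimal discrete Bayes filter (histogram filter) over a 1-D grid world.
# WORLD, move_prob, and hit_prob are illustrative assumptions for this sketch.

WORLD = [1, 0, 0, 1, 0]  # 1 = a door at this cell, 0 = blank wall

def predict(belief, move_prob=0.8):
    """Motion model: the robot tries to move one cell right, but with
    probability 1 - move_prob it stays put (process noise)."""
    n = len(belief)
    new_belief = [0.0] * n
    for i, p in enumerate(belief):
        new_belief[(i + 1) % n] += move_prob * p   # intended move succeeds
        new_belief[i] += (1 - move_prob) * p       # move fails, robot stays
    return new_belief

def update(belief, measurement, hit_prob=0.9):
    """Measurement model: weight each cell by the likelihood of the observed
    reading (door / no door) at that cell, then renormalize."""
    weighted = [
        p * (hit_prob if WORLD[i] == measurement else 1 - hit_prob)
        for i, p in enumerate(belief)
    ]
    total = sum(weighted)
    return [w / total for w in weighted]

# One full predict-correct cycle, starting from a uniform prior.
belief = [0.2] * 5
belief = predict(belief)     # prediction from prior belief + control input
belief = update(belief, 1)   # correction from a "door" sensor reading
print([round(b, 3) for b in belief])
```

After one cycle, the belief concentrates on the two door cells: the prediction alone cannot distinguish positions, but the measurement correction can.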

As stated on pg. 33 of Thrun et al., “The Markov assumption postulates that past and future data are independent if one knows the current state, *xₜ.*” To put it another way, we assume that the current state of the robot and its environment summarizes everything relevant from all previous states, which is why it is also sometimes called the *complete state assumption*. As this latter name implies, we are assuming that the state *xₜ* is *complete*, meaning that “knowledge of past states, measurements, or controls carry no additional information that would help us predict the future more accurately” (Thrun et al. 22). Here we begin to see the rub: exactly how much information would we need to track in each frame to completely capture the history of a robot and its environment up to the present? The answer is unbounded.
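In the notation of Thrun et al., the assumption appears as two conditional-independence statements, which are precisely the two places Prof. Stachniss points out in the Bayes filter derivation:

```latex
% Motion model: the full history of states, measurements, and controls
% collapses to just the previous state and the latest control
p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)

% Measurement model: the current measurement depends only on the current state
p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)
```

Everything to the left of each equals sign that gets discarded on the right is exactly the history the complete state *xₜ* is assumed to already contain.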

Let’s imagine a scenario modern robots increasingly find themselves in: driving down the highway. As a human driver, you may spot an aggressive driver approaching from far behind, making abrupt and unsignaled lane changes in between tailgating other cars. You will automatically begin to track this actor in a special way, anticipating the future based on past behavior: when this driver gets close to you on the road, you’ll anticipate a higher likelihood that they cut you off to take your lane, putting you at risk. If we imagine trying to build this into a complete state representation, we would need to model and track a separate behavioral profile for every actor on the road, and the complexity overhead becomes clear. Further, what if this driver appears to chill out? Perhaps their boss texted them saying not to worry about running late, their wife successfully delivered the baby in the car, or they’ve seen a police car behind them that is still invisible to you. How could we possibly know any of this using sensors? As Thrun, Burgard, and Fox admit: “the notion of state completeness is mostly of theoretical importance. In practice, it is impossible to specify a complete state for any realistic robot system” (22).

So there you have it: we knowingly make an impossible assumption for the sake of mathematical convenience in our algorithms. Yet this has not stopped probabilistic robotics from delivering revolutionary results across virtually every robotics domain. To the contrary! Kalman filters, their Extended and Unscented variants, and the nonparametric histogram and particle filters are all built on the Bayes filter algorithm, and therefore on the Markov assumption. These tools are applied pervasively, and with great success, throughout modern robotics; they have become industry-standard practice wherever sensor measurements are used to update state estimates. In recognizing this success, though, it is important to remain cognizant of our underlying assumptions, which in the case of the Markov assumption, as we have seen, are substantial (impossible to satisfy for any realistic system). Awareness of our assumptions tells us where the potential weaknesses of our approaches lie, and provides clues to where future improvements may be focused.
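The simplest of those descendants, a one-dimensional Kalman filter, makes the role of the assumption concrete: each cycle consumes only the previous belief (a mean and a variance), the latest control, and the latest measurement. The entire past is compressed into those two numbers. Below is a minimal sketch; the noise variances, the constant +1.0 motion command, and the measurement sequence are all illustrative assumptions.

```python
# A minimal 1-D Kalman filter sketch. All numeric values (process and
# sensor variances, controls, readings) are illustrative assumptions.

def kalman_step(mean, var, control, z, process_var=1.0, sensor_var=2.0):
    """One predict-correct cycle of a 1-D Kalman filter."""
    # Prediction (motion model): shift the belief by the control input
    # and inflate its variance with process noise. Note the Markov
    # assumption at work: only the previous belief is consulted.
    mean = mean + control
    var = var + process_var

    # Correction (measurement model): blend in the reading z, weighted
    # by the Kalman gain K (how much to trust the sensor vs. the prediction).
    K = var / (var + sensor_var)
    mean = mean + K * (z - mean)
    var = (1 - K) * var
    return mean, var

# Track a robot commanded to advance +1.0 per step along a line.
mean, var = 0.0, 1000.0            # start nearly ignorant of position
for z in [1.2, 1.9, 3.1]:          # noisy position readings
    mean, var = kalman_step(mean, var, control=1.0, z=z)
print(round(mean, 2), round(var, 2))
```

After three cycles the variance has collapsed from 1000 to roughly 1, and the mean sits near the true position of 3: the filter never looks back further than one step, yet its belief steadily sharpens.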