One tool in the control/dynamics engineer’s toolbox is the Kalman filter. It’s one of those big intellectual hammers that makes many problems look like nails.
Put simply, a Kalman filter combines noisy external measurements with an internal simulation of the system in question in order to estimate the system’s true state. Depending on how much you trust the measurements (the covariance of those measurements), the filter will weigh the internal model and the external measurements differently.
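The weighting idea above can be sketched in a few lines. This is a minimal one-dimensional filter for a static state, not any particular production implementation; the function name, the measurement values, and the noise numbers are all illustrative assumptions.

```python
# Minimal 1-D Kalman filter sketch: estimate a constant true value from
# noisy measurements. Names and numbers are illustrative, not from the post.

def kalman_step(x_est, p_est, z, r, q=0.0):
    """One predict/update cycle for a static 1-D state.

    x_est, p_est: prior state estimate and its variance
    z, r:         new measurement and its variance (how noisy the sensor is)
    q:            process noise (how much we distrust the internal model)
    """
    # Predict: the internal model says the state stays the same,
    # but our uncertainty grows by the process noise.
    p_pred = p_est + q
    # Update: the Kalman gain weighs model vs. measurement by their variances.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Noisy readings of a true value around 10.0
measurements = [10.3, 9.8, 10.1, 9.9, 10.2]
x, p = 0.0, 1000.0  # start with a very uncertain initial guess
for z in measurements:
    x, p = kalman_step(x, p, z, r=0.5)
# x converges toward 10 while p (our uncertainty) shrinks
```

Note how the gain `k` does the weighing: when the measurement variance `r` is large relative to the model's variance, `k` is small and the filter leans on its internal model; when the model is uncertain, `k` approaches 1 and the measurement dominates.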
As with a lot of these big concepts, it’s a fun thought experiment to draw an analogy from the Kalman filter to the human brain. In many different domains, we are always running an internal model of the world, and comparing our own measurements to that model. We use a pseudo-Kalman filter when we’re walking – you hold a model of the ground in your head and for the most part assume that the ground at your next step is roughly the same as the ground on your last step. We also use a pseudo-Kalman filter when we’re learning or thinking – you compare new information to the model of the world in your head. Depending on certainty in your model and trust in the source of information, you give relative weight to both and compose the two together to estimate the true state of things.
This filtering can be powerful:
Say you see a bunch of dragons out of the corner of your eye. That’s your sensor data. Your internal model of the world says the world doesn’t have dragons (alas!). For most people, both the covariance of our peripheral vision and our trust in our mental model are high enough to conclude that the ‘truth’ involves a flock of geese rather than a flight of dragons. (Fun fact: a group of dragons can also be referred to as a wing, flight, or weyr.)
However, the filters can also fail: when your model is worse than you think (so you trust it too much), you can slip on a patch of ice by stepping as if it were firm ground, or reject a new source of information simply because it doesn’t square with your internal picture of the world.
The trick, both in more advanced filters and in your brain, is to correctly update both your internal model and your sensor covariance based on new information. Like many worthwhile things, that’s simple to describe but surprisingly hard to implement.
The director of JPL, Charles Elachi, gave a great talk at Cornell today. I was impressed by his use of intuitively and emotionally resonant examples like pointing out how long it would take to drive to Mars to illustrate the distance from here to there.
What really got me thinking was when he showed a video of huge groups of people (in addition to the mission control staff) reacting jubilantly to Curiosity landing safely after the ‘7 minutes of terror.’ Some of them were so happy, they were practically crying. I felt those intense emotions too, both during the actual event and while watching the video.
But I also felt a twinge of annoyance. It confused me at first, but then I realized why: I’m annoyed that space exploration is basically as much a risky, extraordinary achievement as it was when Neil Armstrong and Buzz Aldrin landed on the moon almost half a century ago. It’s weird, but sometimes it feels like an endeavor is only truly successful when ever-adaptable humans don’t even think twice about it anymore.
There has been some normalization of space – nobody cares when another GPS satellite is launched, or about the crew aboard the ISS. However, I feel like that normalization has stalled. Unlike with, say, tablets or LEDs, I can’t point to anything in space that has been normalized recently, nor anything that is a big deal now but will be normalized soon.
It’s funny to want this schizophrenia: get more excited! Ok – now stop! But I think that’s a sign of real progress, filtered through the lens of human behavior.
I’ve been working on the Quirk-E project, in particular, tuning parameters so that it produces numerical results that resemble reality. [link to blog] I plan to share it on Github soon, but until then, I wanted to bring up some important points about models. I may have made these points before, but I see so few people who actually think about them that I’m going to risk repeating myself.
The rise of cheap computing has allowed numerical models to explode into basically every domain, and some wonderful discoveries have come of it. But it’s important to remember both that models are only models, not reality, and how much human discretion goes into building them.
Every model is influenced by the discretion of the modeler, from something as simple as fitting a line to data points to running a genetic algorithm. Even though the genetic algorithm nominally ‘figures it out for itself,’ the human still sets a number of parameters that heavily influence the outcome. And it’s so tempting to tweak parameters without justification except that they give you the answer you want.
I’m trying to make sure I don’t fall into this trap by meticulously noting whenever I change a number and why I did it. Even ignoring any malicious intent, it’s so easy to get into a flow state of ‘if I just tweak one more number, it will work!’ The proliferation of models and the inevitable temptation to tweak increase the need for something like automatic documentation.
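One way the ‘automatic documentation’ idea could look in practice is a tiny registry that refuses to change a parameter without a written reason. This is purely a sketch of mine, not anything from the Quirk-E code; the `ParamLog` class and the parameter names are hypothetical.

```python
# Hypothetical sketch of automatic documentation for model parameters:
# every change must carry a reason, and all changes are kept in a history.

import datetime

class ParamLog:
    def __init__(self):
        self.values = {}
        self.history = []  # (timestamp, name, old_value, new_value, reason)

    def set(self, name, value, reason):
        """Refuse silent tweaks: every change needs a justification."""
        if not reason:
            raise ValueError(f"No reason given for changing {name!r}")
        old = self.values.get(name)
        self.values[name] = value
        self.history.append(
            (datetime.datetime.now(), name, old, value, reason)
        )

params = ParamLog()
params.set("mutation_rate", 0.05, "starting value from literature")
params.set("mutation_rate", 0.08, "population converged too early at 0.05")
# params.history now holds two entries, each recording what changed and why
```

Even something this small changes the flow-state dynamic: the friction of writing a one-line reason is exactly the pause that separates a justified adjustment from answer-shopping.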
What you should take away is that any time ‘a computer told us so,’ a human directed it in how to do so. It’s often forgotten that scientific models are just that: models. Nobody thinks a model castle captures all of reality; we realize instead that it’s a useful representation, whether for discovery or illustration. We should remember to treat scientific models the same way.
Computer models are like the yin to human intuition’s yang. Models bolster our intuition’s weaknesses and can teach us a lot. But it’s important to remember that, like yin, a model can’t stand on its own, and every model has a little dot of human choice in it.
“It’s all about the people.” This little but significant truth turns up in many different domains, and Ditch Day is no exception. It’s an oft-forgotten secret of Ditch Day that, as much effort as the seniors put into a stack, it’s the group dynamics that really determine the level of awesome at the end of the day.
Sure, a well-made stack certainly smooths the process of fun-making, but a great group of people could have a good time locked in a room with nothing but a pile of sticks. And on the flipside, if the stackees are naturally acrimonious or just don’t like each other then the best stack in the history of Tech can’t overcome those dynamics.
The best you can do as a senior is advertise your stack as accurately as possible and hope that the underclassmen drawn to your sign-up sheet are all excited by the same things as you, and thus excited by the same things as each other, making them likely to work well together.
Thus, the period just before 8am is filled with excitement for underclassmen, chaos for alumni, and unknowing nervousness for seniors. ‘Will my stack be full of excitement and enthusiasm, or judgment demanding to be satisfied?’ The nail-biting is amplified by the fact that by this time the seniors have evacuated the courtyard and campus, lest they be duct-taped to a tree (the traditional punishment for any senior caught on campus during Ditch Day).
We wouldn’t know who was on our stack until they came through the doors of the Art House, The Prancing Pony, for breakfast!