When software controls your cooling tactics
Bill Schweber - January 28, 2013
Today's power-driven devices - whether a portable smartphone or hybrid electric vehicle - rely on sophisticated power and cooling management, no news there. While the manifestation of the challenge takes on different guises, the end objectives are the same: using the available power carefully, while keeping things cool. Those "things" can be circuitry, components, engines, motors, batteries, and more, of course.
A large part of this is now accomplished via sophisticated algorithms embedded in the system firmware. These algorithms manage power consumption and use, adjust operating parameters, check usage and cooling, and do whatever they have to do to make sure the system is operating optimally.
Of course, "optimum" means different things under different circumstances. It didn't get much attention, but a recent recall of 2013 Ford Fusion sedans and Escape crossovers with the 1.6-liter EcoBoost engine was a good example of both the promise and pitfalls of such sophisticated engineering management; see this surprisingly well-written story from USA Today, "Ford blames coolant system for Escape, Fusion fires."
In short, when everything was working properly, the cooling-control software did its job. But if a certain combination of faults developed in the hardware (and we really do mean "hardware" here, BTW), then the software was unprepared for this specialized circumstance and could not cope. Result: possible engine fire.
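The failure mode described above — a controller with no planned response for a fault combination its designers never enumerated — can be sketched as a toy fault-response table. Everything here (the fault names, the responses, the structure) is invented purely for illustration and has nothing to do with Ford's actual design; the point is the defensive fallback for unlisted combinations.

```python
# Toy sketch of a cooling controller's fault-response table.
# All fault names and responses are hypothetical, for illustration only.

FAULT_RESPONSES = {
    frozenset(): "normal_operation",
    frozenset({"coolant_temp_high"}): "boost_fan_speed",
    frozenset({"pump_degraded"}): "reduce_engine_power",
    frozenset({"coolant_temp_high", "pump_degraded"}): "limp_home_mode",
}

def respond(active_faults):
    """Return the planned response for the current set of active faults.

    The key defensive choice: any combination the designers did NOT
    enumerate falls through to the most conservative action, rather
    than being silently ignored.
    """
    return FAULT_RESPONSES.get(frozenset(active_faults), "shutdown_and_warn")
```

With this structure, an anticipated single fault gets its tuned response (`respond({"coolant_temp_high"})` returns `"boost_fan_speed"`), while an unanticipated pairing such as `{"coolant_sensor_stuck", "pump_degraded"}` drops to the fail-safe default instead of leaving the software with nothing to do.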
I'm not faulting the designers of this system, not at all. They have done, IMO, a remarkably good job balancing competing requirements, operational circumstances, and foreseeable single faults, as well as multiple faults in sequence or combination, and more. But the problem is that despite best efforts, it's not possible to foresee everything that can go wrong, especially when it comes to multiple or sequential fault events.
Yes, tests were done, fault trees were outlined, and lots of thinking went into the cooling software and algorithms, no doubt. But reality being what it is, not every circumstance can be anticipated. Even if you take tens or hundreds of units and run accelerated life tests on them for the equivalent of a lifetime or more, it's not the same as having tens of thousands of units out in the field, where all those "tail of the curve" problems show up. Tests based on large populations and the laws of large numbers tell you things that a smaller, well-defined test process can't; that's reality.
Have you ever tried to anticipate "everything" that could go wrong, but once units were in the field, been bitten by the law of unforeseen consequences? And did the "smart people" (including lawyers) irritate and annoy you after the fact by saying that you should have seen this possibility?