Maximizing battery life on embedded platforms - Part 4. Turning off peripherals and subsystems
- In Part 1, the author reviews key methods for power reduction and addresses the nature of efficiency in embedded systems
- In Part 2, he looks at the energy cost of memory access and power-reduction methods for memory access
- In Part 3, he examines the energy cost of memory access
- This final part discusses considerations in selectively powering down peripherals and subsystems.
Friendly and unfriendly peripherals
Some systems have very clever peripherals, and they are not just there to fill up the available silicon space, so use the peripheral system to your advantage. If you have a DMA engine and need to move large amounts of data around, use it! During the transfer, the CPU can either go off and do something else in parallel or, if nothing else needs doing, be put to sleep and woken up when the data is in place.
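A minimal sketch of that pattern, with hypothetical DMA register names (`DMA_SRC`, `DMA_CTRL`, and so on, modelled as plain variables so the sketch runs on a host; on real hardware these would be your controller's memory-mapped registers, and the sleep would be a WFI-style instruction):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DMA registers -- substitute your MCU's actual controller. */
static volatile uintptr_t DMA_SRC, DMA_DST;
static volatile uint32_t  DMA_LEN, DMA_CTRL, DMA_STATUS;
#define DMA_START (1u << 0)
#define DMA_DONE  (1u << 0)

/* Stand-in for the hardware doing the copy while the core sleeps. */
static void dma_hw_model(void) {
    memcpy((void *)DMA_DST, (const void *)DMA_SRC, DMA_LEN);
    DMA_STATUS |= DMA_DONE;
}

/* On Arm this would be the WFI instruction; here the wake-up
   "interrupt" is modelled by letting the DMA finish its work. */
static void cpu_sleep_until_irq(void) {
    dma_hw_model();
}

void dma_copy_and_sleep(const uint8_t *src, uint8_t *dst, uint32_t len) {
    DMA_SRC    = (uintptr_t)src;
    DMA_DST    = (uintptr_t)dst;
    DMA_LEN    = len;
    DMA_STATUS = 0;
    DMA_CTRL   = DMA_START;        /* kick off the transfer             */
    while (!(DMA_STATUS & DMA_DONE))
        cpu_sleep_until_irq();     /* sleep instead of busy-polling     */
}
```

The point is the shape of the loop: the core only wakes to check the done flag, rather than burning power copying bytes itself.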
Also worth considering is the relative speed of your core compared to your peripherals. When doing something that is bounded by the speed of the peripherals, there is no point in keeping the core running at full speed; it will spend most of its time waiting. A good example is programming flash memory. The algorithm spends most of its time waiting for a response from the memory device, and no matter how fast you run the core, the memory will still respond at the same speed. So, if you are not doing anything else and you can control the clock speed of the core, reduce it to the minimum speed that still allows you to respond to the memory in time. The core will spend the same time idle but will consume less power while waiting for each response.
The overall time to complete the operation will remain the same but the energy usage will go down.
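A toy cost model makes the arithmetic concrete. The numbers here are illustrative assumptions, not datasheet values: the flash is assumed to take a fixed 100 µs per word regardless of the core clock, and active power is taken as roughly proportional to frequency (10 µW per MHz):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative numbers only -- not from any real datasheet. */
#define FLASH_RESPONSE_US 100u  /* device latency per word (assumed)   */
#define WORDS              64u

typedef struct { uint32_t time_us; uint32_t energy_nj; } prog_cost;

/* Total time is fixed by how fast the flash responds, not by the
   core clock; energy is power (proportional to frequency) x time. */
static prog_cost flash_program_cost(uint32_t core_mhz) {
    prog_cost c;
    c.time_us   = FLASH_RESPONSE_US * WORDS;          /* set by device */
    c.energy_nj = c.time_us * core_mhz * 10u / 1000u; /* E = P * t     */
    return c;
}
```

Evaluating the model at 10 MHz and 100 MHz gives identical completion times but a tenth of the energy at the lower clock, which is exactly the trade described above.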
Throughput ≠ Latency
Now, as developers of embedded systems, we are all concerned with responding to external events. Sometimes these events must be responded to within a maximum time: this is the latency requirement of the event. Embedded systems are typically defined by the latency within which they must respond to these events. But not all events are equal. Some events need a response but do not need it within a specified time. Others need a response within a specified time, but that time is so long compared with the rest that it effectively doesn't matter. In these cases the key requirement is that we maintain throughput by responding to all events; the latency of individual responses is immaterial.
So, don’t confuse latency and throughput. In general, processing an external event will involve some kind of context switch. At best, it may involve suspending some other task, at worst it may involve waking up the entire system. If the response can be delayed or done at some later, more convenient time, the overhead can be reduced or even eliminated.
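One common way to exploit this is a deferred-work queue: latency-critical events are handled immediately, while the rest are queued cheaply and drained in one batch at a convenient time, paying the wake-up and context-switch overhead once instead of per event. A minimal sketch (the queue type and function names are illustrative, not from any particular RTOS):

```c
#include <assert.h>
#include <stddef.h>

#define QUEUE_MAX 32

typedef struct {
    int    events[QUEUE_MAX];
    size_t count;
} deferred_queue;

/* Cheap enqueue: no context switch, no wake-up of the main system. */
static void defer_event(deferred_queue *q, int ev) {
    if (q->count < QUEUE_MAX)
        q->events[q->count++] = ev;
}

/* Drain in one batch at a convenient time, e.g. when the core is
   awake anyway. Every event is still processed, so throughput is
   preserved even though individual latencies grew. */
static size_t drain(deferred_queue *q, void (*handle)(int)) {
    size_t n = q->count;
    for (size_t i = 0; i < n; i++)
        handle(q->events[i]);
    q->count = 0;
    return n;
}

/* Small counting handler used to demonstrate the batch drain. */
static int handled;
static void count_handler(int ev) { (void)ev; handled++; }
```

In a real system `drain` would run from the idle loop or a low-priority task, so the expensive wake-up is amortized across the whole batch.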