John.Bass's profile
Owner/Sr Engineer

John Bass is a seasoned hardware/software developer and consultant with over 40 years of industry experience across a broad span of industry applications. His formal education is diverse, with Business, Science, Statistics, Electrical Engineering, and Computer Science training over 11 years, resulting in a B.S. in Computer Science from Cal Poly, San Luis Obispo: something like a computer engineering degree, with a strong science and business background. He has extensive industry experience with drivers, porting, and operating systems, combined with hardware/software/firmware development of server-level systems, embedded systems, motion control systems, and robotics. Other experience includes reconfigurable computing applications with Xilinx FPGAs, 802.11 mesh networks, and Canopy wireless networks.


John.Bass's contributions
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • When using the above case/else state machine design, there is a rigorous safe form that prevents miscoding state transitions. First, we define an explicit PANIC state that is zero, to catch coding/design errors where the next state is not explicitly assigned:

        typedef enum states { PANIC = 0, STATE_A, STATE_B, STATE_C, STATE_D } state_type;

    Second, in each state function we initialize the return value to PANIC, as a site-specified mandatory design methodology that will be checked in code reviews. This makes sure that if the function has some code path that doesn't explicitly assign the next state to the return value, the state machine will trap hard rather than invoke some other state in error:

        state_type StateD(void)
        {
            state_type ReturnValue = PANIC;

            /* ... state machine code body, including next state assignment to ReturnValue ... */

            return (ReturnValue);
        }

    So, by design and by coding methodology standards, we can avoid unspecified state transition errors that would result in incorrect operation.
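
    A minimal, compilable sketch of the pattern described in the comment above; the two-state machine, the state names, and the Panic() handler are illustrative assumptions, not part of the original comment:

        #include <stdio.h>
        #include <stdlib.h>

        typedef enum states { PANIC = 0, STATE_A, STATE_B } state_type;

        /* Trap hard on any unassigned or corrupted state. */
        static void Panic(state_type state)
        {
            fprintf(stderr, "state machine panic: state=%d\n", (int) state);
            abort();
        }

        /* Each state function starts with ReturnValue = PANIC, so any code
         * path that forgets to assign a next state traps at the dispatcher. */
        static state_type StateA(void)
        {
            state_type ReturnValue = PANIC;
            ReturnValue = STATE_B;              /* explicit next-state assignment */
            return (ReturnValue);
        }

        static state_type StateB(void)
        {
            state_type ReturnValue = PANIC;
            ReturnValue = STATE_A;
            return (ReturnValue);
        }

        int main(void)
        {
            state_type state = STATE_A;

            for (int i = 0; i < 4; i++) {
                switch (state) {
                case STATE_A: state = StateA(); break;
                case STATE_B: state = StateB(); break;
                default:      Panic(state);     break;  /* catches PANIC and corruption */
                }
            }
            return (0);
        }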
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • Market windows frequently define the success and failure of a product, a company, and your career. These all hang on the developer's ability to write and debug code quickly, without introducing failure modes that are difficult to nearly impossible to debug. A piece of code that randomly jumps into the processor execution space will sometimes trap, sometimes be benign, and sometimes cause a cascading sequence of nearly UNTRACEABLE second, third, fourth, or higher order delayed failures. The time and expense to find these failures can, and sometimes will, be greater than rewriting the application from scratch. I've been writing hardware-level C for nearly 40 years now, and have seen more than a dozen clients with low frequency, random failures that simply could not be found with the time schedules and resources that were available.

    Failure complexity is a metric of a design: the scale of the time and resources that would be necessary to resolve the errors when the worst possible failures occur. Use of function pointers increases the failure complexity of a design significantly over other design choices, by an exponential factor, simply because jumping randomly into the processor address space leaves corruption that may not affect operation until hours, days, weeks, or months later. This unconstrained time-to-failure problem makes certain design choices EXCEPTIONALLY risky for any business.
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • Hi Jacob, I've been writing drivers and systems-level code in C since 1974 ... I use function pointers for personal code, and STRONGLY advocate that they should NEVER be used in production code unless there is NO OTHER WAY. As implemented above, switch/case is ALWAYS cleaner, easier to understand, easier to maintain, and SAFER. When programming drivers, threaded code, and large applications, where memory corruption and failures to make all globals thread safe are real risks, it is SIGNIFICANTLY less prone to jumping into nowhere, and to spending hours with a logic analyser trapping a failing case that occurs only every several days/weeks/months.
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • I believe your assumption "The cyclomatic complexity of switch/case statements rises much more drastically than that of the function pointer based state machine." only applies with certain coding styles which fall through into the next case, and those are problematic in any case. There should be a literal 1:1 translation from switch/case to function pointers, or from function pointers to switch/case, with identical cyclomatic complexity. In the fall-through, we are simply introducing state chaining without the extra state transition.

    Consider that any function pointer design can be recoded to switch/case where every case block is a single function call, a next-state assignment, and a break. If the state variables are enums with sequential assignment, the case statement will be compiled into a bounds-checked jump table that is either in read-only text memory or in initialized data memory (possibly read-only constant data memory in some compiler designs). This construct uses type-safe state assignments, has a 1:1 complexity mapping to a function pointer design with minimal code inflation, and is automatically checked for illegal states at the default clause. It scales linearly in complexity, just as function pointer designs do, without the failure complexity issues.

        typedef enum states { STATE_A, STATE_B, STATE_C, STATE_D } state_type;

        state_type StateA(void);
        state_type StateB(void);
        state_type StateC(void);
        state_type StateD(void);
        void Panic(state_type last_state, state_type state);

        void State_Machine(void)
        {
            static state_type state = STATE_A, last_state = STATE_A;

            switch (state) {
            case STATE_A: last_state = state; state = StateA(); break;
            case STATE_B: last_state = state; state = StateB(); break;
            case STATE_C: last_state = state; state = StateC(); break;
            case STATE_D: last_state = state; state = StateD(); break;
            default:      Panic(last_state, state);             break;
            }
        }
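
    For comparison, a sketch of the function pointer version that the switch above maps to 1:1; the state_table name and the hand-written bounds check are illustrative assumptions, not from the original comment:

        typedef enum states { STATE_A, STATE_B, STATE_C, STATE_D } state_type;

        state_type StateA(void);
        state_type StateB(void);
        state_type StateC(void);
        state_type StateD(void);
        void Panic(state_type last_state, state_type state);

        /* Table of state handlers, indexed by the enum; one entry per state. */
        static state_type (* const state_table[])(void) = {
            StateA, StateB, StateC, StateD
        };

        void State_Machine(void)
        {
            static state_type state = STATE_A, last_state = STATE_A;

            /* Unlike the switch default clause, the bounds check on the
             * state index has to be written (and remembered) by hand. */
            if ((unsigned) state >= sizeof(state_table) / sizeof(state_table[0])) {
                Panic(last_state, state);
                return;
            }
            last_state = state;
            state = state_table[state]();
        }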
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • Definitely a simpler, easier implementation, but it gives up the implicit type checking that comes from declaring states with typedef enum; many would say that is NOT better for production code that will be maintained by a junior team over long periods of time. My objection to using function pointers instead of switch/case still applies, especially for code maintained by a team of junior engineers over long periods.
  • 02.12.2013
  • Function pointers - Part 3, State machines
  • From a defensive programming point of view, a switch/case is SIGNIFICANTLY safer. This is critical when memory corruption occurs, especially in multi-threaded applications, or where attacks against the application from external threats are likely. Having the jump table in read-only memory, and the state index bounds checked, removes one case to fight when chasing unexpected execution-flow failures from corruption or attacks. Of course, a perfect programmer who never makes mistakes doesn't ever need to worry about reducing failure complexity.
  • 01.29.2013
  • Ensuring bandwidth and QoS: Several unfortunate technology back-steps
  • QoS with 802.11 has been flawed FOREVER in terms of guaranteed bandwidth, and will always remain so as long as the underlying protocol is based on collision sense. Today, in the 2.4GHz and 5.7GHz bands, all nine non-overlapping channels are active in most populated areas, causing extensive, unpredictable transmitter hold-off delays due to carrier sense. 802.11a/b/g/n, combined with extended range portable phones, baby monitors, video extenders, video security cameras, Bluetooth, R/C controls, fixed wireless internet providers, and other applications, almost guarantees that some signal above the receiver thermal noise threshold will invade your home and trigger a transmitter hold-off due to carrier sense of a remote transmitter outside your home (or another application inside your home). 802.11 as implemented doesn't dynamically select the minimum power necessary to maintain a reliable connection ... so it always transmits at max power, further increasing the RF pollution in populated areas.

    TDMA protocols scale SIGNIFICANTLY better. The idea of spread spectrum carrier signals below the noise floor works better when every device ISN'T using the same chipping pattern. The real problem is that nearly everyone is still trying to use the 3 channels in unlicensed 2.4GHz ISM spectrum, and possibly also the 6 channels in the 5.7-5.8GHz spectrum, while many devices built in the last 5 years also have access to many more channels at 5.2-5.7GHz. Simply abandon 802.11a/b/g/n and replace it with a good TDMA protocol with real on-the-fly DFS and minimum link power per connection, using a combination of GPS timing with NTP-based fall-back and local oscillator drift/temperature calibration.
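
    A rough, hypothetical sketch of the TDMA slot timing idea in the comment above; the frame/slot constants are invented for illustration, and the system realtime clock stands in for a GPS-disciplined source with NTP fall-back and local oscillator drift/temperature calibration:

        #include <stdint.h>
        #include <stdbool.h>
        #include <time.h>

        /* Hypothetical TDMA parameters: a 10 ms frame split into 10 slots of 1 ms. */
        #define SLOTS_PER_FRAME   10u
        #define SLOT_LEN_US       1000u
        #define FRAME_LEN_US      (SLOTS_PER_FRAME * SLOT_LEN_US)

        /* Stand-in for the synchronized time source; returns microseconds. */
        static uint64_t get_synced_time_us(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            return (uint64_t) ts.tv_sec * 1000000u + (uint64_t) ts.tv_nsec / 1000u;
        }

        /* A node assigned 'my_slot' transmits only inside that slot, so nodes
         * sharing a frame never contend for the channel or hold each other off. */
        static bool may_transmit(unsigned my_slot)
        {
            uint64_t now = get_synced_time_us();
            unsigned current_slot = (unsigned) ((now % FRAME_LEN_US) / SLOT_LEN_US);
            return current_slot == my_slot;
        }

        int main(void)
        {
            while (!may_transmit(3u))
                ;                   /* busy-wait shown only for brevity */
            return (0);
        }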
  • 02.14.2012
  • Future of computing - Part 3: The ILP Wall and pipelines
  • Once upon a time, my computer's ALU was a single bit wide, with processing done as bit-serial memory operations. An entire computer was measured in a few hundred bits of logic, a few thousand bits of core memory registers, and a few million bits of drum storage. When a bit in a flip-flop took several short-lived tubes, computers were pure KISS, but relatively large, hot, and power hungry per bit compared to today. It's been more than a few years since I've seen a Bryant drum spinning quietly. G15's, B200 series, and the like.

    Today, a few trillion logic cells are easily available, and as the author notes, much of the logic in a modern microprocessor isn't actually doing the real work ... it's just meant to minimize pipeline stalls, provide caches, and do speculative prediction. There is, however, another form of computing, where code is translated directly to logic, registers, and wiring. While the sequential semantics are honored, the actual parallelism is fully exploited when there are a trillion gates available to construct registers, logic, and memory that actually perform work nearly every cycle. We prototype with FPGAs ... but the same logic is easily reduced to ASICs ... computing without the artificial boundaries imposed by instruction sets, ALU construction, ALU/memory pipelines, and branch prediction. Lumping all the data into a critical-path memory/cache/register file is just an unnecessary serialization that logic computing leaves behind. Every memory object is an independent FF register, fully parallel, and not bottlenecked as in your classic CPU/memory architectures.

    A decade ago, the hardware engineers insisted that FPGAs, PLDs, and ASICs were Verilog/VHDL only. Then SystemC became important for simulation. And then we find that people finally do get that you can write portable C code that executes just fine without a CPU/memory and the other sequentially serial limitations of traditional computer architectures. At first ... there was a scramble to bend C into an HDL with parallel extensions. That's not really necessary ... anyone who can write for a tiny embedded processor can also write C for logic execution. As multithreaded programming becomes the norm, more and more programmers understand parallelism better. OpenMP is a nice start for a wonderful parallel C variant that works just fine on common CPU architectures ... and can exploit logic execution environments naturally.
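
    A tiny sketch of the OpenMP style of parallel C mentioned above; the vector-add example and its names are illustrative assumptions, not from the original comment:

        #include <stdio.h>
        #include <omp.h>            /* compile with e.g. -fopenmp */

        #define N 1024

        /* Each iteration is independent, so the loop can be spread across
         * CPU cores by OpenMP, or unrolled into parallel adders when the
         * same C is compiled to logic instead of instructions. */
        static void vector_add(const float *a, const float *b, float *c, int n)
        {
            #pragma omp parallel for
            for (int i = 0; i < n; i++)
                c[i] = a[i] + b[i];
        }

        int main(void)
        {
            static float a[N], b[N], c[N];

            for (int i = 0; i < N; i++) {
                a[i] = (float) i;
                b[i] = (float) (N - i);
            }

            vector_add(a, b, c, N);
            printf("c[0]=%g c[%d]=%g\n", c[0], N - 1, c[N - 1]);
            return (0);
        }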
  • 05.26.2005
  • Embedded-system programmers must learn the fundamentals
  • The reality is that a very small percentage of programmers really do need a firm understanding of the hardware platform, IF the tools are well optimized for the platform ... compilers, operating systems, etc. There occured a split a few years back, which is the hybrid between digital engineering for hardware/software, something of a merger, and now called computer engineering where both of these disciplines are taught to a growing number of students. It's not needed, and indeed wasteful to teach pure applications programmers assembly language and very detailed machine architectures when the tools can, and do, hide that well .... and in fact, over time, are optimized so well, that they do a better job of both global and micro optimization than a human can for the time budgeted for the project. We have a very similar problem with hardware engineers, who are stuck in micro-optimization of circuit design. Exactly the reverse problem, where they are unable and unwilling to effectively exploit using more powerful abstract design tools, and hinder development processes by limiting the underlying design to what can be designed a transistor/gate at a time. When 80,000 gate fpga's are $50 or less in volume, doing gate level netlist designs in VHDL/Verilog hinders progress. The lack of traing for hardware engineers to embrace hardware/software co-design and be able to write clean effective firmware/software systems as an integral part of a hardware design limits our hardware architecture progress. Because of this, we have computers designed with 50 year old architectures ... same old thing, just bigger and faster. Innovation in hardware architectures has been at a stand still ... and it's that innovation, requiring cross discipline experience, that needs to grow to use our new hardware effectively. With hope, being able to shed this horrible instruction/memory bottleneck by coding algorithms directly into gates will allow advances unmached for the last decade ... and Moore's law will continue, with changes in architecture, not advances in Dino architectures that we are used to. Reconfigurable computing, algorithms in FPGA's and VLSI.