# Hug an XOR gate today: an introduction to Reed-Muller logic

**EDN Design Feature: March 1, 1996**

**Clive "Max" Maxfield, Intergraph Computer Systems**

**Including XOR gates in your functions offers significant benefits by reducing transistor and track counts and increasing testability.**

As designers, we're used to representing digital functions using Boolean expressions in conventional sum-of-products or product-of-sums forms, which we predominantly implement using AND, OR, NAND, NOR, and NOT gates. Other forms of representation, such as Reed-Muller logic, prominently feature XOR gates. Employing XOR gates in functions often facilitates circuit testing, and such implementations can also offer significant benefits by employing fewer transistors and tracks.

The classical form of Reed-Muller logic is based on sum-of-products-style representations, in which XOR gates replace OR gates. You can also implement functions using only XOR and XNOR gates, which I shall call "pure" Reed-Muller functions.

A function is suitable for a pure Reed-Muller implementation if its Karnaugh map displays a checkerboard pattern of zeros and ones. For example, consider the truth table and Karnaugh map associated with a familiar two-input function, where &=AND, |=OR, ^=XOR, and ~=NOT (Figure 1).

You can easily recognize this truth table as that of an XOR function, so it should come as no great surprise to find that implementing the table as a single XOR gate is preferable to basing your implementation on multiple AND, OR, and NOT gates. Similar checkerboard patterns apply to functions with more inputs (Figure 2).

Because XOR is both commutative (a^b = b^a) and associative ((a^b)^c = a^(b^c)), it doesn't matter which combinations of inputs you apply to the individual gates. These simple checkerboard patterns are reasonably well-documented. However, while playing mind games one evening, I realized that checkerboard patterns involving larger groups of zeros and ones also indicate functions suitable for a pure Reed-Muller implementation. Some of these patterns may not be obvious at first, so visualize the Karnaugh map as a ceramic tile and imagine the effect of covering a surface with many such tiles (Figure 3).
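Both properties are easy to confirm exhaustively over one-bit values; here is a minimal sketch (my own illustration, not code from the article):

```python
# Exhaustively confirm that one-bit XOR is commutative and associative,
# so the grouping of inputs into individual XOR gates is irrelevant.
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    assert a ^ b == b ^ a              # commutative
    assert (a ^ b) ^ c == a ^ (b ^ c)  # associative
```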

Once you recognize the checkerboard pattern, you can use a rule of thumb to quickly determine the variables to use in the Reed-Muller implementation. Select any group of zeros or ones and identify the significant and redundant input variables of that group, where the significant variables are those having the same values for all boxes forming the group, and the redundant variables are those having values that vary among boxes. Simply XORing the significant variables together determines the resulting Reed-Muller expression. Whether the significant variables have values of zero or one is irrelevant (Figure 4).
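The rule lends itself to automation. As an illustration (my own sketch; `pure_reed_muller_vars` is a hypothetical helper, not from the article), the following identifies the significant variables of a truth table and confirms that XORing them reproduces the function, for the case with a zero in the all-zeros box:

```python
from itertools import product

def pure_reed_muller_vars(f, n):
    """If the n-input truth table f is the XOR of a subset of its
    inputs (the checkerboard case with a 0 in the all-zeros box),
    return the indices of the significant variables; else None."""
    # A variable is significant if flipping it ever changes the output.
    sig = [i for i in range(n)
           if any(f(*bits) != f(*(bits[:i] + (1 - bits[i],) + bits[i + 1:]))
                  for bits in product((0, 1), repeat=n))]
    # Check that XORing just the significant variables reproduces f.
    ok = all(f(*bits) == (sum(bits[i] for i in sig) & 1)
             for bits in product((0, 1), repeat=n))
    return sig if ok else None

# f(a,b,c,d) = a ^ c: a and c are significant; b and d are redundant.
print(pure_reed_muller_vars(lambda a, b, c, d: a ^ c, 4))  # → [0, 2]
```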

These checkerboard patterns are abundant; those in Figure 4 represent only a small sample. Because all of the checkerboard patterns in Figure 4 include a logic zero in the box in the upper left corner of the Karnaugh map, which corresponds to all the inputs being zero, you can realize the resulting Reed-Muller implementations using only XOR gates. You can also replace XOR gates with XNOR gates, as long as the implementation contains an even number of XNORs.

By comparison, if the checkerboard pattern features a logic one in the box in the upper left corner of the Karnaugh map, then the Reed-Muller implementation must contain an odd number of XNOR gates or an inversion at the output. Again, it does not matter which combinations of inputs you apply to the individual XORs and XNORs (Figure 5).
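The parity rule is easy to check: in a three-input chain, a single XNOR inverts the XOR result, while two XNORs cancel each other out. A minimal sketch (my own illustration, not code from the article):

```python
from itertools import product

xnor = lambda a, b: (a ^ b) ^ 1  # one-bit XNOR

for a, b, c in product((0, 1), repeat=3):
    assert xnor(a ^ b, c) == (a ^ b ^ c) ^ 1  # one XNOR (odd): output inverted
    assert xnor(xnor(a, b), c) == a ^ b ^ c   # two XNORs (even): inversions cancel
```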

Several mathematical techniques exist for extracting Reed-Muller expressions, but these techniques are typically complex and time-consuming. By comparison, checkerboard Karnaugh map patterns are easy to recognize and evaluate.

*Binary-to-gray-code converters*

Reed-Muller implementations are often appropriate for circuits performing arithmetic or encoding functions, such as binary-to-gray-code converters and their gray-code-to-binary counterparts. In a gray code, only one bit differs between adjacent states.

When covering the conversion of binary codes to gray codes and vice versa, textbooks on digital design almost invariably show implementations based on XOR gates, but the books' authors rarely explain how they derived these solutions. When reading these textbooks, I found this omission immensely frustrating, because, although I could extract the Boolean equations, it was difficult to see how to boil them down into the books' XOR implementations. The graphical techniques above, however, immediately reveal the potential for pure Reed-Muller implementations. For example, consider the case of a binary-to-gray-code converter (Figure 6). You can also see similar checkerboard patterns in a gray-to-binary-code converter.
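In fact, the XOR network that the patterns reveal collapses, for a whole word, into a single shift-and-XOR: each gray bit is the XOR of a binary bit and its more significant neighbor. The following sketch (my own, not from the article) implements both directions:

```python
def binary_to_gray(b):
    # g[i] = b[i+1] ^ b[i]; the MSB passes straight through.
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Inverse: each binary bit is the XOR of that gray bit and
    # all the gray bits above it.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Round trip works, and adjacent gray codes differ in exactly one bit.
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
           for i in range(255))
```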

*Partial implementations*

Applications that are suitable for pure Reed-Muller solutions can realize significant savings in the number of gates required to realize a function. Unfortunately, relatively few functions are amenable to such an implementation, and these functions tend to be those with which you are already familiar, such as parity generators and binary-to-gray-code converters. However, a number of well-known functions are susceptible to partial Reed-Muller implementations. For example, consider a simple 4-bit synchronous binary counter based on four D-type flip-flops (Figure 7).

The traditional approach is to draw a truth table showing the current count values as inputs and the next count values as outputs and to then extract the equations for the combinational logic to generate the next count values. This would result in the following first-pass solution:

d3=(~q3&q2&q1&q0)|(q3&~q2)|(q3&~q1)|(q3&~q0),

d2=(~q2&q1&q0)|(q2&~q1)|(q2&~q0),

d1=(~q1&q0)|(q1&~q0),

and

d0=~q0.
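As a quick sanity check (my own sketch, with a hypothetical `counter_next` helper, not code from the article), these equations do step a 4-bit count to (count+1) mod 16:

```python
def counter_next(count):
    # Unpack the current count into the four register outputs.
    q3, q2, q1, q0 = (count >> 3) & 1, (count >> 2) & 1, (count >> 1) & 1, count & 1
    n = lambda x: x ^ 1  # one-bit NOT
    d3 = (n(q3) & q2 & q1 & q0) | (q3 & n(q2)) | (q3 & n(q1)) | (q3 & n(q0))
    d2 = (n(q2) & q1 & q0) | (q2 & n(q1)) | (q2 & n(q0))
    d1 = (n(q1) & q0) | (q1 & n(q0))
    d0 = n(q0)
    return (d3 << 3) | (d2 << 2) | (d1 << 1) | d0

assert all(counter_next(c) == (c + 1) % 16 for c in range(16))
```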

This solution neglects those characteristics associated with the physical implementation, such as the switching speeds of the logic gates or the number of transistors they require. However, NAND gates require fewer transistors and switch faster than AND and OR gates, so you can use standard DeMorgan transformations to achieve a better solution, as follows:

d3=~(~(~q3&q2&q1&q0)&~(q3&~q2)&~(q3&~q1)&~(q3&~q0)),

d2=~(~(~q2&q1&q0)&~(q2&~q1)&~(q2&~q0)),

d1=~(~(~q1&q0)&~(q1&~q0)),

and

d0=~q0.
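You can confirm each DeMorgan transformation exhaustively; for example, for the d1 equation (a sketch of my own, not code from the article):

```python
n = lambda x: x ^ 1  # one-bit NOT

for q1 in (0, 1):
    for q0 in (0, 1):
        sop = (n(q1) & q0) | (q1 & n(q0))        # original AND/OR form
        nand = n(n(n(q1) & q0) & n(q1 & n(q0)))  # NAND-only form
        assert nand == sop
```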

Now, consider an alternative approach—a partial Reed-Muller implementation—based on an old PLD designer's trick. In this function, you can determine any data input (d(n)) by XORing the output from the corresponding register (q(n)) with the AND of all the less significant register outputs. This approach leads to the following first-pass solution:

d3=q3^(q2&q1&q0),

d2=q2^(q1&q0),

d1=q1^q0,

and

d0=~q0.
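The same sanity check (my own sketch, with a hypothetical `rm_next` helper) confirms that the XOR-based equations count correctly:

```python
def rm_next(count):
    q3, q2, q1, q0 = (count >> 3) & 1, (count >> 2) & 1, (count >> 1) & 1, count & 1
    d3 = q3 ^ (q2 & q1 & q0)  # toggle q3 when all lower bits are 1
    d2 = q2 ^ (q1 & q0)       # toggle q2 when q1 and q0 are 1
    d1 = q1 ^ q0              # toggle q1 when q0 is 1
    d0 = q0 ^ 1               # q0 toggles every cycle
    return (d3 << 3) | (d2 << 2) | (d1 << 1) | d0

assert all(rm_next(c) == (c + 1) % 16 for c in range(16))
```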

Again, the first-pass solution neglects any characteristics associated with the physical implementation. You can obtain a more efficacious solution by exchanging the AND gates with NANDs and replacing the XORs with XNORs, as follows:

d3=~(q3^~(q2&q1&q0)),

d2=~(q2^~(q1&q0)),

d1=~(q1^~q0),

and

d0=~q0.

In fact, you can achieve an even better solution by keeping the original XORs and driving their non-NAND-driven inputs from the flip-flops' inverted outputs, as follows:

d3=~q3^~(q2&q1&q0),

d2=~q2^~(q1&q0),

d1=~q1^~q0,

and

d0=~q0.
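The XNOR form and this inverted-input XOR form compute the same function, because ~(a^b) = ~a^b for one-bit values. A quick check of my own (not code from the article) over all 16 states:

```python
def next_xnor(count):
    # XNOR form: ~(qn ^ ~(AND of less significant bits))
    q3, q2, q1, q0 = (count >> 3) & 1, (count >> 2) & 1, (count >> 1) & 1, count & 1
    n = lambda x: x ^ 1  # one-bit NOT
    return (n(q3 ^ n(q2 & q1 & q0)) << 3) | (n(q2 ^ n(q1 & q0)) << 2) | \
           (n(q1 ^ n(q0)) << 1) | n(q0)

def next_inv_xor(count):
    # Inverted-input XOR form: ~qn ^ ~(AND of less significant bits)
    q3, q2, q1, q0 = (count >> 3) & 1, (count >> 2) & 1, (count >> 1) & 1, count & 1
    n = lambda x: x ^ 1
    return ((n(q3) ^ n(q2 & q1 & q0)) << 3) | ((n(q2) ^ n(q1 & q0)) << 2) | \
           ((n(q1) ^ n(q0)) << 1) | n(q0)

assert all(next_xnor(c) == next_inv_xor(c) == (c + 1) % 16 for c in range(16))
```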

This solution is preferable to replacing the XORs with XNORs, because it removes some of the loads from the flip-flops' true outputs. This reduction balances the loading on the true and inverse outputs, thereby reducing loading delays.

*Benefits of partial Reed-Muller*

**Table 1—Boolean vs partial Reed-Muller implementations**

| | Optimized Boolean | Optimized partial Reed-Muller | Comparison (%) |
|---|---|---|---|
| Maximum delay (nsec) | 1.713 | 1.758 | 2.6 increase |
| Transistors | 60 | 40 | 33 reduction |
| Tracks | 20 | 12 | 40 reduction |
| Track segments | 34 | 15 | 55 reduction |

The results in Table 1 take full account of the loading effects that the registers' outputs experience but omit the transistors that implement the registers, which are common to both implementations. The XORs in this library are buffered pass-transistor implementations, each requiring only 10 transistors (as opposed to building the XORs from ANDs, ORs, and NOTs, which would require more transistors and would be slower). I optimized both solutions using the first data book that came to hand from my bookshelf.

Thus, for the example in the table, the partial Reed-Muller implementation is only slightly slower than its Boolean counterpart but offers significant benefits in reduced area utilization, because the partial implementation requires substantially fewer transistors and tracks. Unfortunately, most currently available synthesis and optimization tools make minimal use of XOR gates, because extracting these partial solutions is complex. However, the use of partial Reed-Muller solutions may increase as new techniques become available.

*In his next article, Maxfield will examine a simple C program that examines truth tables, determines those suitable for pure Reed-Muller implementations, and extracts the appropriate Reed-Muller expressions.*

**Author's Biography**

*Clive "Max" Maxfield is a member of the technical staff at Intergraph Computer Systems, Huntsville, AL, where he gets to play with the company's high-performance graphical workstations (phone (800) 763-0242). In addition to numerous technical articles and papers, Maxfield is also the author of ***Bebop to the Boolean Boogie: An Unconventional Guide to Electronics***, (ISBN 1-878707-22-1). To order, phone (800) 247-6553. You can reach Maxfield via e-mail at crmaxfie@ingr.com, or check out his web page at http://ro.com/~bebopbb/.*
