Extended-precision simulation cures SPICE convergence problems

July 22, 2013

Convergence issues are a frustrating deterrent to the use of simulation in analog and power electronics design, and in the four decades of SPICE, millions of engineers have suffered from the failed, painfully slow, or incorrect simulations that result.  We found that a large class of convergence issues (especially in switching-dominated circuits) came from well-defined circuits that simply collided with the dynamic range of the IEEE 64-bit floating point “double” type.  We have now released the first extended-precision numerical core inside a circuit simulation engine.

Unsolvable Equations  

The CircuitLab community ran more than half a million simulations last month.  Each one involved constructing tens, hundreds, or even thousands of simultaneous, highly nonlinear, strongly coupled differential equations, which are repeatedly linearized and solved iteratively.  Last month alone, our software constructed and factored more than 600 million matrices, and solved those factored systems more than 1.6 billion times.

Some simulations were failing to converge even though the circuits were well defined, by which we mean that any proficient high school math student could have computed the correct solution.  But the numerical methods in the solver could not!  

For example, here’s a two-diode circuit in another simulation engine:


This simple circuit situation occurs many times within larger systems, often with one or both diodes implicitly hidden within a power MOSFET or a CMOS input.  It is easily analyzed by inspection: the current should be limited in both directions to the diode reverse saturation current, plus any transient current charging the diode.  Yet it faces serious convergence issues in simulation.  In another simulation engine, this circuit takes hours and hours to even partially simulate, and displays noisy currents millions of times larger than would be seen in reality.  Does your software do this?

In CircuitLab, with the new extended-precision numerical core we’ll describe below, this circuit and others like it simulate quickly and correctly in milliseconds (try it instantly -- no download required):


Here’s the core of the problem:

    1 + 1e-16

Try computing this sum in your favorite programming language.  The result is simply:

    1
when, to any high schooler (unaware of the limited precision of floating-point datatypes), the answer should of course be:

    1.0000000000000001
This truncation happens because most programming languages use a standard IEEE “double precision” 64-bit floating point number to represent real numbers internally, which is like working on a calculator that only has room for about 16 decimal digits and must truncate the rest between operations.
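You can check this truncation directly.  Here is a minimal sketch in Python (any language that uses IEEE-754 doubles behaves the same way; the simulator core itself is not Python):

```python
# IEEE-754 doubles carry a 53-bit significand, roughly 15-16 decimal digits.
import sys

print(sys.float_info.epsilon)  # machine epsilon, about 2.22e-16

a = 1 + 1e-16   # the 1e-16 term falls below half an epsilon...
print(a == 1)   # ...so the sum rounds straight back to 1: prints True

b = 1 + 2.3e-16  # just above epsilon, the extra term survives
print(b == 1)    # prints False
```

Any addend smaller than about half of machine epsilon relative to the larger operand is simply rounded away.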

How much could this tiny difference make?  One part in 10^16?  Impossible for this to become an issue, right?  Typically, that is exactly right -- no real difference at all.

The real problem begins here:

    1 + 1e-16 + -1

which is evaluated as

    ((1 + 1e-16) + -1)

    = (1 + -1)

    = 0

That’s a zero inserted into an equation where there really ought to be a 1e-16.  And in solving systems of equations, a spurious zero coefficient can be quite a scary thing!
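The culprit is that floating-point addition is not associative: the grouping decides whether the tiny term is truncated away or survives, as this quick Python illustration shows:

```python
# Floating-point addition is not associative: the order of operations
# decides whether the 1e-16 term survives or is lost.
left = (1 + 1e-16) + -1    # tiny term lost against 1 first, then cancelled
right = (1 + -1) + 1e-16   # exact cancellation first; tiny term survives

print(left)    # 0.0
print(right)   # 1e-16
```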

These operations happen in the course of LU factorization, when a circuit matrix is factored to be solved.  In an infinite-precision world, LU factorization cannot destroy information from the original matrix, or convert a non-singular matrix into a singular system.  But in a limited-precision numerical implementation, it can!
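To make this concrete, here is a hypothetical sketch (the helper `solve_2x2_no_pivot` and the 2x2 system are our own illustration, not CircuitLab’s actual code) of LU-style elimination without pivoting on a system whose exact solution is x1 ≈ x2 ≈ 1.  Python’s `fractions` module stands in for an extended-precision core:

```python
# LU-style elimination without pivoting on:
#   1e-20*x1 + x2 = 1
#        x1 + x2 = 2      (true solution: x1 ≈ x2 ≈ 1)
from fractions import Fraction

def solve_2x2_no_pivot(a11, a12, a21, a22, b1, b2):
    """Forward-eliminate then back-substitute, with no row pivoting."""
    m = a21 / a11         # multiplier; huge when the pivot a11 is tiny
    a22p = a22 - m * a12  # in doubles, a22's contribution is swallowed here
    b2p = b2 - m * b1
    x2 = b2p / a22p
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

naive = solve_2x2_no_pivot(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0)
print(naive)  # doubles give x1 = 0.0 -- information destroyed in elimination

exact = solve_2x2_no_pivot(Fraction(1, 10**20), Fraction(1), Fraction(1),
                           Fraction(1), Fraction(1), Fraction(2))
print(float(exact[0]), float(exact[1]))  # both ≈ 1.0, matching hand analysis
```

In double precision, the multiplier 1e20 makes `1 - 1e20` collapse to `-1e20`, and back-substitution then returns x1 = 0 instead of 1.  With exact (or sufficiently extended) arithmetic, the same sequence of operations recovers the correct answer.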
