The basic building block of a classical computer is the two-state system, which also happens to be the minimum requirement for a physical system to display quantum superposition.


Imagine building a computer out of coins that can be either heads (denoted by |1\rangle) or tails (denoted by |0\rangle). When observed, there are exactly two states that a coin can be in, so we say that it is a binary system. Other analogies might be a bar magnet that points either North or South; a clock face whose hands can only point to 12 or 6; or a transistor that is either on or off.

A binary system can be in either the zero state |0\rangle or the one state |1\rangle, and this way of writing the states is called bra-ket notation. Think of it as a convenient way to write down the states of a system, which could potentially be more complicated than just on or off. We can write a double-sided arrow \leftrightarrow between the states, like this: |0\rangle \leftrightarrow |1\rangle, to indicate that the binary system can evolve to either state, but not both.

In the examples given above, each outcome (heads or tails, North or South, 12pm or 6pm, up or down, on or off, etc.) can always be mapped to either a 0 or a 1. In a world where only 0 and 1 exist, they are called binary digits.

Representing real-world things in terms of binary digits is something humans have been doing for centuries, and we’ve become rather good at it. The Indian scholar Pingala (c. 2nd century BC) represented long and short syllables in poetry as 0’s and 1’s, paving the way for Samuel Morse to invent his code for binary telecommunication, first used in 1844. Even before this, the peoples of French Polynesia would communicate using two-toned drums. And in 1605, Francis Bacon described a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as a scarcely visible variation of font in letters – now known as Bacon’s cipher.

It wasn’t until 1679 that Leibniz (apparently inspired by the hexagrams of the I Ching) invented the binary number system we still use today, mapping the sequence of binary digits 0000 to 0, 0001 to 1, 0010 to 2, 0011 to 3, 0100 to 4, 0101 to 5, 0110 to 6, 0111 to 7, 1000 to 8, and so on.
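
This mapping is mechanical enough to reproduce in a few lines of code. Here is a minimal Python sketch (illustrative, not from any historical source) that prints the table above and converts a bit string back into an integer:

```python
# Print Leibniz's mapping: 4-bit strings alongside the integers they encode.
for n in range(9):
    print(format(n, '04b'), '->', n)   # 0000 -> 0, 0001 -> 1, ..., 1000 -> 8

# And back again: interpret a string of binary digits as a base-2 number.
assert int('0111', 2) == 7
assert int('1000', 2) == 8
```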

While this “mapping” is interesting, what can we actually do with it?

Well, for one, we can store data.

In 1854, George Boole published a book, An Investigation of the Laws of Thought, in which the term Boolean algebra first appeared. Where regular old algebra has expressions denoting numbers, Boolean algebra has expressions denoting truth values, i.e. True and False. Again we see a repeat of the idea of precisely two values. Thus True can be mapped to one binary digit and False to the other; which way round is pure convention (the water-pipe analogy below puts True, a conducting path, at 1, while Shannon’s “hindrance” algebra further below puts it at 0).

Boole figured out a set of rules for combining True and False that make logical sense. For example, if x and y are both False, then x \textup{ AND } y is also False. If x is False but y is True, then x \textup{ AND } y is False, but x \textup{ OR } y is True, because at least one of them is True. And so on…

Once you have truth tables, you can introduce switches that turn on or off depending on the output of a Boolean logical statement. For example: if x \textup{ AND } y is True, then switch on.
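
Here is a minimal Python sketch of the idea (the switch function is hypothetical, purely for illustration): it tabulates Boole’s AND and OR over every truth assignment, then drives a “switch” from the result.

```python
from itertools import product

# Tabulate Boole's AND and OR over all four truth assignments.
for x, y in product([False, True], repeat=2):
    print(f"x={x!s:5} y={y!s:5}  x AND y = {(x and y)!s:5}  x OR y = {x or y}")

def switch(x, y):
    """If x AND y is True, then switch on."""
    return "On" if (x and y) else "Off"

print(switch(True, True))   # On
print(switch(True, False))  # Off
```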

But how do you physically build a switch?

Originally, switches (or relays) were built as a coil of wire wrapped around an iron core. An iron yoke provided a low-reluctance path for magnetic flux, and there was a moving iron armature and a set of contact points. The armature was held in place by a spring so that when the relay was off, an air gap broke the circuit. When an electric current passed through the coil, it generated a magnetic field that pulled on the armature, and the consequent movement against the spring either made or broke the circuit.

Placing switches or relays in series forms the logical AND gate, while placing them in parallel forms the logical OR gate. If this is not obvious, imagine the circuit as water flowing through pipes, with each switch acting as a valve. With two valves in series, water only gets through if the pressure is high enough to push through both: if we call a conducting valve a 1, then 1 AND 1 equals 1, and every other combination equals 0. With two valves in parallel, the pressure only needs to be high enough in at least one branch for water to pass, so 1 OR 0, 0 OR 1 and 1 OR 1 all equal 1, and only 0 OR 0 equals 0.
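
The analogy translates directly into code. Below is a toy Python sketch (function names are mine) using the modern 1 = conducting convention, in which a series circuit computes AND and a parallel circuit computes OR:

```python
# 1 means the switch is closed (water/current can pass), 0 means open.
def series_circuit(x, y):
    # In series, current must pass through both switches: AND.
    return x & y

def parallel_circuit(x, y):
    # In parallel, one conducting branch is enough: OR.
    return x | y

for x in (0, 1):
    for y in (0, 1):
        print(f"{x} AND {y} = {series_circuit(x, y)}   "
              f"{x} OR {y} = {parallel_circuit(x, y)}")
```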

The algebra of Boole and the algebra of relay circuits are in a fascinating one-to-one correspondence, thanks to the work of Claude E. Shannon.

For example, 0 \times 0 = 0 says: a closed circuit in parallel with a closed circuit is a closed circuit. Here the mapping is \textup{Closed Circuit } = 0, and Boolean multiplication corresponds to a parallel connection. Similarly, 1 + 1 = 1 says: an open circuit in series with an open circuit is an open circuit. Here the mapping is \textup{Open Circuit } = 1, and Boolean addition corresponds to a series connection. (This is Shannon’s “hindrance” convention, in which 0 denotes a circuit that conducts – the reverse of the 1 = conducting convention used in the water analogy above.) Similar results hold for 0 + 1 = 1 + 0 = 1, 0 \times 1 = 1 \times 0 = 0, 0 + 0 = 0 and 1 \times 1 = 1.
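
As a sanity check, here is a small Python sketch of the hindrance algebra (again, the function names are my own): series connections add hindrances, parallel connections multiply them, and the assertions mirror the identities above.

```python
# Shannon's hindrance convention: 0 = closed (conducts), 1 = open (blocks).
def series(a, b):
    # Hindrances in series add; Boolean addition saturates at 1 (1 + 1 = 1).
    return min(a + b, 1)

def parallel(a, b):
    # Hindrances in parallel multiply: both paths must block to block.
    return a * b

assert parallel(0, 0) == 0              # closed with closed is closed
assert series(1, 1) == 1                # open with open is open
assert series(0, 1) == series(1, 0) == 1
assert parallel(0, 1) == parallel(1, 0) == 0
assert series(0, 0) == 0 and parallel(1, 1) == 1
```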

This correspondence between Boolean values and configurations of closed and open circuits in series or parallel was an astounding breakthrough: it tied the abstract mathematical theory of Boolean algebra to an actual physical system.

These rules are sufficient to develop all the theorems of Boolean algebra, from which the calculus of propositions can be formed. In essence, any mathematical expression formed using only addition, multiplication and negation, like -3\times x = -1 \times (x + x + x), can be explicitly represented by a circuit containing only relays in series or parallel connections!

Truly remarkable.

In fact, each symbol, e.g. x, or \times, or +, etc., represents a make-or-break relay contact. To find the relay circuit requiring the fewest contacts, it is necessary to manipulate the algebraic expression into the form with the fewest letters. Thus -1 \times (x + x + x) is smaller than (-1)\times x + (-1) \times x + (-1) \times x, indicating that you would sum the three x’s first and then negate, rather than negate each x first and then sum.
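
The same minimization game can be played by machine. Here is a short sketch using the sympy library (the example expression is my own, chosen for illustration): a circuit of four contacts simplifies to a single contact.

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

x, y = symbols('x y')

# Four relay contacts: an x and a y contact, plus an x and a NOT-y contact.
expr = (x & y) | (x & ~y)

# Algebraic manipulation reveals a circuit with a single x contact.
print(simplify_logic(expr))   # x
```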

For more information on this, see Claude E. Shannon’s A Symbolic Analysis of Relay and Switching Circuits [1]. After Shannon released his analysis of how relays can reproduce arithmetic, many inventors went to work designing new calculators, but the machines were cumbersome and slow. The problem? They were still thinking in decimal.

How do Relays Solve Mathematical Problems?

Relays were first invented to relay (hence the name) the dots and dashes of the binary Morse code. By having the original Morse signal activate a relay before it became too weak, one could transmit a message much farther – across the entire North American continent by 1870.

For basic calculating, however (i.e. addition, subtraction, multiplication and division), the relay was not much better than the mechanical cams and gears of the time. It was, for a long time in fact, more reliable and cheaper to store a 10-digit decimal number on a train of ten-toothed gears than on a bank of multiple-contact relays. After all, mechanical calculators dated all the way back to the time of Leibniz, and in the intervening centuries the technology had matured.

But for anything more than simple arithmetic, relays had a critical advantage: they could be arranged far more flexibly. They could be mounted on a rack, or in rows and columns and with depth, utilising all three dimensions. Instead of cutting a cam, one could connect relays with thin, flexible wires according to what one wanted the machine to do. The wires themselves could be managed with a switchboard, providing further flexibility. The switchboard later gave way to perforated paper, which activated particular configurations of relays depending on where the holes were.

Try doing that to a set of gears and cams.

For decades thereafter, calculator designers built machines that calculated in decimal, on the principle that since humans used decimal numbers, machines should as well. Relays, meanwhile, were being deployed on a staggering scale; by 1924, two Bell engineers could write:

A typical office serving ten thousand subscribers would have from 40,000 to 65,000 relays with a combined strength sufficient to lift ten tons.

The American telephone system of the 1920s was a tremendous machine of millions of relays, the vast majority of them built by Western Electric (apparently, almost 5 million in 1921 alone!).

From Decimal to Binary

Across the Atlantic, in 1935, Konrad Zuse made the fundamental breakthrough of performing all the arithmetic in a relay computer in binary (rather than in decimal), with the machine also performing the conversion into binary at the beginning and back to decimal at the end. Zuse’s early designs were thus much simpler and more reliable, as each digit required only two connections instead of ten, and it didn’t matter that the machine handled all its numbers as sequences of binary digits, practically unreadable to humans.
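
To get a feel for just how unreadable, here is a small Python sketch that prints the bit pattern of a modern 32-bit floating-point number (IEEE 754, not the Z3’s 22-bit format, so treat it as an illustration rather than a re-creation):

```python
import struct

def float_to_bits(x: float) -> str:
    # Pack x as a 32-bit float, then read the same bytes back as an integer
    # so we can print its raw binary representation.
    [n] = struct.unpack('>I', struct.pack('>f', x))
    return format(n, '032b')

print(float_to_bits(3.14))   # 01000000010010001111010111000011
```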

A year later he had developed a rudimentary register of relays that could hold binary data, but he was still struggling to make a machine perform arithmetic in binary, probably because he insisted on it handling the more precise floating-point numbers.

Then, in 1941, he finished building the Z3. The following is an excerpt from this source:

It used about 1,800 relays to store sixty-four 22-digit binary numbers, ran with a clock speed of 5 Hz, and used an additional 600 relays for the calculating and control units. The operation sequence, memory storage and recall, binary-decimal conversion, and input and output were all directed by a control unit that took its instructions from perforated 35-mm movie film. A person entered numbers and operations on a calculator-style keyboard, and answers were displayed on small incandescent lamps. A drum rotating at 300 RPM synchronized all the units of the Z3; it took between three and five seconds to multiply two floating-point numbers together.

Using the Z3 to solve a problem involved first writing out the sequence of commands to perform arithmetic operations and to send data to or retrieve it from storage. This sequence was then punched into a filmstrip using an eight-hole pattern. Once this code was prepared, initial values were entered on the keyboard in floating-point, decimal form; the machine then converted the numbers into binary, carried out the sequence, and displayed the result on the lamps after converting it back into decimal notation.
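
As a rough Python sketch of that pipeline (using integers rather than the Z3’s 22-bit floating-point words, purely for illustration):

```python
# A toy re-creation of the Z3 workflow: decimal in, binary arithmetic, decimal out.
def to_bin(n, width=22):
    # Render n as a fixed-width string of binary digits, like a bank of relays.
    return format(n, f'0{width}b')

a, b = 6, 7                                        # entered in decimal on the "keyboard"
product = int(to_bin(a), 2) * int(to_bin(b), 2)    # arithmetic carried out in binary
print(to_bin(product), '->', product)              # reconverted for the display lamps
```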

Zuse wrote out calculating plans to solve small systems of linear equations, to find the determinants of matrices, and to locate the roots of quadratic equations. Because of its modest memory the Z3 could not attack the problem that at the time most concerned the Aerodynamics Research Institute, namely designing enough stiffness into airplane wings so that they did not flutter like a flag at high speed – a problem in aerodynamics similar to the one that caused the Tacoma Narrows Bridge to collapse in 1940. But the Z3 was reliable and flexible enough to persuade the Institute to grant Zuse enough money for a full-size machine, which eventually became the Z4, completed by the end of the war in 1945.

Here is a fun website where you can simulate operations on a real relay computer.

Bits of Information

By 1950 we had computers that could perform essentially any sequence of mathematical operations built from addition, subtraction, multiplication and division, by implementing the right combination of relays in series or parallel.

A great many mathematical equations can be reduced to these four fundamental operations, so a flurry of invention followed whereby individual computers were built with the sole purpose of solving one particular complex problem that had been reduced to a sequence of the four.

The problem was that these equations needed ever more data: the input was increasingly large data sets that needed operating on. Encoding data into binary had already been done by Morse, leading to other data-encoding devices like the teletype and the stock ticker machines of the 1870s. By World War II, Zuse’s Z3 used punched tape to store data in the form of whether a relay was on or off.

Binary data storage became huge in the 1950s. Engineers found ways of storing data as binary digits in the form of relay switches, two distinct voltage levels, two light intensities, two directions of magnetisation, two directions of polarisation, even the two orientations of reversible double-stranded DNA!

By the time vacuum tubes replaced relays, binary data was being stored as pressure pulses travelling down a mercury delay line. These then gave way to charges stored on the inside surface of a cathode-ray tube, or, in some circumstances, opaque spots printed on glass discs by photolithography!

Then, somewhere around the mid-1950s, all these archaic attempts to store data as binary digits were replaced by magnetic storage. We had finally become good enough at manipulating the two stable states of tiny magnets (see hysteresis loop for more information).

Later, even magnetic storage would give way to semiconductor memory, where the two values of a bit are represented by two levels of electric charge stored in a capacitor. Alongside this came optical storage, where the two values of a bit are represented by the presence (or absence) of a microscopic pit in a reflective material.

How Does All This Relate to Qubits?

We know that data can be stored as a bit in some object which, when measured, has a value of 0 or 1 – if not the numeric 0 and 1, then some sort of two-state system like the ones discussed above.

You will have noticed from the short history lesson above that the objects onto which the data is transcribed have become smaller and smaller. And this raises the question:

Are there any two-state systems that exhibit quantum behaviour?

The answer is a resounding yes!

Many physical objects exist as two-state systems that are clever enough to exhibit quantum behaviour. At the time of writing we have:

  • Light – two polarisation states,
  • Photons – two Fock states,
  • Electrons – two spin states,
  • Electrons – two charge states,
  • Atomic nuclei – two spin states,
  • Non-abelian anyons – two braiding states of excitations,
  • Josephson junctions – two charge states (the basis of the transmon),
    • also two current states, and
    • also two energy states.

It’s the last object above, the Josephson junction (the basis of what is also referred to as a superconducting qubit), that is used today in commercial quantum computing.

The Josephson junction (JJ) was first predicted by Brian D. Josephson in 1962 to exhibit macroscopic quantum phenomena which, up until that time, had been observed but dismissed as nothing but glitches. The junction only exhibits quantum behaviour when the electrons travelling through it move as Cooper pairs, which means it must be cooled to practically zero kelvin in order to work – at this point the metal goes from a state where it has electrical resistance into a superconducting state where it has none.

Josephson won the Nobel Prize in Physics in 1973 for this work.

The Physical Superconducting Flux Qubit

Like relays, Josephson junctions can also store data as binary digits. We do this by taking a micrometre-scale length of superconducting metal and curling it into a loop. Current is made to flow around the loop by applying an external magnetic field. Because the laws of nature dictate that only an integer number of flux quanta may penetrate a superconducting loop, clockwise or counter-clockwise currents develop in the loop to compensate for a non-integer external flux bias. When the applied flux through the loop is close to a half-integer number of flux quanta, the two lowest energy states of the loop correspond to clockwise and counter-clockwise currents existing simultaneously.
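
To put some numbers and symbols on this: the flux quantum is \Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15} \textup{ Wb}, and the total flux threading the loop must satisfy \Phi = n\Phi_0 for some integer n. At the degeneracy point \Phi_{\textup{ext}} = (n + \frac{1}{2})\Phi_0, screening the external flux down to n\Phi_0 (a current circulating one way) costs exactly the same energy as topping it up to (n+1)\Phi_0 (a current circulating the other way), which is why the two lowest energy states involve both current directions at once.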

This construction is known as a flux qubit. See this YouTube video for an explanation.

Summary

While the technicalities of how Josephson junctions work are outside the scope of this article, feel free to read about them in [3]. Be warned: it is rather technical.

Suffice it to say, a Josephson junction acts like a tiny relay that exhibits quantum mechanical effects, which we will learn more about in the next article.

References

[1] Claude E. Shannon, A symbolic analysis of relay and switching circuits, Electrical Engineering, Master’s Thesis, MIT (1938)

[2] Walter Isaacson, The Innovators: How a group of inventors, hackers, geniuses and geeks created the digital revolution, Simon and Schuster (2014)

[3] J. M. Martinis & K. Osborne, Superconducting Qubits and the Physics of Josephson Junctions, National Institute of Standards and Technology.

[4] Lost Generation: The Relay Computers, technicshistory.com (2017)

[5] Fredrik Andersson, Zusie – My Relay Computer.