Fundamentals and Evolution of Quantum Computing

Quantum computing and its related terms are attracting increasing attention and mindshare from a wide variety of audiences – from researchers to novice enthusiasts. In this article, QuantumGuru attempts to succinctly introduce the fundamentals of quantum computing and its key principles, such as superposition and entanglement. More importantly, we have tried to touch upon and simplify current and proposed work on quantum. Please share your feedback, your topics of interest, and how you would like us to improve.

The fundamental principles of quantum computing stem from the theory of quantum mechanics. A number of unique quantum principles highlight the clear differences between the explanations provided by conventional (or classical) physics and quantum physics. Chief among these founding principles are the concepts of superposition and entanglement, as well as the intrinsic randomness that appears in quantum mechanical measurements, i.e., the uncertainty principle. The application of these ideas to the theory of information has led to the development of quantum information theory. Within quantum information theory, quantum computing emerges alongside quantum communication and quantum sensing, among many other applications.

Figure 1. The surface of the unit sphere represents the set of possible values for a single qubit q = αb0 + βb1 with the north and south poles mapping to the conventional bit values b0 and b1. In practice, the principle of superposition maps onto a quantum physical system like the spin-up and spin-down orientations of an electron.

Conventional computing is formulated on the binary representation of data and instructions: a register element r stores a bit b that may take on a value of either b0 or b1. Quantum computing also requires a physical register r, but now the register may take the value of a quantum bit, or qubit, q. Conceptually, the qubit is defined as a normalized superposition over the exclusive outcomes b0 and b1, q = αb0 + βb1 with |α|² + |β|² = 1. This leads to a diagrammatic representation of the possible values of a qubit as the surface of the unit sphere. Whereas the opposing north and south poles of the sphere represent the classical bit values b0 = 0 and b1 = 1, every point on the surface corresponds to a possible qubit (q) value.
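The sphere picture above can be sketched in a few lines of code. This is a minimal illustration, not a quantum library: the function names (`make_qubit`, `is_normalized`) and the spherical-coordinate parameterization are our own choices, though the mapping from poles to classical bit values follows the description in the text.

```python
import cmath
import math

# A qubit q = α·b0 + β·b1 is held as a pair of complex amplitudes (α, β).
# Normalization |α|² + |β|² = 1 places q on the surface of the unit sphere.

def make_qubit(theta: float, phi: float) -> tuple[complex, complex]:
    """Map spherical coordinates (theta, phi) on the unit sphere to amplitudes."""
    alpha = complex(math.cos(theta / 2))
    beta = cmath.exp(1j * phi) * math.sin(theta / 2)
    return alpha, beta

def is_normalized(q: tuple[complex, complex]) -> bool:
    alpha, beta = q
    return math.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# North pole (theta = 0) recovers the classical bit b0; the south pole gives b1.
north = make_qubit(0.0, 0.0)            # amplitudes (1, 0), i.e. b0
south = make_qubit(math.pi, 0.0)        # amplitudes (0, 1), i.e. b1
equator = make_qubit(math.pi / 2, 0.0)  # an equal superposition of b0 and b1
```

Any point on the sphere's surface, not just the two poles, is a valid qubit value, which is the geometric content of the superposition principle.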

The superposition principle extends to more than a single quantum register element. Quantum mechanics permits multiple register elements to store superpositions collectively over multiple binary values. This phenomenon is known as entanglement. Quantum entanglement represents a form of information that conventional bits cannot reproduce. While the register elements remain independently addressable, the information they store is coupled and hence not expressed or represented piecewise. For example, two entangled registers may either both be in the b0 state or both be in the b1 state, while excluding any possibility of anti-correlated values.
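The claim that entangled information is "not expressed piecewise" can be made concrete. The sketch below, under our own naming conventions, stores a two-qubit state as four amplitudes and uses a standard separability check: arranging the amplitudes as a 2×2 matrix, the state factors into two independent qubits exactly when the determinant c00·c11 − c01·c10 vanishes.

```python
import math

# A two-qubit register stores four amplitudes, indexed by the basis
# states 00, 01, 10, 11. The correlated state from the text has weight
# only on 00 and 11.
s = 1 / math.sqrt(2)
entangled = {"00": s, "01": 0.0, "10": 0.0, "11": s}

def is_product_state(state: dict, tol: float = 1e-9) -> bool:
    """True if the state factors into two independent qubits.

    Viewing the amplitudes as a 2x2 matrix, a product state is rank-1,
    which is equivalent to c00*c11 - c01*c10 == 0.
    """
    det = state["00"] * state["11"] - state["01"] * state["10"]
    return abs(det) < tol

# An unentangled comparison: both qubits independently in equal superposition.
product = {k: 0.5 for k in ("00", "01", "10", "11")}
```

The correlated state fails the factorization test while the independent one passes it, which is precisely the sense in which entangled registers cannot be described one element at a time.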

The principles of superposition and entanglement lead to an important conceptual difference in the interpretation of register value. Although the qubit maps to a point on the unit sphere, observing the qubit through measurement results in a projection onto either the b0 or b1 value. This transition from a qubit to a bit is the infamous ‘collapse’ of the quantum state induced by measurement. The implication is that the value q itself is not directly observable. Instead, the qubit superposition state q is interpreted in terms of the probability of observing either b0 or b1. The probabilities p0 = |α|² and p1 = |β|² give the likelihood that the observed outcome will be b0 or b1, respectively.
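A short simulation makes the collapse rule tangible. This is a classical sketch under our own naming: `measure` draws the observed bit from the probabilities p0 = |α|² and p1 = |β|², and repeating the experiment many times recovers those probabilities as frequencies.

```python
import math
import random

# Measuring q = α·b0 + β·b1 yields b0 with probability p0 = |α|² and
# b1 with probability p1 = |β|²; the state then 'collapses' to that bit.

def measure(alpha: complex, beta: complex, rng: random.Random) -> int:
    p0 = abs(alpha) ** 2
    return 0 if rng.random() < p0 else 1

rng = random.Random(7)
alpha, beta = math.sqrt(0.25), math.sqrt(0.75)  # p0 = 0.25, p1 = 0.75
counts = [0, 0]
for _ in range(100_000):
    counts[measure(alpha, beta, rng)] += 1
# The empirical frequencies counts[i] / 100_000 approach p0 and p1.
```

Note that no single measurement reveals α or β; only the statistics over many identically prepared qubits do, which is why q itself is said to be unobservable.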

Classical computers use electrical signals that are either on or off to convey information as bits, the smallest unit of data on a computer, represented as two binary values: zero (when ‘off’) or one (when ‘on’). Zeros and ones are strung together to form binary codes for text and other data on classical computers. Quantum computers use quantum systems, such as electrons or photons, to represent quantum bits, or qubits, that can be in a state of 0 or 1, or an arbitrary superposition of the two, for example, an equal combination of both. Entanglement occurs when multiple qubits in superposition share a non-separable joint state. For example, two distant parties could share a state where both qubits are in a superposition of 0 and 1, but such that they are perfectly correlated — a superposition of both sides having 0 or both sides having 1. Balanced superpositions of this form are known as Bell pairs. Superposition and entanglement are the defining features that distinguish quantum information from classical information. In addition to enabling quantum state teleportation over quantum networks, they underpin the exponential speedups offered by certain quantum algorithms.
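The perfect correlation of a Bell pair can be illustrated by sampling its measurement statistics. This sketch (our own construction, not a physical model of distant parties) exploits the fact that for the state described above, the joint outcome is 00 or 11 with equal probability and never 01 or 10.

```python
import random

# Sampling joint measurements of the balanced Bell pair from the text:
# each trial yields 00 or 11 with probability 1/2 each; anti-correlated
# outcomes (01 or 10) never occur.

def measure_bell_pair(rng: random.Random) -> tuple[int, int]:
    bit = rng.randint(0, 1)  # pick the 00 branch or the 11 branch, p = 1/2
    return bit, bit          # both parties always observe the same value

rng = random.Random(0)
samples = [measure_bell_pair(rng) for _ in range(10_000)]
fraction_of_ones = sum(a for a, _ in samples) / len(samples)  # near 0.5
```

Each party alone sees a fair coin flip; only by comparing results do the parties discover the correlation, which is what makes Bell pairs a shared resource for protocols such as teleportation.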

Testing these intriguing principles of quantum information depends on the ability to manipulate individual atoms, molecules, electrons, and photons. Building quantum computers is challenging because nature does not readily provide an ideal qubit. Technology based on advanced materials physics, coupled with a great deal of engineering, is needed to isolate such systems while still controlling them with sufficient precision to perform computations. Numerous candidate systems are being explored, including low-power superconducting circuits, electromagnetically trapped ions, single-atom dopants in silicon lattices, neutral atoms in optical lattices, and vacancy defects in diamond and silicon carbide, as well as many others.

Figure 2

A key feature in all of these technologies is the use of sophisticated techniques to remove, reduce, and control errors. Alongside state-of-the-art efforts in nanofabrication and device physics, thermodynamic control is a common approach to reducing errors. This mainly consists of refrigeration and ultra-high vacuum to isolate the device as much as possible. Shielding from stray fields and radiation, such as ambient magnetic fields, is also important. On top of these coarse-grained efforts, device designers use sophisticated sequences of control pulses to negate errors. This requires a detailed understanding of device physics and is necessary to overcome current intrinsic error rates. These methods use quantum error correction schemes to mitigate decoherence, as well as fault-tolerant protocols to extend operational sequences.
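The core idea behind quantum error correction can be previewed with its classical ancestor. The sketch below implements the 3-bit repetition code with majority-vote decoding; this is a classical analogy only (real quantum codes must also protect superpositions and cannot copy qubits), and the noise level `p` is a hypothetical value chosen for illustration.

```python
import random

# Classical analogue of the simplest error-correction idea behind
# quantum codes: a 3-bit repetition code protects one logical bit
# against a single bit-flip via majority vote.

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def apply_noise(codeword: list[int], p: float, rng: random.Random) -> list[int]:
    # Flip each physical bit independently with probability p.
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword: list[int]) -> int:
    return int(sum(codeword) >= 2)  # majority vote

rng = random.Random(42)
p = 0.05  # hypothetical per-bit flip probability
trials = 100_000
logical_errors = sum(
    decode(apply_noise(encode(0), p, rng)) != 0 for _ in range(trials)
)
# Decoding fails only when 2+ of the 3 bits flip, so the logical error
# rate is roughly 3p² ≈ 0.0075, well below the raw rate p = 0.05.
```

The payoff is the same as in the quantum setting: redundancy trades extra physical resources for a lower logical error rate, and fault-tolerant protocols extend this trade across long operational sequences.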

Figure 3

Even with the future appearance of fault-tolerant Quantum Processing Units (QPUs), there is still the outstanding need to program them for practical purposes. This requires a tightly integrated system design coupled with conventional computing methods to support high-level control of quantum registers. A long history of developing quantum algorithms has provided a number of possible applications to explore (finance use cases are shown in Figure 3). Beyond factoring, there are novel algorithms for solving linear systems of equations, simulating quantum dynamics, searching unsorted databases, and teaching machines to classify and detect patterns.
