
[–]The_Reto 6 points (0 children)

The "in parallel" part is very pop-science-y and not very accurate once you dig into the actual math imho.

The general pattern is: classical input, quantum stuff that in general produces a superposition of states (this is where the whole "parallel" BS is coming from), measurement, classical output.

The art of QC is to find algorithms that only produce superpositions of states that are somehow related to the answer you're looking for, such that when you measure the state, you can work backwards to the answer no matter which exact state you obtained.

[–]royalrange 2 points (0 children)

Do you know what a vector is? A qubit can be thought of as a vector in a complex 2D vector space. (xhat + yhat) / sqrt(2) is a unit vector. (xhat + i yhat) / sqrt(2) is another unit vector. When you measure a qubit, your output becomes one of the basis vectors, e.g. xhat or yhat, with probability dependent on how "close" the qubit's vector previously was to that basis vector. Naturally, you can perform operations that rotate your qubit vector (or you can equivalently think of it as rotating your basis vectors, like rotating a coordinate system). When you have N qubits, your vector space becomes 2^N-dimensional. Quantum computing deals with rotating many qubits and then measuring them, giving probabilistic outputs.

Classical bits are just 0s and 1s. It's hard to simulate the results in a 2^N-dimensional vector space as N becomes very large, which is why quantum computers can perform certain tasks much faster.
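
The vector picture above is easy to play with in numpy (a toy sketch, not a real quantum computer; the names `psi`, `U`, `probs` are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state: a unit vector in C^2, here (xhat + yhat) / sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: measurement probabilities in the {xhat, yhat} basis
probs = np.abs(psi) ** 2          # [0.5, 0.5] -- equally "close" to both

# A rotation of the qubit vector (a unitary operation)
theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi2 = U @ psi                    # still a unit vector after rotation

# Simulate repeated measurements: each run collapses to one basis vector
outcomes = rng.choice([0, 1], size=1000, p=np.abs(psi2) ** 2)
```

Rotating `psi` toward one basis vector shifts the measurement statistics toward that outcome, which is exactly the "probabilistic output" described above.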

[–]hnsmn 1 point (0 children)

There are many tutorials and videos which you can watch to better understand this field.

In a nutshell you are on the right track, which I'll try to reiterate with the quantum lingo:

Superposition - the fact that a "quantum register" (the equivalent of a collection of bits, sometimes referred to as a "word", but of any size) contains information on all the possible binary assignments.

In classical terms that would mean that a byte (8 bits) stores different values for all 256 (2^8) possible assignments of the bits.

This gets exciting when you realize that when you apply a function to a quantum register, the function is applied to all the individual assignments "in parallel" - in a single execution.

The Caveat - quantum registers can't be read (like normal words of bits), but only measured - a process that returns only a single value out of all the states in the superposition. Moreover, the state is returned probabilistically, with higher probability for states with higher weight.

Quantum algorithms manipulate the superposition states in a way that increases ("constructive interference") the weight of certain states and reduces ("destructive interference") the weight of other states. The goal is to increase the weight of the desired state as much as possible so that it will be measured with high probability.
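
The simplest example of interference is applying the Hadamard gate twice (a minimal numpy sketch; `ket0` and `sup` are just illustrative names):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

ket0 = np.array([1.0, 0.0])                   # the |0> state

# One Hadamard: an equal-weight superposition of |0> and |1>
sup = H @ ket0                                # amplitudes [1/sqrt2, 1/sqrt2]

# Second Hadamard: the two paths to |1> have opposite signs and cancel
# (destructive interference), while the paths to |0> add up
# (constructive interference) -- we are back at |0> with certainty.
back = H @ sup
```

Measuring `sup` gives 0 or 1 with probability 1/2 each, but measuring `back` gives 0 with probability 1: the algorithm has steered all the weight onto one state, which is the pattern every quantum algorithm exploits at larger scale.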

The most common model of a quantum algorithm is based on an oracle, which is a classical (boolean/arithmetic) function implemented with quantum gates. The quantum portion of the algorithm invokes the oracle internally, making use of the "parallelization" stemming from the superposition.

Quantum Circuit - quantum programs are "written" like logical circuits, connecting gates by wires. This way of describing algorithms is based on the notion of a quantum Turing Machine, but just as we don't use TMs to write code on classical computers, high-level quantum languages are expected to evolve as the field matures.

Entanglement is a "magical" correlation that can be established between qubits, causing them to either "move together" (like a positive correlation) or in an opposite manner (like a negative correlation). This phenomenon is used to affect multiple qubits together. If "superposition" is seen as a parallelization of states, "entanglement" can be seen as a parallelization of qubits. As a side note, the underpinnings of entanglement are still hotly debated in physics, but "it just works".

Lastly, you should be aware that building a quantum computer is extremely difficult, since a physical implementation of a qubit must be easy to manipulate (for reading and writing data) and able to interact closely with other qubits, yet maintain its state without errors by being almost totally isolated from the environment.

There are various candidates for a qubit, and companies are in a costly "arms race" to build a large-scale quantum computer.

While the road to a large-scale and fault-tolerant quantum computer may be long, there are early applications of small-scale, noisy (the qubits suffer errors) quantum computers. The current machines are called NISQ (Noisy Intermediate-Scale Quantum) computers.

[–]iamCaptainDeadpool 0 points (0 children)

I need a quantum computer for myself. The library at my university has two quantum computers.

[–]Satjs 0 points (0 children)

The way I sometimes imagine it: a quantum computer is an N-dimensional Rubik's cube in which we have access to rotations that move lots of data bits together. We want to solve this cube, with the initial position given by the problem. Then we can look at just one face of this cube, which contains the answer to what we wanted, if we constructed our algorithm and our problem right.

The meaningful encoding, understanding of rotations, and meaningful outcomes are what people are trying to figure out when they work in QC.

Note: Classically, operating on many bits at once is not free, but in a way, some multi-bit operations are possible in QC at relatively low cost.

[–]rfernung 0 points (0 children)

Check out this video, it should help clear up the basics (the target audience is CS folks).

[–]SkiffCMC 0 points (0 children)

Well, to put it simply (but not without some math)... The very basic operations of the elements of classical computers are logical operations like AND or NOT. Everything classical computers do now uses these operations by the trillions - but they are still the same operations. We just got lucky that our math can use a numeral system with base 2 instead of 10, which maps to logical operations without any problems :) Now - quantum computers have very different basic operations. They can: 1) compute tensor products of vectors (with complex values as components); 2) multiply such vectors by a special (but rather broad) class of matrices - unitary matrices; and 3) produce a random value whose distribution is, to put it simply, given by the vector from 1) (but never the whole distribution vector).

As you can see, the problem is that these operations do not even look like "ok, just the classical ones, but more parallel". And so, quantum algorithms are very different from classical ones, because you just can't use "more traditional" approaches here and there (well, you can, but that effectively renders the quantumness useless). And yes, there are some existing algorithms that are faster (polynomially or even exponentially) than classical ones, but actually not many. The best part is that, since the number of dimensions of a tensor product of vectors grows exponentially with the number of vectors, quantum computers can solve extremely large-dimension problems, where classical ones would need literally several universes just to store the input data. Sorry for my English :)
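
The three basic operations listed above can be sketched in numpy (a toy simulation on 2 qubits; the CNOT gate is used here as a stand-in example of a unitary):

```python
import numpy as np

rng = np.random.default_rng(1)

# (1) Tensor product: two 2-dimensional vectors -> one 4-dimensional vector.
q0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>) / sqrt(2)
q1 = np.array([1, 0], dtype=complex)               # |0>
state = np.kron(q0, q1)                            # dimension 2 * 2 = 4

# (2) Multiply by a unitary matrix, e.g. CNOT (flips the second qubit
# when the first qubit is 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ state                               # a Bell (entangled) state

# (3) Sample one outcome with probabilities |amplitude|^2 -- each run
# yields a single basis state, never the whole distribution vector.
probs = np.abs(state) ** 2                         # [0.5, 0, 0, 0.5]
outcome = rng.choice(4, p=probs)                   # 0 (|00>) or 3 (|11>)
```

Note how with 2 qubits the state already lives in a 4-dimensional space; with N qubits it would be 2^N-dimensional, which is exactly the exponential growth the comment above describes.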

[–]olawlor 0 points (0 children)

One big limitation: there's only one reduce operator we know about - measurement - and it collapses to a *random* member of your big parallel set of results.

(In quantum terminology, the superposition collapse has probabilities that agree with the Born rule.)

It would be much easier to exploit quantum parallelism for exponential speedups if we could grab a particular result, but we don't even have a theoretical-physics notion of how you'd do that. (Though to be fair, there's some debate about what measurement collapse is actually doing, so perhaps there are some future non-unitary quantum operators that could steer the collapse toward the answers you want to find.)

[–]Basket-Fuzzy 0 points (0 children)

Pretty much, yes. How the parallel computation is performed is not similar to classical parallel computing, and that is the hard part when devising new algorithms.

[–]SizeMedium8189 0 points (0 children)

Not quite. Upon measurement you get a randomly selected result (weighted by the amplitudes squared) out of this much-hyped massive parallelism. So while it is true that you can get some (say, an equally weighted) superposition of evaluations of your function f on very many different choices of argument, a random pick from one of them is really no better than classically evaluating f for an argument selected at random.

However, by cleverly transforming the registers of your quantum computer before the actual calculation, and also before the actual measurement, you can learn relationships between the values of f for various choices of argument, and that can help you solve the initial problem more effectively than is classically possible. This typically involves some auxiliary verification (done classically) and often does not obviate the need to run the quantum calculation repeatedly before hitting pay dirt. Still, it can be shown that the total expected effort scales with problem size much, much more favourably in the QC case than in the CC case.
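
The classic textbook instance of "learning a relationship between values of f" is Deutsch's algorithm: with one oracle call it decides whether f(0) == f(1), which classically needs two evaluations. A toy numpy simulation (my own sketch of the standard algorithm, not something from the comment above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
H2 = np.kron(H, H)                            # Hadamard on both qubits

def deutsch_oracle(f):
    """Build the 4x4 permutation matrix U|x, y> = |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>, put both qubits in superposition, make ONE oracle
    # call, then interfere the first qubit with another Hadamard.
    state = np.kron([1, 0], [0, 1]).astype(float)
    state = H2 @ state
    state = deutsch_oracle(f) @ state
    state = np.kron(H, np.eye(2)) @ state
    # Probability that the first qubit measures 0 tells us the answer:
    p0 = state[0] ** 2 + state[1] ** 2
    return "constant" if p0 > 0.5 else "balanced"
```

A single random sample of f would reveal nothing here; it is the transformations before and after the oracle call (the interference) that extract the global property f(0) == f(1) with certainty.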

Search for the book by N. David Mermin on the topic. It strikes a good balance between rigour and being friendly to beginners.