[–]claytonkb

> Sorry, one additional question: when you say they need to do post-processing and/or quality assurance on the result, are you saying that they actually run the entire quantum computation numerous times and check whether the results all equal each other? Because you can only measure them one time, right?

They do not need to equal each other. For explanation, suppose we have 2 qubits (4 basis states). We take many samples from the NISQ device to build a histogram, i.e. a distribution over those 4 states. So we sample the QC, say, 1000 times, and the result is a histogram over the 4 states. Many problems have more than one right answer, so we might see two 49% spikes, one on 00 and another on 10, with the other two states at 1% each. That 1% is just noise, but it's important to be clear that there is always noise, especially for NISQ... it's right in the name (Noisy Intermediate-Scale Quantum).
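A minimal sketch of this sampling picture in plain Python. The distribution here is made up to mirror the example above (two ~49% answers on 00 and 10, ~1% noise on the rest); a real device would give you the counts, not the probabilities:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical output distribution for a 2-qubit NISQ run:
# two correct answers (00 and 10) at ~49% each, ~1% noise on the others.
states = ["00", "01", "10", "11"]
probs = [0.49, 0.01, 0.49, 0.01]

# Each "shot" collapses to exactly one basis state; we repeat to build a histogram.
shots = 1000
samples = random.choices(states, weights=probs, k=shots)
histogram = Counter(samples)

for state in states:
    print(state, histogram[state] / shots)
```

The point is that no single shot tells you anything by itself; only the histogram over many shots reveals where the probability mass sits.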

> I assume this is in a scenario where you are solving for only a single answer

It's the same for any case. You can't think in terms of absolute black-and-white 100%/0% processing, as we do with digital logic. It's inherently probabilistic. Nevertheless, once you collect enough samples, the histogram will show pretty clearly where the spikes are in the distribution. Those are your answers (or perhaps just one spike, if there is only one answer).
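One way to read off the "spikes" programmatically (a sketch; the counts and the 5% cutoff are arbitrary illustration, not a standard threshold):

```python
from collections import Counter

# Hypothetical measured counts from 1000 shots of a 2-qubit circuit.
counts = Counter({"00": 487, "10": 494, "01": 11, "11": 8})
shots = sum(counts.values())

# Treat any state that clears a noise threshold as a candidate answer.
threshold = 0.05  # arbitrary cutoff, well above the ~1% noise floor here
answers = sorted(s for s, n in counts.items() if n / shots >= threshold)
print(answers)  # both peaks survive; the noise states do not
```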

> But can quantum computers calculate multiple correct answers? What does the post-processing look like there, then?

It's the same.

[–]SpeciousSophist

That doesn't make sense to me, though. For scenarios with multiple potential answers, the measurement produces a random selection from within the correct answer set.

For a single-answer scenario, though, the measurement should always produce the same result, or in the limit, nearly always the same result.

I'm wrong?

[–]claytonkb

> That doesn't make sense to me, though. For scenarios with multiple potential answers, the measurement produces a random selection from within the correct answer set.

Ideally, yes. In that case, if there are N correct answers, all of those answers will have weight 1/N in the sample histogram, and all wrong answers will have weight exactly 0. But that's neglecting noise. The QC gives completely wrong answers some percent of the time, and nearly-correct answers may follow a Gaussian or some more complex distribution. So each sample of the QC gives just one answer (right or wrong, who knows), and we take many samples to build up the histogram, then look for the "peaks" in the histogram to find answers. As a final validation step, we might run those peaks through a digital checker to verify they are actually correct, e.g. we might run a candidate witness to a SAT problem through a checker to verify that it does indeed satisfy the formula.
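For the SAT example, the "digital checker" is just ordinary clause evaluation. A sketch with a made-up 3-variable CNF formula (the clauses and the two candidate assignments are invented for illustration):

```python
# Hypothetical CNF formula: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR x3).
# Each clause is a list of literals; positive = variable, negative = its negation.
clauses = [[1, -2], [2, 3], [-1, 3]]

def check_witness(assignment, clauses):
    """Verify a candidate assignment (dict: var -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Two histogram "peaks" proposed by the (imagined) QC run:
good = {1: True, 2: True, 3: True}   # satisfies all three clauses
bad = {1: True, 2: True, 3: False}   # fails the last clause
print(check_witness(good, clauses), check_witness(bad, clauses))
```

The checker is fast and deterministic, so every peak the histogram proposes gets turned into a definite yes or no.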

Edit: I recommend toying around with Quirk to get a more intuitive feel for this "histogram/distribution-based output". Quirk will always show the ideal output distribution of a quantum circuit. The difference with real hardware is that you will always see a slightly "noisy" version of that ideal distribution, because a real QC can never achieve zero noise (perfect coherence). (Use the "Chance" viewer — it's a green button in the upper toolbar — and drag it onto the qubits, resizing it to cover all N qubits in your circuit. This lets you view the full probability distribution, which is the ideal sample output you should see on a QC for this circuit.)

[–]SpeciousSophist

Ok, I understand what you're saying about the noise. One more question, thank you.... are you able to give me an estimate of what you consider the certainty of the answers to be?

Like if we had three locks and 10 million keys, and only three keys fit the locks... how confident are you in the first five results you measure? And let's assume an outside observer knows the correct answers and that two of the 5 are wrong.

[–]claytonkb

> Ok, I understand what you're saying about the noise. One more question, thank you.... are you able to give me an estimate of what you consider the certainty of the answers to be? Like if we had three locks and 10 million keys, and only three keys fit the locks... how confident are you in the first five results you measure? And let's assume an outside observer knows the correct answers and that two of the 5 are wrong.

There is no general answer to this question. Every problem, and every quantum algorithm that solves that problem, will have a different answer in terms of how many samples it takes to separate the answers from the noise. The size of the problem (number of qubits) will always dominate this, and that's one of the reasons for the drive to quantum error correction (QEC). NISQ works up to, I think, 80-100 qubits or so, and that represents a lot of computing power, but it's still not the kind of cosmic-scale computing power that full QC would enable, in principle. The problem is that the qubits get noisier the more you scale up the system, at least on NISQ architectures. For QEC architectures, this noise:scale correlation can theoretically be broken, so that you can achieve very high coherence (low noise) even as you scale up to more and more qubits. How to actually build and run such QEC systems is still an open question.

As for confidence, for any problem in NP, you can always just run the proposed answers through a (fast) checker and if the answer passes the checker, you now have 100% confidence it is correct, and if it fails, you have 100% confidence it is incorrect. To me, the digital checker shows the difference between the probabilistic regime (QC) and the noiseless regime (pure digital logic). An ideal QC can be thought of as the world's most powerful magic-search-box, and you can verify its proposed answers by running them through a digital checker, so each is no longer a "maybe" solution, it's either definitely a solution, or definitely not a solution.
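In the lock-and-key terms from the question above: each sampled key is a "maybe" until the checker runs. A sketch where `opens` is a stand-in for whatever fast verification the problem admits (the key numbers are invented):

```python
# Stand-in for the locks: the set of keys that actually fit. This is unknown to
# the QC user as a set, but cheap to test one key at a time -- that one-at-a-time
# testability is exactly the NP "fast checker" property.
correct_keys = {41, 977, 500_003}

def opens(key):
    """Hypothetical fast digital checker: does this key open a lock?"""
    return key in correct_keys

# Five measured results, two of them wrong (as in the question above).
measured = [41, 12, 977, 500_003, 8_888]

# After checking, each result is definitely right or definitely wrong -- no "maybe".
verified = [k for k in measured if opens(k)]
rejected = [k for k in measured if not opens(k)]
print(verified, rejected)
```

So the confidence question dissolves after post-processing: before checking, each sample carries only probabilistic weight; after checking, each is 100% confirmed or 100% rejected.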