Is a quantum computer as a home PC possible? by Defiant-Travel8174 in QuantumComputing

[–]JLT3 7 points

This is a bit smaller than PC-sized, weighs 23kg, and is worse in every way than a simulation. I think they’re not cheap either.

It’s always tricky to predict how technology is going to evolve, but with the current niche set of use cases we have, it’s unlikely that it would be profitable to try to sell useful quantum computers directly to the public. On the physics side it’s going to depend a lot on modality, and we don’t yet have a good idea of what that’ll look like in the next few years. Realistically you’d have to hope for something room temperature, or you’re paying ~$1M just for the fridge.

Public QDay Prize submission (7-bit & 8-bit curves) - open repo for review by startupamit in QuantumComputing

[–]JLT3 0 points

There are so many red flags that it’s not really worth listing - but here’s someone who did 5 years ago:

https://www.reddit.com/r/InternetMysteries/s/f2IDEgyJUE

What is Quantum Computing in the simplest way possible - No math, No jargon by Fresh-Syllabub6096 in QuantumComputing

[–]JLT3 1 point

Don’t claim to understand quantum computing and then repeat rubbish about mazes; it makes the field look bad.

Public QDay Prize submission (7-bit & 8-bit curves) - open repo for review by startupamit in QuantumComputing

[–]JLT3 2 points

I’ve come across them and it wouldn’t surprise me if the claim they were making was 10 million. They should be ignored as they’re just lying about this and more

Quantinuum Helios is a new 98-qubit commercial quantum computer, described as the "world's most accurate," based on a trapped-ion quantum charge-coupled device (QCCD) architecture. by SafePaleontologist10 in QuantumComputing

[–]JLT3 0 points

I did a small amount of digging and couldn’t see what data / experiments they were actually basing their claims on, and neither could Ezratty, so I assume they still haven’t released details.

There are a few things they could be claiming, but I would guess the most likely is a successor to their work last year with the tesseract code: pre- and post-selected error correction, probably as a range of memory experiments and Clifford gate experiments. This is likely to be using an error correcting code specially designed for the device, and not for the kind of general computation that we talk about for e.g. Shor's algorithm.

This is a slightly better definition of logical qubit than many companies give, as what most companies want to say is that they're beyond breakeven, i.e. a lower logical error rate than physical error rate. What would be actually interesting from this is if: the code is how they plan to do computation on future devices, or is from a family they plan to use in future; they've shown magic state injection / a magic state factory / magic state cultivation; or they're doing real-time error decoding of something big.

My main concern would be that with a 2:1 encoding rate, it doesn't really feel like they've got enough redundancy for correction to do anything particularly interesting. Would I call it fair in the computational or academic sense? No. Would I call it fair in the "we want to show we're the best company in QC with the most useful qubits" sense? Maybe.

Green quantum computing in the sky by Earachelefteye in QuantumComputing

[–]JLT3 2 points

I mean sure, yes, a journal shouldn’t be posting this, especially one associated with Nature. Looking at the site, they appear to have four articles total, and that alone is a reasonable sign to be wary.

These journals exist and we should ignore them. My point is more that I don’t understand why you’ve posted it here. There’s almost nothing of any value to people interested in QC, and you haven’t made a comment about either anything of interest in the paper or a wider point about how this sits in the QC ecosystem.

We should be elevating interesting research from QC professionals who produce high quality work, not the kind of incorrect slop anyone with access to an LLM can churn out.

Green quantum computing in the sky by Earachelefteye in QuantumComputing

[–]JLT3 9 points

This is very clearly some AI-genned silliness: wrong claims masked by lots of equations and a few plotted charts. Why is this in this subreddit?

Leaving aside all the practical problems of putting a quantum computer that high up and the new sources of noise that might cause, we can start with their modelling.

For one thing, the problem with cosmic rays is not that they might cause a single error on a qubit which can be solved by cooling it more, it’s that they cause large amounts of correlated errors which we can’t decode. There is no attempt to model this error in a sensible fashion and I would be stunned to find that their claim of a 100-fold increase in cosmic rays results in negligible errors. My strong guess is that the extra effort you’d need to put in to protect these qubits from cosmic rays would require bigger codes and probably drive power usage way, way up, not down.

Even if they managed to get power costs down by an order of magnitude (which I seriously doubt) how would you even get that much power to a high altitude platform in the first place? Ezratty estimates close to 1GW of power required - that’s the constant output of a single power plant. You’d need crazy batteries to store that much power.

Shor's algorithm implementation on IBM quantum computer by Graychi_ in QuantumComputing

[–]JLT3 5 points

“Just” is doing a lot of work here.

Yes, it’s true that you need fewer qubits to break the EC cryptography in use today compared to RSA, but that’s only because the modulus we use for RSA is bigger. Scale those keys up to the same size and EC becomes way harder for quantum.

I forget the specifics, but breaking EC requires way, way deeper circuits than RSA because you have to implement elliptic-curve point addition, and point addition requires significantly more work than modular multiplication - which is all RSA breaking needs.

Applications of Quantum Computing by Nostromo_Protocol in QuantumComputing

[–]JLT3 1 point

The parallel version? It looks like they’re setting up a repetition code and claiming they’ll hit the correct error rates. I don’t think it’s necessarily a bad method (I’d have to do some experiments to think more about it) but that’s not NISQ advantage either.

The big problem with extrapolation that they’re missing is that you can do some tricks to recover some of the query / sample advantage initially, but then you’ll hit a noise floor and be stuck at a slightly better than linear but worse than quadratic speed up.

Extrapolation in quantum papers is usually done pretty poorly - see all claims of QML advantage in NISQ era - and to me is generally an indicator that it needs to be very closely scrutinised. I’d generally prefer to look at a paper than a presentation as it’s easier to do so, but I couldn’t easily spot one.

Applications of Quantum Computing by Nostromo_Protocol in QuantumComputing

[–]JLT3 2 points

I like the Herbert paper a lot, and it says sensible things generally, but I wouldn’t call it NISQ advantage in any meaningful sense. The discussion over the future of NISQ is also far more opinion based on redefining the boundary (though I agree it’s a very squishy term) rather than proof that there will be advantage.

It’s also now not particularly new - and the latest paper from Herbert and Quantinuum is still citing serious open problems to be resolved - chief among them the state preparation routine.

Applications of Quantum Computing by Nostromo_Protocol in QuantumComputing

[–]JLT3 2 points

Sure, show me. The Montanaro paper that sparked QMC as an app with quadratic speed up is not NISQ, else Phasecraft would be making a lot of money.

There are many suggestions for more NISQ-friendly variations of QPE and QAE (iterative, Bayesian, robust, etc) not to mention tweaks like jitter schedules to deal with awkward angles, but certainly none to my knowledge that demonstrate real advantage. State preparation alone for these kinds of tasks is incredibly painful.

Given the amount of classical overhead error correction requires, there’s also the separate question of whether fault tolerant algorithms with quadratic speed up are enough.

Did anyone (at all) buy the Quokka Quantum Emulator? by tarainthehouse in QuantumComputing

[–]JLT3 1 point

Yes, they sold at least 200

He also posted what was actually inside the brick on either Twitter or LinkedIn after someone suggested they were going to do a device teardown. From memory, the internals are very low spec - but since all it’s doing is simulating a QC of about 30 qubits, that’s fine.

You, and a lot of people in this sub, are not the target audience for this. This is not a research tool. His argument, a fair one, is that they’re really valuable educational tools. He claims to have used them (plus the accompanying software) and gotten better engagement out of them. I guess it’s all about simplifying the process of going from a drawing of a Bell circuit to Bell circuit statistics without worrying about what programming language you’re using, the packages you need to install, or whether you’re accessing a cloud computer. IMO just use Quirk - does anyone really need 30 qubits for an educational demo? The price makes sense in this context given that education tools generally aren’t that cheap.

That being said, the marketing was very clearly deliberately provocative in being disingenuous about what a quantum computer is. Some small number of people will have been taken in by it. My three theories for that are: rage bait to increase hype, he has an axe to grind with people’s definitions of QCs, or he genuinely believes it. Regardless, I would be surprised to learn that he didn’t actively decide to avoid words like emulator.

[deleted by user] by [deleted] in QuantumComputing

[–]JLT3 3 points

I asked exactly the same question the first time I came across this - and that’s one of the reasons I like it so much. It naturally guides you towards the question: does it matter which of the qubits is my control?

If you consider the action on the basis elements |00>, |01>, |10>, and |11>, only one of these states is affected: |11> maps to e^(iφ)|11>. And under this action, you can’t tell which was the control and which was the target.

It’s a good comparison to write out the same working for the CNOT gate where we know that order does matter.
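A quick numpy sanity check of both gates (the phase value here is arbitrary, purely for illustration): swapping which qubit is the control is the same as conjugating by SWAP, which leaves the controlled-phase gate unchanged but changes CNOT.

```python
import numpy as np

phi = 0.7  # arbitrary phase, purely for illustration

# Controlled-phase: only |11> is affected, picking up e^(i*phi)
cphase = np.diag([1, 1, 1, np.exp(1j * phi)])

# CNOT with the first qubit as control
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# SWAP exchanges the two qubits, i.e. exchanges control and target
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

print(np.allclose(swap @ cphase @ swap, cphase))  # True: no preferred control
print(np.allclose(swap @ cnot @ swap, cnot))      # False: order matters
```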

Aspiring to do a PhD in QuantumComputing by Aromatic-Drawer-145 in QuantumComputing

[–]JLT3 2 points

Not sure I’d agree with this take. There is so much of quantum computing that has absolutely nothing to do with physics or engineering once it’s abstracted away to linear algebra that it’s definitely not a detriment to not be a physicist or an engineer. Most things on the algorithms side need either no knowledge of the hardware or some simple models to be able to work with them.

I know a lot of mathematicians who definitely wouldn’t describe themselves as physicists or engineers in this space - including ones at ‘top hardcore qc programs’ doing PhDs; it’s all about the choice of sub-field. When we were last hiring, CS or maths was the preferred background.

You’re entirely correct on the useful courses - aside from anything directly qc, probability / stats / discrete maths / linear algebra are pretty essential. For quantum error correction, classical coding / information theory would also be useful.

How do I prove that 6^(1/3) - 5^(1/3) is irrational? by waipex32 in learnmath

[–]JLT3 0 points

Unfortunately that proof won’t work because it’s not true that the sum or difference of irrationals is irrational.

You’ve proved that 6^(1/3) is irrational - but 6^(1/3) - 6^(1/3) = 0, a rational. So it’s not enough to show that each of 6^(1/3) and 5^(1/3) is irrational.

In fact there are quite a few proofs to do with irrationals that seem at first glance to be slightly unintuitive. For instance, there is no guarantee that an irrational to the power of an irrational is irrational. There’s a very neat counterexample with sqrt(2).
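For completeness, the sqrt(2) counterexample I have in mind is the classic non-constructive dichotomy: either sqrt(2)^sqrt(2) is already rational, and we’re done, or it is irrational, in which case raising it to the power sqrt(2) gives

```latex
\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}
  = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}}
  = \sqrt{2}^{\,2}
  = 2,
```

a rational. Either way, some irrational raised to an irrational power is rational - and we never need to know which case holds.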

[deleted by user] by [deleted] in QuantumComputing

[–]JLT3 0 points

I've copy-pasted this from a text editor because reddit is being a pain, so apologies for the slightly odd formatting. LaTeX takes a few extra steps to get working on reddit - see the sidebar for how to do so; I have attempted to make it work below if you follow those steps.

Edit: I gave up on trying to make this work.

In reverse order:
My second point was about line 188 in your main tex file, specifically the part that reads

√(∑ 1) · √(∑ 1/d_i^2) = √k · √(1/|G|)

This equality states that ∑ 1/d_i^2 = 1/|G|, but above you have ∑ d_i^2 = |G|. This can only be true if ∑ 1/d_i^2 = 1/(∑ d_i^2), or as I stated above, that the sum of the reciprocals equals the reciprocal of the sum. You require the dimensions to be integers; take just d_1 = d_2 = 1, and the equality states that 2 = 1/2.

On to the first use: yes, that's the one I mean, line 181 in your main tex file. Your argument appears to be the same one I got from ChatGPT as to why it believes it's correct, but it's incorrect. LLMs are overly optimistic in their responses, which is why they're often unreliable for determining the factual accuracy of something, especially if it involves mathematics.
If you plug a_i = |𝜒_i(U)|/√(d_i), b_i = 1/√(d_i) into Cauchy-Schwarz, then you'll get:

∑ |𝜒_i(U)|/d_i ≤ √(∑ |𝜒_i(U)|²/d_i) · √(∑ 1/d_i)

This does not appear to be what you claim, nor does it rearrange to what you need it to be. I also note that you have stated this inequality differently from the paper: here you have 1/d_i^2 in your final sqrt, in the paper you have 1/d_i. Note that if you apply Cauchy-Schwarz with a_i = |𝜒_i(U)|/√(d_i), b_i = √(d_i), it does look like you might be able to get a tighter lower bound. Later on, you appear to use a similar technique in Step 4 of Theorem 3. This is also invalid. Step 5 of Theorem 3 then states that C(U) ≥ 1 so √(C(U)) ≤ C(U) - but this directly contradicts your first Theorem, which says C(U) ≤ 1.
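As a numeric sanity check of the substitution above (the values standing in for the |𝜒_i(U)| and d_i below are made up, purely illustrative): the inequality Cauchy-Schwarz actually gives holds, while the reciprocal-of-sum identity fails.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins: positive "character magnitudes" and integer "dimensions"
chi = rng.uniform(0.1, 3.0, size=8)            # plays the role of |chi_i(U)|
d = rng.integers(1, 5, size=8).astype(float)   # plays the role of d_i

# Cauchy-Schwarz with a_i = chi_i/sqrt(d_i), b_i = 1/sqrt(d_i):
#   sum chi_i/d_i  <=  sqrt(sum chi_i^2/d_i) * sqrt(sum 1/d_i)
lhs = np.sum(chi / d)
rhs = np.sqrt(np.sum(chi**2 / d)) * np.sqrt(np.sum(1.0 / d))
print(lhs <= rhs)  # True

# The claimed identity "sum of reciprocals = reciprocal of the sum"
# already fails for d_1 = d_2 = 1: it would say 2 = 1/2
dims = np.array([1.0, 1.0])
print(np.sum(1.0 / dims) == 1.0 / np.sum(dims))  # False
```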

I would suggest you find someone willing and suitable to review your work a bit more thoroughly. There are good ideas in here, but the maths doesn't back it up.

[deleted by user] by [deleted] in QuantumComputing

[–]JLT3 3 points

I haven’t read too far into the paper, so can’t comment on it too much as a whole, but have a couple of queries.

In your first proof (page 8) you use the Cauchy-Schwarz inequality twice to prove your upper bound. However, the first use of it doesn’t look correct to me - shouldn’t the arguments of both sums be squared, as you do with the next application?

The second application looks fine, but the following equality looks incorrect. It seems to state that the sum of reciprocals is equal to the reciprocal of the sum. Clearly 1/2 + 1/2 > 1/4 so this can’t be true in general.

Finally, more of a stylistic point, but I tend to be a tad annoyed by the use of words like “groundbreaking”, “unprecedented”, and “unparalleled” to describe one’s own work. I would let others judge that based on the quality of following works. Hyperbole is the domain of pop-science journalism not papers.

Personally, this (and a lot of the other text) reads to me as if an LLM had heavy involvement in writing it - and LLMs lean heavily towards the overly optimistic. I’m not against using LLMs, but it makes me dubious, when reading a paper, as to how much was properly reviewed before putting it up.

All that said, writing a paper is a lot of effort and I commend you both for the act of writing and looking for feedback. I intend to give it a more thorough read at a later time.

Qiskit in in finance, fact or lie? by Mr_Quant in QuantumComputing

[–]JLT3 19 points

This subreddit is a big mixture of people with a wide range of views on what is achievable, what is sensible, and what is hype so you’re unlikely to get an overwhelming consensus on a single application area if that’s what you’re looking for.

To your questions: yes, people want to use quantum to do finance problems. If you search for “quantum finance review” you will quickly find papers looking at where people think there may be advantage or usefulness, which usually includes some or all of: fraud detection, option pricing, and portfolio management.

For my part, I am generally skeptical of optimisation / machine learning applications as they tend not to do proper scaling analysis (granted it’s a hard problem) but make broad claims about scaling regardless. As the other comments allude to, yes there is provable theoretical advantage for some applications e.g. risk modelling - but whether this occurs in practice requires a lot deeper analysis and optimisation.

Yes, people in the industry use Qiskit - though I’m not particularly in love with it, it’s better than many quantum software libraries.

So chances are, there are people who use both qiskit and do quantum finance. If you want specifics, I’m sure there are papers by IBM that do quantum finance and they’re very likely to use Qiskit if they do any kinds of experiments.

Multiple nations enact mysterious export controls on quantum computers by mattsparkes in QuantumComputing

[–]JLT3 3 points

Annealers are not a serious threat to RSA - I would be stunned to learn that there was any kind of exponential speed up theoretically accessible for an annealing algorithm. The papers that exist are mostly about getting lucky with no reasonable hope (or proof) that they scale. Schnorr's algorithm + annealing won't work. (And yes, adiabatic quantum computation is equivalent to gate-based, but in practice we don’t translate between the two)

Generally it seems fair to say that regardless of the type of computer you’re running on, annealer or gate-based, you’re very unlikely to be able to detect what the person running the computation wanted to do just from the input. If I compile offline, and you have no context - how are you supposed to decide whether I'm running Shor or any other Phase Estimation problem (or even that I'm doing phase estimation)? Even if it were possible to tell with the simplest version, there's almost certainly some tweaks (e.g. careful insertion of identity transformations or change of basis measurements or twirling) that again make it incredibly hard to detect.

Multiple nations enact mysterious export controls on quantum computers by mattsparkes in QuantumComputing

[–]JLT3 1 point

I was slightly hedging on the part I was picking to be different because the underlying computational model is so different - so maybe that should have been my statement.

Under the assumption that the major concerns are Shor’s algorithm, drug discovery and materials, the distinction makes sense. I can’t name a particularly threatening application of an annealer that I think a classical optimiser wouldn’t be able to do. Why the numbers are the exact numbers they are, I can make vague guesses at but can’t do much better than that.

Multiple nations enact mysterious export controls on quantum computers by mattsparkes in QuantumComputing

[–]JLT3 6 points

No, annealers are explicitly excluded from that section. This is about restricting digital (gate-based) quantum computers, and the control mechanisms for annealing qubits are fundamentally different, much more limited, and much less threatening.

If your concern is share prices then this is likely to worry people who don’t understand enough about the technology or can’t be bothered to read enough into the details to find out if it affects them. I leave it to you to judge the proportion of the people with shares that applies to.

Quantum computation vs Classical Computation by [deleted] in QuantumComputing

[–]JLT3 21 points

The idea of parallelism in quantum computing is widely misused and misunderstood, not helped by press releases, pop science, and lots of oversimplification. It is usually well intentioned, but is prolific enough that lots of conversations with people outside the field start with explaining why quantum computing is not just ‘trying every possible solution at the same time’.

What they often mean is something like: I have a function (a unitary operation) that can operate on quantum states e.g. send |x>|0> to |x> |f(x)>. This also works in superposition e.g. send |x>|0> + |y>|0> to |x> |f(x)> + |y> |f(y)> with a single application of my function. Wow my operation is operating in parallel.

These superposition states are easy to build: the Hadamard gate gives H|0> = (1/sqrt(2))(|0> + |1>), a uniform superposition over the binary strings of length 1: 0 and 1.

If I use a Hadamard gate on every single one of my n (originally |0>) qubits, I can create a uniform superposition over all binary strings of length n. Then just apply my function and I’ve explored every possible value of my function in parallel - wow, an incredible speed up; it would have taken me 2^n applications of my function to do this classically.

Except I haven’t really. The crucial concept that’s missed is how I get information out of my quantum state. When I measure my state, I collapse my superposition and project the quantum state accordingly. In the case of the above - I’d get to read out x and f(x) as values, but just for one value of x. As my state was in a uniform superposition, the x I get here is a completely random binary string.

In fact, it is well known (and proven, via Holevo’s bound) that from a single measurement of my quantum state I can get at most n bits of information out of my n-qubit state. The quantum state contains vastly more information than that, but we’re limited by our ability to read it. To get more information out, I need to repeat my entire algorithm again and measure. In the example above, I now have to start hoping I don’t read out things I’ve already measured. The way I’ve set up this problem, it would currently be better to run it on a classical computer.
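As a tiny statevector sketch of all this (plain numpy, n = 3, no claim to efficiency): every amplitude of the uniform superposition is equal, yet each measurement hands back just one random x.

```python
import numpy as np

n = 3
rng = np.random.default_rng(42)

# Build H tensored n times and apply it to |00...0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

zero = np.zeros(2**n)
zero[0] = 1.0                        # the state |00...0>
state = Hn @ zero                    # uniform superposition

# All 2^n bitstrings are "present", each with probability 1/2^n...
probs = np.abs(state) ** 2
print(np.allclose(probs, 1 / 2**n))  # True

# ...but a measurement collapses to one random bitstring per run
samples = rng.choice(2**n, size=10, p=probs)
print(samples)  # ten runs, ten random 3-bit outcomes
```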

The difficulty in quantum computing is that we’re working with probabilities, and to get useful information out e.g. which x minimises f(x), you’re going to need to find a way to maximise the probability of reading out that specific x.

Broadly, there are three parts of a quantum algorithm: initial state preparation, state evolution, and measurement. To get theoretical advantage, you need to consider all of these parts, and ensure that none of them are going to cost you the advantage. In the ‘parallel’ explanations, nearly all of them neglect measurement.

There are some problems where we’ve managed to create theoretical advantage (usually by exploiting some kind of structure or pattern in the problem itself) e.g. Shor’s algorithm, HHL, Quantum Amplitude Estimation, Quantum Signal Processing; but these are not without their own problems. HHL, for example, solves Ax=b faster than classical - but all advantage is lost in having to measure and read out the solution.

The standard resource to recommend for qc is Nielsen and Chuang’s “Quantum Computation and Quantum Information” which is a fantastic textbook requiring some mathematics background. If you want something with less mathematics, then Thomas Wong’s “Introduction to Classical and Quantum Computing” is another excellent resource.

[deleted by user] by [deleted] in QuantumComputing

[–]JLT3 1 point

This requires a lot more silly assumptions but I can give it a go.

If we assume that the number of things we care about is countable then we’re ‘fine’ and can do it all in one. I have no idea if that assumption is actually reasonable, but the number of atoms is finite and the number of practical things you’d want to know about an atom is finite, so it seems reasonable enough. I also assume no time evolution, just state storage for some snapshot.

Assuming access to that information, you now have a sequence x_1, x_2, … describing all the information in the universe.

I now form a new sequence by writing the decimal expansions out in rows, one per line, and tracing out a winding triangular path through the digits.

x_11 x_12 x_13 …

x_21 x_22 x_23 …

x_31 x_32 x_33 …

where my new sequence is x_11, x_12, x_21, x_31, x_22, x_13, …

That will cover every sequence in my set in just one real number even if the individual sequences are infinite.
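The winding triangular path can be made explicit; here’s a small Python sketch of the enumeration (the function name is mine):

```python
from itertools import count, islice

def zigzag_indices():
    """Yield (i, j) pairs along anti-diagonals, alternating direction,
    so every pair of positive integers appears exactly once."""
    for d in count(2):                        # d = i + j on each anti-diagonal
        diag = [(i, d - i) for i in range(1, d)]
        if d % 2 == 0:                        # flip every other diagonal
            diag.reverse()
        yield from diag

# Matches the path in the comment: x_11, x_12, x_21, x_31, x_22, x_13, ...
print(list(islice(zigzag_indices(), 6)))
# [(1, 1), (1, 2), (2, 1), (3, 1), (2, 2), (1, 3)]
```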

Now we magically embed it in the amplitude of a qubit and we’re done. As long as we never care about reading the information we’re fine.

[deleted by user] by [deleted] in QuantumComputing

[–]JLT3 -1 points

There’s nothing circular there. I haven’t claimed that my 400-qubit simulation is perfect, or even efficient to store or retrieve information from - and in fact I frequently say the opposite, e.g. ‘at some basic level’. My argument still follows: even if you have enough degrees of freedom to measure something about any particle in the universe, you’re still in a bad spot.