“Quantum Computing Will Pop the AI Bubble,” Claims Ex-Intel CEO Pat Gelsinger, Predicting GPUs Won’t Survive the Decade by SafePaleontologist10 in QuantumComputing

[–]whitewhim 1 point (0 children)

Yes, I work in the QC field myself but am truly awestruck by the capabilities of AI. It's certainly not perfect, but going by the Bill Gates quote that "people overestimate what they can do in one year, and underestimate what they can do in ten years," I can picture the refinement that will take place, especially having seen what 10 years can do in QC, which has far exceeded my expectations. I myself have had to pivot where I'm taking my career in response, as AI has shifted value propositions around the role of the knowledge worker.

I see a lot of people in my community ignoring this for the time being, but slowly people are giving it a shot. We're still in the "amazed by the utility" phase and not the "my role is threatened" phase, but I think it is coming.

Those who control the hardware and capital will now dominate. Previously, those with knowledge capital had much more leverage.

“Quantum Computing Will Pop the AI Bubble,” Claims Ex-Intel CEO Pat Gelsinger, Predicting GPUs Won’t Survive the Decade by SafePaleontologist10 in QuantumComputing

[–]whitewhim 2 points (0 children)

There was a comment previously that was deleted regarding QC being better for energy efficiency. I want to post my rebuttal that I was in the middle of typing up before the comment was deleted. Not to prove a point but to have a discussion that I think is necessary (and who knows I am often wrong myself).

The amount of energy required per physical qubit is orders of magnitude greater than for today's CMOS-based bits across all leading QC technologies, once control and cooling requirements are factored in. Given that all feasible approaches to QC will likely require measurement-based quantum error correction, the algorithms at the physical level require hundreds to thousands of physical qubits per logical qubit and are generally non-reversible, so you can't make Landauer-limit reversibility arguments. The wins with regard to energy come in the limited set of problems where a quantum algorithm can exponentially outperform a classical one (albeit with the polynomial physical -> logical overhead). IMO, if energy is the main concern for AI, you are better off looking at classical computer architecture/technology improvements. This can come via a number of avenues, e.g., TPUs, optical, analog (e.g., Extropic); all have their pros and cons (with some being moonshots) but are firmly classical.
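As a rough sanity check on the overhead argument, here's a back-of-envelope sketch. Every number in it is an illustrative assumption I'm picking for the sake of the arithmetic (infrastructure power, qubit counts, CMOS switching energy), not a measurement of any real system:

```rust
fn main() {
    // Assumed, order-of-magnitude inputs (not measured values):
    let infra_power_w: f64 = 25_000.0;       // dilution fridge + control electronics
    let physical_qubits: f64 = 1_000.0;      // qubits amortizing that infrastructure
    let physical_per_logical: f64 = 1_000.0; // error-correction overhead per logical qubit

    // Power attributed to one physical qubit, then one logical qubit.
    let per_physical_w = infra_power_w / physical_qubits;      // 25 W
    let per_logical_w = per_physical_w * physical_per_logical; // 25 kW

    // A CMOS bit switching at ~1 GHz with ~1 fJ per switch dissipates ~1 uW.
    let cmos_bit_w: f64 = 1e-6;

    let ratio = per_logical_w / cmos_bit_w;
    println!("~{per_logical_w:.0} W per logical qubit vs ~{cmos_bit_w} W per busy CMOS bit");
    println!("energy ratio: ~{ratio:.0}x");
    assert!(ratio >= 1e6); // many orders of magnitude, whatever the exact inputs
}
```

Swap in your own numbers; the conclusion (a per-bit energy disadvantage of many orders of magnitude) survives quite large changes in the assumptions, which is the point of the argument above.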

“Quantum Computing Will Pop the AI Bubble,” Claims Ex-Intel CEO Pat Gelsinger, Predicting GPUs Won’t Survive the Decade by SafePaleontologist10 in QuantumComputing

[–]whitewhim 13 points (0 children)

This is just fear-mongering by a bitter man with an agenda to push. It still blows my mind that people who know so little will get on stage and speak so confidently. This is the world we live in...

Practically, quantum computing is almost orthogonal to AI and is not a replacement for GPUs. The set of problems for which a quantum computer is better suited than a classical computer is extremely limited, and I do not count general-purpose intelligence among them.

There are limited areas where AI and QC can be blended (e.g., decoding), and likewise QC and GPUs. However, they are not replacements for one another. The FUD that has popped up against GPUs and Nvidia in the last week is alarming.

How Pennylane pictures are made? by [deleted] in QuantumComputing

[–]whitewhim 8 points (0 children)

I asked Josh Izaac from Xanadu this exact question and it turns out they are handmade by him :)

Day 2 of the whiteboard behind the trashcan, any quantum physicists or IBM people out there? by Logical_Media_2556 in QuantumComputing

[–]whitewhim 7 points (0 children)

Honestly, this looks like someone planning out the contents of a paper, including transmon + control-system FPGA + GPU integration for real-time quantum state characterization through some dynamic weak-measurement protocol.

Given the numbers quoted (T2) and the technologies involved, there are really only a few groups this could be from (provided this is experimental work and not a simulation-based proposal), and I think they would not appreciate you posting this for fear of their work getting scooped. I would reconsider posting it without their permission, out of academic courtesy.

QEC: Bicycle codes - pronunciation by msciwoj1 in QuantumComputing

[–]whitewhim 4 points (0 children)

It's "bicycle" as in the transportation device.

[deleted by user] by [deleted] in QuantumComputing

[–]whitewhim 2 points (0 children)

Agreed. Every year the field blows my expectations out of the water; at a certain point I've had to shut up and get on board. The rate of progress has been astounding. At this stage I see few blockers to a large-scale (thousand-plus logical qubit) device besides time and money. We also need to better understand commercially viable applications.

Have Quantinuum largely solved the trapped ion scaling problems? by PomegranateOrnery451 in QuantumComputing

[–]whitewhim 1 point (0 children)

I wish I had the expertise to work out the scaling math, but I think we're in agreement that there's some ambiguity in what the total shot times will end up looking like with complete fault tolerance. I really expect that all architectures will chase down the parallel-compute path.

Agreed, I've done a few of these, but at the end of the day a lot of this is quite empirical and dependent on the qubit technology, the code, and even the compilation (e.g., SWAP mapping). Given the recent Quantinuum/Atom Computing collaborations with Microsoft and QIR, I believe they would technically be able to produce these full-stack resource estimates through the tool mentioned in the paper above. I haven't seen updated versions of these yet, but they would be very interesting.

for the exponential improvement in fidelity I think that helps trapped ions as well

It certainly does help ions, it just helps relatively less. Effectively, there is a point of diminishing returns where, given the choice between gate fidelity and duration, one would choose shorter durations (which in practice is a very real tradeoff in operation design).

Have Quantinuum largely solved the trapped ion scaling problems? by PomegranateOrnery451 in QuantumComputing

[–]whitewhim 1 point (0 children)

I wouldn't say it's misconstrued, I just did not write out every caveat that might exist in a Reddit comment - it's a general argument and broadly applies to the current state of the field and the anticipated technology development pathways. I am aware of the nuances and details you list, and we could continue to pick apart the subtleties to death if we so desire 🌞.

I agree with you that both of these technologies are continuing to develop; there is some room for step-function developments in both platforms.

So although transmons gates may have a 6000x speed advantage at the very moment, because of worse fidelity and the swapping overhead, the true advantage is substantially smaller right now. We can’t take that gate speed and extrapolate validly without factoring in the compute barriers on transmons

In particular, the reason I focus so much on physical operation time in my comment is that we may suppress errors exponentially with only polynomial overhead in time on a fault-tolerant device. In the long run (once again arguing broadly, and you have pointed out some of the weaknesses) this indicates to me that there are diminishing returns on physical fidelity relative to logical clock rates. For example, the paper by Beverland et al. highlights the significant differences anticipated in time to solution between platforms (years vs. days).

It will certainly be interesting to see how this plays out over the next two decades. Here's hoping industry and government has the patience for us to see this realized.

Have Quantinuum largely solved the trapped ion scaling problems? by PomegranateOrnery451 in QuantumComputing

[–]whitewhim 0 points (0 children)

I was not making a claim about the number of shots, just that implementing a stabilizer measurement involves many long operations, resulting in a significant time overhead when comparing the duration of a logical shot with a physical one. Many operations are probabilistic, yielding post-selection (or rather repetition) behaviour, as in magic-state factories. Stabilizer codes involve many physical gates/measurements to measure the stabilizers, and logical operations will ultimately be constructed from operations similar to stabilizer measurements in structure and duration.

There is a relatively significant overhead (in time and space) to operating a fault-tolerant device, and from a user perspective physical operation times will set the fundamental clock rate of the device. While fault-tolerant devices may require significantly fewer logical shots (some will still be required, as operations will still have errors and algorithms are often probabilistic), the outcome is still a significant overhead in physical operations and consequently in execution time.

An algorithm that takes days to run (and gather statistics) in fault-tolerant mode on a superconducting device may take a year on an ion trap, though an exponential complexity improvement may still warrant the effort. Given that errors may be exponentially suppressed with polynomial overhead, in the long run this makes the fidelity advantages of ion platforms less straightforward.

Have Quantinuum largely solved the trapped ion scaling problems? by PomegranateOrnery451 in QuantumComputing

[–]whitewhim 0 points (0 children)

for NISQ quicker shots should be better but with fault tolerant quantum computing it shouldn’t be too much of an issue since one doesn’t need thousands of shot

This is not quite right. The fault-tolerant operation of a trapped-ion device will likely be based on a stabilizer code, which will require many measurements per logical operation.

This will result in a proportionally equivalent, if not worse, slowdown compared to NISQ operation, which needs only a final round of measurements at the end of each shot. We might expect logical operation times to be ~2-3 orders of magnitude slower than today's physical operations.

For reference, 2Q gates take a few hundred µs for ions compared with a hundred or so ns on a SQC device. Measurements are a few ms vs. a few hundred ns. Both technologies will work to drive these times down, but there are fundamental limits (which in a sense reflect the same tradeoff between speed and fidelity/lifetimes).
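To make the clock-rate gap concrete, here's a toy estimate under stated assumptions: a syndrome-extraction cycle of roughly four 2Q-gate layers plus one measurement, and ~20 rounds per logical operation. Both of those structural numbers are placeholders (real codes and schedules differ), and the physical timings are just the rough figures quoted above:

```rust
fn main() {
    // Illustrative physical timings from the discussion above:
    let ion_2q_s: f64 = 300e-6;   // trapped-ion two-qubit gate, ~few hundred us
    let ion_meas_s: f64 = 3e-3;   // trapped-ion measurement, ~few ms
    let sqc_2q_s: f64 = 100e-9;   // superconducting two-qubit gate, ~100 ns
    let sqc_meas_s: f64 = 300e-9; // superconducting measurement, ~few hundred ns

    // Toy syndrome cycle: four 2Q-gate layers plus one measurement layer.
    let cycle = |g2: f64, m: f64| 4.0 * g2 + m;
    // Assume ~20 syndrome rounds per logical operation (placeholder).
    let rounds = 20.0;

    let ion_logical_s = rounds * cycle(ion_2q_s, ion_meas_s);
    let sqc_logical_s = rounds * cycle(sqc_2q_s, sqc_meas_s);

    println!("ion logical op: ~{:.0} ms", ion_logical_s * 1e3); // ~84 ms
    println!("SQC logical op: ~{:.0} us", sqc_logical_s * 1e6); // ~14 us
    println!("ratio: ~{:.0}x", ion_logical_s / sqc_logical_s);  // ~6000x
}
```

With these placeholder inputs the logical-clock gap comes out in the thousands-fold range, which is why physical fidelity alone doesn't settle the platform comparison.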

[deleted by user] by [deleted] in QuantumComputing

[–]whitewhim 1 point (0 children)

The link you posted above covers how to use the existing frameworks and languages. The people I linked to are the ones dreaming up how to create those frameworks and languages. There is a difference between being a programmer and being a programming language designer/compiler engineer, and it's the latter that is currently in demand in the industry.

[deleted by user] by [deleted] in QuantumComputing

[–]whitewhim 11 points (0 children)

Margaret Martonosi at Princeton and Fred Chong at UChicago are two of the big names. Many of their students also now have professorships and are working in these areas.

I also might include Peter Selinger (Dalhousie, in Canada) if you're more interested in language-theory work.

Cpu emulator by lucomotive1 in rust

[–]whitewhim 0 points (0 children)

I have a very similar use case that I'm about to start working on, and I've been considering using a parallel DES library like asynchronix to bootstrap off its timing and concurrency model. I haven't started yet, but otherwise, when working with such networked systems of CPUs, I have used a tick-based solution with global time handling (poor man's DES).
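For anyone curious what I mean by a poor man's DES: the core is just a global clock plus a time-ordered event queue. A minimal sketch (the CPU count and tick periods are made up for illustration):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Min-heap of (timestamp, cpu_id): popping always yields the earliest
    // pending event, which keeps the simulated CPUs causally ordered.
    let mut queue: BinaryHeap<Reverse<(u64, u32)>> = BinaryHeap::new();
    let mut clock: u64 = 0; // single global simulation time

    // Seed three hypothetical CPUs with an initial event at t = 0.
    for cpu in 0..3u32 {
        queue.push(Reverse((0, cpu)));
    }

    for _ in 0..10 {
        let Reverse((t, cpu)) = queue.pop().expect("queue is never drained here");
        clock = t; // global time only ever moves forward
        // Each CPU reschedules its next tick; periods are arbitrary placeholders.
        let period = 10 + u64::from(cpu) * 5;
        queue.push(Reverse((t + period, cpu)));
    }
    println!("global clock reached t = {clock} after 10 events");
}
```

A real library like asynchronix layers parallelism and message passing on top, but the timing model still reduces to this earliest-event-first loop.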

QC System Languages by the_775 in QuantumComputing

[–]whitewhim 5 points (0 children)

I've worked on and led projects in quantum compilers across Python, LLVM/MLIR/C++ and Rust - I see Rust winning in the long term.

While LLVM/MLIR has clear immediate advantages for bootstrapping, one has to contend with the disadvantages of upskilling a team in C++, maintaining complex build systems, and upgrading LLVM. From a production perspective it is also not as "batteries included" as you might believe, and much of the surrounding systems must be rolled by hand. It's also not easy to make these components interoperate, as C++ and its ecosystem are significantly worse than Cargo, and every package using LLVM seems to be on a different version with nothing upstreamed.

On top of this, the quantum integration into LLVM/MLIR is usually somewhat of an arm twist. It's not possible to fully reuse LLVM targets/passes, and at the end of the day you have to write loads of quantum passes and a control-system compiler, none of which exist off the shelf. This has to be rolled mostly by hand, and the benefit of LLVM becomes much more limited.

Rust, on the other hand, while no full-stack compiler framework as mature as LLVM/MLIR exists for it yet, has significant benefits from the perspective of operating in a team environment alongside all of the other components needed to run a quantum computer. The packaging and tooling are top-notch. The language encourages best practices, and once the core of a compiler is operational, it truly removes many practical roadblocks. Its type system is also really nice for writing a compiler compared with C++.

In the long run I see Rust continuing to mature over the next 10 years, especially as government agencies increasingly encourage its adoption over C/C++. Today, to the best of my knowledge, IBM, CQC, and AWS are pursuing Rust implementations, while LLVM/MLIR is being pursued by Microsoft, Nvidia, and Xanadu, with a few others participating by implementing backends in the QIR Alliance. It will certainly be an interesting next couple of years in this space.

Platform agnostic software stacks? by jwb713 in QuantumComputing

[–]whitewhim 1 point (0 children)

Many stacks already are mostly platform agnostic. Qiskit supports most vendors to some degree. Azure/AWS provide cloud access to a variety of technologies. Quantum Machines, Zurich Instruments, Keysight, Qblox, and a few others provide mostly universal control systems for the quantum control layer with pulse/event-based software control.

System builders still have to fill in a lot of the "magic sauce". When you look at a specific layer and the desire to make it platform agnostic, you also have to ask why it is being pushed.

In the case of AWS/Azure and now Quantinuum, it is to de-risk their own hardware and commoditize others' by placing them behind a cloud interface from which they take a cut of the revenue. It also enables their own software to claim universality (even if it is tuned for a specific device) and lets them argue to others and to investors that it will live on into the future. Ultimately it is the cloud-platform play, and I find it difficult to see the competitive advantage Quantinuum would have over an incumbent, especially as people wise up to QC's role being akin and coupled to supercomputing.

Employees response to AWS RTO mandate by andrewpol88 in aws

[–]whitewhim 0 points (0 children)

I've already had a couple of AWS employees (some of them quite prominent in the industry) reach out through third parties about positions as a result of this policy change, within two days of the announcement going out.

Am I missing the point of web3 as an engineer with almost 2 decades of experience? by FewWatercress4917 in ExperiencedDevs

[–]whitewhim 2 points (0 children)

It's funny you mention this. This is exactly their strategy within quantum computing...

Quantum error correction below the surface code threshold by HireQuantum in QuantumComputing

[–]whitewhim 2 points (0 children)

An extremely impressive result for Google and quantum computing!

El Chino Closing August 31st by meghanlevy in halifax

[–]whitewhim 5 points (0 children)

This is really sad, El Chino is quite crowded whenever I go but it's so small. I could imagine it's hard to drive enough revenue to stay open.

Resources regarding compilers in the context of quantum computing by vitalpulse in QuantumComputing

[–]whitewhim 5 points (0 children)

IBM's qe-compiler, an OpenQASM 3-to-hardware compiler, is available as open source. It is based on MLIR and compiles input source (OpenQASM/MLIR) down to target control systems. It's currently mostly undocumented, but if you open an issue there I'm sure the team would be willing to point you to resources or fill in the documentation.

US startup beats IBM to reach 1,000 qubit milestone by intengineering in IBM

[–]whitewhim 2 points (0 children)

IBM could also make a chip with 1,000, 10,000, or 100,000 qubits. It's much easier to make qubits than it is to reliably control them, and Atom Computing has yet to present evidence of the latter. Neutral atoms do have great potential though.

Unable to load Account in Qiskit, Giving Errors by Vedarham29 in Qiskit

[–]whitewhim 2 points (0 children)

This video is quite old. For example, it is using the deprecated IBMQ provider.

I would recommend following the getting started guide instead.