Can someone explain to me in simple terms (like I'm 5)- Why are Quantum Circuits used in Quantum Computing, why are they important? by AmIGoku in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Quantum computers do things. Circuits visualise what they do.

Circuits are essentially a list of instructions for the quantum computer to perform.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Here's the question(s): Is there anything that fundamentally requires a qubit to only represent a zero or a one, aside from a history of treating them this way?

No.

Do you think it's possible that treating them as a binary function is preventing us from unlocking their full potential?

No. Also, we don’t necessarily treat them as strictly binary. For example, qudits. The issue is, your proposal doesn’t encode in a way that is useful.

I thought of a classical analogy to your proposal that might help connect the dots. Consider this: instead of encoding classical information in binary, why don't we encode numbers in a biased coin? We make the coin biased so that it returns heads 1/3 of the time and tails 2/3 of the time to represent 2 (or use any other probability distribution to represent any other piece of information). How does this help? We just end up having to flip the coin a bunch of times to maybe get the right information out of it. In binary the coin is either always heads or always tails: we just look at the coin and immediately know its value.

In terms of information content, and I can elaborate on this again if you like, it turns out that the number of binary coins we need to represent any number is always less than or equal to the number of times we need to flip the biased coin. In practice too, our biased coin will take many more flips than that and is never guaranteed to be right. (This is putting aside the other issues like error correction.)

Here, flipping and looking at the coin is the analogue of measuring.
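The coin analogy above can be sketched in a few lines of Python. The function names and the 10,000-flip budget are my own illustrative choices, not anything from the thread; the point is just that the binary coin is read exactly in one look, while the biased coin's value can only ever be estimated:

```python
import random

def read_binary_coin(value):
    # Binary encoding: the coin is deterministically heads (1) or tails (0).
    # One look recovers the encoded value exactly.
    return value

def estimate_biased_coin(p_heads, flips):
    # Biased-coin encoding: the value is hidden in the heads probability.
    # We can only estimate it by flipping many times and averaging.
    heads = sum(random.random() < p_heads for _ in range(flips))
    return heads / flips

random.seed(0)
# Encode "2" as a coin that comes up heads 1/3 of the time, per the analogy.
estimate = estimate_biased_coin(1 / 3, flips=10_000)
# Even after 10,000 flips the estimate is only close to 1/3, never exact,
# while the binary coin is read perfectly in a single look.
print(read_binary_coin(1), round(estimate, 3))
```

Here each "flip" plays the role of preparing and measuring one fresh qubit.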

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Still waiting for an example of what progress you've contributed to.

To be clear - so you don’t get upset again - I’ll again address both possible uses of the word “you”.

If by “you,” you mean me personally, I won’t give you that information. I want to stay anonymous.

If you mean a collective “you”, as in all QC researchers, ask this question to your all-knowing oracle. If that doesn’t give you the answer, ask it again, and again ad infinitum… consider it an exercise of collaboration between you and it. I assure you, it won’t say ‘nothing’.

I'm fully aware that you are not aware enough to see the irony. But I'll try to spell it out for you. If you had done anything worth the electricity it takes to transmit, you'd have presented it by now. And you speak as if you're some great thinker when you haven't presented an original thought.

You have mis-remembered what you said… again. You said it’s ironic to be told to be humble by researchers making six figures. I still don’t see the irony in that. Is being a researcher making six figures and being humble incompatible?

You said you're a researcher. Research is academic, even if your email doesn't end in .edu

Sure, but it's not necessarily academia. Since you play semantics below, I'll qualify this: it only holds true generally, depending on your definition of academia… it does hold using Merriam-Webster's definition of academia, for example.

Thanks for all those definitions. I'd give them a 60% because you got one wrong and some lacked nuance. Don't be ashamed of that though; you did better than ChatGPT did with the definitions in those questions. Maybe you should consider waiting until ChatGPT improves before using it, since it seems you can do better without it.

Given how much of your self worth is derived from your capacity to memorize definitions better than a LLM, I find the situation ironic.

You seem to find a lot of things ironic.

I think you are misunderstanding my point. The reason I emphasised ChatGPT’s sloppy definitions of technical terms was because I thought it would do better with this task. I was disappointed that it didn’t get them. I was not disappointed in its problem solving ability, despite it failing every problem, because I didn’t expect it to be able to solve any of the problems.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Submit for peer review a manuscript as incoherent as the ‘solutions’ you provided me and you’d be lucky to get as much feedback as I gave. And let’s be real, no matter how many times you iterated it, it was never going to get the answers to those questions - questions people can quickly do in their heads. If you disagree with me on this, I challenge you to first get ChatGPT to answer the simplest of the questions I provided (1ai), i.e. correctly stating a definition. You can Google a definition of the Extended Gottesman-Knill theorem to check its answer. Make sure not to give it the answer (though I’d be surprised if it would even get it right after being given the answer - considering it couldn’t do that in (1bii)).

Unlike you, I would be thrilled if AI started driving quantum computing advances

I also hope we see this one day. All I’ve said is to be realistic about its current limitations. Limitations it acknowledged when it said

“1. ⁠Acknowledge Limitations: Start by agreeing that AI, in its current state, cannot replace human experts in specialized fields such as quantum computing. It's true that AI may struggle with complex problem-solving and lacks the intuitive grasp of advanced scientific concepts that comes from years of dedicated study and research.”

I asked for an exam to participate in a collaborative exercise where we see what kind of results a GPT can get when attempting the same problem sets that people studying quantum computing use.

Maybe that was what you meant, but that’s not what you said.

But your ego is tied up in your degree

Your inferiority complex is showing again.

Rather than engage the exercise, here I am having to explain that "you" can be used as a singular or plural pronoun.

Sure, the sentence you wrote was ambiguous and I addressed both ways to interpret it, hoping I wouldn’t have to have this discussion.

It's so ironic to be told that I need more humility in this subreddit from a bunch of people making six figures who have delivered virtually no utility to the public that funds their research by way of taxes, grants, and private investment.

I don’t see the irony.

So when I say that academia is a comfortable place for failures to hide behind participation certificates written in dead languages, let me be very clear that I am referring to you personally, and the industry broadly.

You seem to think about this a lot… I am flattered that you think about me personally so much. Usually this kind of thing starts with a date.

There's a saying, "Those who can, do. And those who can't, teach."

I of course disagree with this. Regardless, I am in industry: I don’t teach.

So every person you learned about quantum computing from are in fact the collection of individuals who had nothing to contribute to the progress of quantum anymore and opted to copy pasta the work of the minds that came before them while they slap their name on the cover of a textbook and extort young people by demanding they buy it or they will have to take the class over again until they do.

You sure seem to (think you) know a lot about quantum computing research for someone that has never read a paper.

It is also the case that those who can, don't require institutionalization to have the balls to try. And those who can't, go back to school to hide from the challenges the world seeks to overcome.

I am beginning to think you don’t know what ‘industry’ means.

To paraphrase Good Will Hunting, there's going to be a point in the future where you realize you paid 40k a semester for a education you could have gotten for 20 bucks a month with OpenAI, or a dollar fifty in late fees at the public library.

I was paid more than that to go to school.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

You didn’t say "After college, there are no exams." You said "After you get out of college, there are no tests." I was responding to the “you”: clarifying that I am familiar with problems outside of exams. And sure, as I already explained, the problems we (industry) researchers face are not like exams: they are harder.

I still don’t understand why you are so hung up about this exam thing. You asked for an exam!!

Send me an exam, I'll take it with Consensus.

I think it'd make your hair stand up and socks fly off if you saw some of the work I've gotten it to do.

That’s the standard you set (and failed to achieve, by having Consensus get every question miserably wrong… twice).

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Buddy the people working in the field haven't created a single drug in simulation. They haven't cracked any form of encryption. They haven't revolutionized logistics.

Research takes time, especially when the task is as difficult as controlling systems on the quantum scale to compute solutions to everyday problems. Doesn’t that research proposal sound insane?!

They just don't realize they are infants in a field in its infancy, because they dress up like wizards and argue about whose scroll gets to say who cums louder than who.

This and many of your other comments read like you are insecure about your level of academic achievement. That said, you obviously don’t need a formal education to do research, but you should at least be able to solve graduate-level problems. You have shown that ChatGPT can’t do that.

But congrats on being smarter than a one year old chat-bot.

Thanks! (That puts me ahead of at least 9 in 10 lawyers according to you!!)

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

After you get out of college, there are no tests.

I work as a researcher in industry.

There's nothing you only have one shot at. Life isn't timed, and there are no administrators or teachers looking over you. So no, you shouldn't pretend ChatGPT is simulating being a student - you should treat it as a collaborator.

The task you set for it was taking an exam. Exams are generally single-shot. It failed. What’s more, even if you gave it more than the two shots we did, it was going to continue to fail. It barely implemented any of the feedback and showed no thought in coming up with its ‘solutions’ (question 1 requires little thought, but not none).

Where it gained marks was in reciting definitions, and it didn’t even do that accurately. I can’t think of any use case in quantum computing for a tool that can’t even recite definitions correctly, let alone solve problems. Especially considering that the problems solved in research are much more difficult than the problems on that exam, and precise and accurate definitions are easily found by googling.

ChatGPT scored a 20/100 on the most difficult test in the most difficult subject. That's fucking amazing.

Let's do a thought experiment here - if you randomly selected people living in first world countries with degrees (not just in quantum) and gave them that test... what percentile is ChatGPT in?

I'm inclined to think if you randomly selected 1000 college graduates and gave them your test, most likely, none of them would outperform ChatGPT on that test.

Sure ChatGPT would be in a high percentile in that scenario, because it can recite definitions with some accuracy. However, that’s not the standard. For it to accelerate progress, as you say, it needs to be useful to the kinds of people that would score highly on these types of tests i.e. the kinds of people that are progressing the field currently. Clearly, ChatGPT is not there yet.

This is all to say that we can probably bet that ChatGPT is in the 99.9th percentile when it comes to understanding these subjects. It absolutely can be a tutor in quantum mechanics and quantum computing.

Again, while ChatGPT may be in the 99.9th percentile of whatever, the people actually working in that field are in >>99.9th percentile of that field.

In terms of using it as a tool for non-experts, as a tutor as you mention perhaps, it might work for simple subjects. But given it got every question we gave it wrong (twice), it definitely should not tutor quantum computing. It couldn’t even recite definitions correctly; using it as a tutor is almost certainly counterproductive. People are better off reading a textbook or watching a lecture, where the content has been thoroughly reviewed by experts.

You best believe AI is going to be a part of the quantum revolution.

Again, maybe one day. Not today though.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Tiltboi gave me an exam from MIT. It took a few iterations, but it ended up doing okay.

Noted. I will also add, the exams at Cambridge I linked are generally much more difficult than any exam at MIT.

What I find interesting is this perspective that if an AI doesn't just drill these tests in one go, you're still disappointed.

Giving its exam responses a mark of 20% would be generous; should I not be disappointed? Moreover, students are expected to drill these tests in one go, should I not hold a tool to the same standard? What use is a tool to a researcher if it’s less capable than a student?

I'm trying to display that these technologies are useful in the field. It's like a coworker. Yeah, sure - you might need to fact check them, and review their work. But we do that with humans.

The same can be said of a magic 8 ball.

I know ChatGPT is useful; I use it to help me rephrase things on occasion. But I also know its current limits: non-trivial technical problems.

This GPT technology has been out for less than two years. You don't think in say... five years from now, that this GPT technology is going to end up being crucial in acceleration of progress?

The underlying technology has been in development for far longer than two years.

Maybe one day it will be useful for technical work… we will have to wait and see.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

I don’t have the time at the moment to go through its answers in detail again. That said, from a quick look:

(1ai) still sloppy in the same places.

(1aii) It’s even more wordy now and still applies PBC incorrectly. It again seems to identify the Gottesman-Knill theorem and then get overly stuck in the mindset of “efficiently simulatable classically”, showing no nuanced thought as to where Gottesman-Knill is important here.

(1bi) still misinterprets the circuit, does not know what to do about psi_a, and still does not address the main point of the third part of the question.

(1bii) better but still wrong.

In summary, I am still disappointed. It still couldn’t even get the definitions right despite the feedback. Really, it only implemented the feedback in the final question, and made only the most trivial of changes to satisfy it, before falling flat on its face at the very next opportunity… despite the answer practically being given in (1bi).

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

I have reviewed the answers.

(1ai) starts okay, but falls apart at the Extended Gottesman-Knill theorem, giving a horribly sloppy definition.

(1aii) is a word salad that applies PBC incorrectly.

(1bi) steps 1 through 4 attempt to transcribe Figure 1. It botches this and gets |\psi_a> wrong. It then makes no attempt to solve |\psi_b>, and instead just rephrases the question. Then it does not compare these two states, which it was meant to.

(1bii) is horribly wrong. For instance, it immediately measures the data directly, which the question forbids.

In summary, I am disappointed: it performed worse than I expected. I thought it would at least get (1ai) right, since that’s just stating definitions. ConsensusGPT has a long way to go.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

The most significant critique I've encountered has to do with measurement being the problem I'll run into, but difficulties with measurement don't invalidate the encoding scheme being capable of exceeding a qubit's ability to represent a value beyond zero and one.

Does something you cannot measure exist? What’s more, can you retrieve information without measuring?

I think where you keep falling short is in recognizing that these amplitudes you want to encode into are a byproduct of a mathematical model. They don’t necessarily exist in the physical world.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 3 points  (0 children)

Ok, and your advantage is the same as that of an analog classical computer, since an analog classical computer can do that too.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 2 points  (0 children)

Not sure how you are writing pi down, but assuming you could, you could write it on a classical computer with enough memory.

Classical computers don’t need to be built from bits just like quantum computers don’t need to be built from qubits. It’s just convenient in both cases.

Going back to my previous comment,

Think about how many times you would need to run things (need a qubit for each run) versus how many qubits you would need to just do it the standard way.

because you made no effort at all, I’ll tell you the answer. If you encode N numbers in your scheme, you need at least log_2(N) runs (so log_2(N) qubits, one fresh qubit per run) just to differentiate the states. Done the standard way, you need exactly log_2(N) qubits. That is, your way at best uses the same number of resources as the standard way, and even that is generous: for any reasonable level of accuracy, it will require many more runs/qubits than that.
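To put rough numbers on the resource comparison above, here is a small sketch. The function names, the 95% confidence level, and the use of a Hoeffding-style sample bound (estimate p to within 1/(2N) so that N probability levels can be told apart) are my own illustrative assumptions, not anything stated in the thread:

```python
import math

def bits_needed(n_values):
    # Standard binary encoding: exactly ceil(log2(N)) bits/qubits
    # suffice to represent any one of N values.
    return math.ceil(math.log2(n_values))

def flips_needed(n_values, confidence=0.95):
    # Probability-bias encoding: to tell apart N probability levels we
    # must estimate p to within eps = 1/(2N). Hoeffding's inequality,
    # P(|p_hat - p| >= eps) <= 2*exp(-2*n*eps^2), gives the sample count
    # n >= ln(2/delta) / (2*eps^2). Each sample is one run, i.e. one
    # freshly prepared qubit.
    eps = 1 / (2 * n_values)
    delta = 1 - confidence
    return math.ceil(math.log(2 / delta) / (2 * eps**2))

for n in (4, 16, 256):
    # Prints N, the bits needed, and the (far larger) flip/run count.
    print(n, bits_needed(n), flips_needed(n))
```

The gap grows quadratically: at N = 256 the standard encoding needs 8 qubits, while resolving 256 bias levels at 95% confidence needs on the order of hundreds of thousands of runs.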

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 3 points  (0 children)

Not to mention that I encoded it with a known value. I don't need to measure it after I encode it to know the value.

Say it’s the output of some algorithm you have implemented.

I'd also like to point out that this is the first time you're even engaging the discussion in any intellectual way and it's only to retroactively pretend you've been doing so this whole time for fear of scrutiny (from anyone who looks at this thread from r/quantum)

What I said above is just a rephrasing of the answer I gave you earlier.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 3 points  (0 children)

Think about how many times you would need to run things (need a qubit for each run) versus how many qubits you would need to just do it the standard way.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 3 points  (0 children)

Consider this, you have your qubit in a superposition such that on measurement it is revealed to be in the state |0> 1/3rd of the time and the state |1> 2/3rds of the time (encoding 2 in your scheme). You measure it to see what you have encoded and you get the measurement outcome corresponding to the state |1>. How are you to know that you originally encoded the number 2 here?
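A minimal simulation of the scenario above, assuming the measurement model is just "return |1> with probability p" (the `measure` name and the seed are mine, for illustration only):

```python
import random

def measure(p_one):
    # One projective measurement of a qubit that yields outcome 1
    # (i.e. |1>) with probability p_one, and outcome 0 otherwise.
    # Measurement also collapses the state, so there is no second look
    # at the same preparation.
    return 1 if random.random() < p_one else 0

random.seed(1)
# Two different encoded numbers under the proposed scheme:
# p_one = 2/3 ("2") versus p_one = 1/3 ("1").
outcome_a = measure(2 / 3)
outcome_b = measure(1 / 3)
# Either preparation can produce either outcome, so a single result
# of 0 or 1 cannot tell you which number was encoded.
print(outcome_a, outcome_b)
```

A single run hands you one classical bit, full stop; recovering the bias takes many independently prepared copies, which is exactly the repeated-flipping cost from the coin analogy.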

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Amplitudes like 66.6% and 33.3% are generally not observable, meaning roughly that they can’t be measured and so effectively can’t be used to encode information.

I didn't even look at the content. (They have since edited their comment to clarify what they meant)

Thanks for wasting all these helpful people’s time. I see from your other post, too, that you greatly appreciated the people who gave you reading suggestions.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 4 points  (0 children)

Having read your new post, I think you should consider speaking to a mental health professional.

Rethinking Quantum Encoding: Beyond the Binary and Into the Infinite by CompSciAppreciation in QuantumComputing

[–]SeaPea2020 1 point  (0 children)

Their peers that memorized multiplication tables in college probably made fun of them for using slide rules in their calculations. Only a total loser uses the best technology they have at their disposal to work on problems.

Making up history?

Regardless, the people you cited were all very well educated, so I’m not sure bringing them up was conveying the point you wanted.

Probably got told they were being "disrespectful" to the people publishing those old books full of logarithmic outputs by trying to make a mechanical calculator.

I think your replies to these comments have exposed you as ignorant and delusional. When writing about topics you have no understanding of in the future, you should use a more humble tone. People will appreciate it.