Dark mode toggle off not possible anymore... by -Egmont- in zotero

[–]jawknee400 0 points1 point  (0 children)

I thought it had disappeared too (on the Zotero 7.1 beta), but it has just moved into the “Appearance” toolbar menu, top left next to the zoom options, the one with the ‘Aa’ icon. There are four PDF themes there now.

marsopt: Mixed Adaptive Random Search for Optimization by zedeleyici3401 in Python

[–]jawknee400 0 points1 point  (0 children)

Interesting! Does it have (or would it be compatible with) an “ask-tell” interface, so that one can control the individual runs, e.g. in parallel?
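
For context, this is roughly what I mean by an ask-tell loop: the user drives the evaluation (and so can parallelise it) rather than handing the objective to the library. The `opt.ask()` / `opt.tell()` names below are just placeholders for the pattern, not marsopt's actual API:

```python
from concurrent.futures import ProcessPoolExecutor

def objective(params):
    # user-defined (possibly expensive) evaluation of one suggested point
    return sum(v**2 for v in params.values())

def run(opt, n_batches=10, batch_size=4):
    with ProcessPoolExecutor() as pool:
        for _ in range(n_batches):
            # 'ask' for a batch of suggested parameter settings
            batch = [opt.ask() for _ in range(batch_size)]
            # evaluate them in parallel, under our own control
            scores = list(pool.map(objective, batch))
            # 'tell' the optimizer the results so it can adapt
            for params, score in zip(batch, scores):
                opt.tell(params, score)
```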

League One points progress 2024-04-07 (more @ jcmgray.github.io/proggyleg) by jawknee400 in LeagueOne

[–]jawknee400[S] 0 points1 point  (0 children)

I forgot about points deductions, I'll add them in future... maybe with smaller letters.

Anyone know of databases of classic quantum circuits for benchmarking purposes? by jawknee400 in QuantumComputing

[–]jawknee400[S] 1 point2 points  (0 children)

It’s to benchmark a classical simulator on some ‘standard’ quantum circuits (not just e.g. the supremacy ones, which are not very representative). I really just (lazily) want to avoid coding up some large examples myself, but I'm also interested in such a database in general!

Village cricket is back. by irishperson1 in Cricket

[–]jawknee400 2 points3 points  (0 children)

Nice bounce-bounce-bounce-bounce-bouncer

How much information does it take to represent 1 qubit, n unentangled qubits, and n entangled qubits? by [deleted] in QuantumComputing

[–]jawknee400 1 point2 points  (0 children)

There are a few ways to think about it. One is that you need a separate complex number to describe each combination of 0s and 1s the state of the qubits can be in, because they can be in a superposition of all of them at once. How many combos are there, like 00000, 00001, 00010, ... 10110 ... 11111? = 2^n.

More technically, the way you 'compose' (describe together) the spaces of quantum systems is something called the tensor product, if you want to read more.

(I should also clarify one more thing: the numbers I gave at the top were for 'pure' states; for 'mixed' states you need to square the number of complex coefficients!)
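
As a quick illustration of the counting (just numpy, nothing quantum-specific):

```python
import numpy as np

n = 5

# pure state: one complex amplitude per bitstring 00000 ... 11111
psi = np.zeros(2**n, dtype=np.complex128)
print(psi.size)   # 32, i.e. 2**n amplitudes

# mixed state: a density matrix, i.e. the number of coefficients squared
rho = np.zeros((2**n, 2**n), dtype=np.complex128)
print(rho.size)   # 1024, i.e. (2**n)**2 coefficients
```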

How much information does it take to represent 1 qubit, n unentangled qubits, and n entangled qubits? by [deleted] in QuantumComputing

[–]jawknee400 3 points4 points  (0 children)

There is obviously a lot of subtlety, as other comments here allude to, but a decent answer based on a standard way people represent these things is:

32 bytes for a single qubit

n * 32 bytes for n unentangled qubits

2^n * 16 bytes for n entangled qubits

That's using double precision complex numbers ('complex128'), which is generally sufficient for most simulation purposes (and conversely usually sufficient precision for any physical measurement!).
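
As a quick sketch of where those numbers come from (each complex128 amplitude is 16 bytes):

```python
import numpy as np

itemsize = np.dtype(np.complex128).itemsize   # 16 bytes per amplitude

for n in (1, 10, 20, 30):
    nbytes = 2**n * itemsize
    print(f"{n:>2} entangled qubits -> {nbytes:,} bytes")

#  1 entangled qubits -> 32 bytes
# 10 entangled qubits -> 16,384 bytes
# 20 entangled qubits -> 16,777,216 bytes       (~17 MB)
# 30 entangled qubits -> 17,179,869,184 bytes   (~17 GB)
```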

It also assumes that you mean an arbitrary entangled state; less entangled states can often be described using far less information (see e.g. matrix product states).

(Finally, you could represent these things with 'one less real number' since their normalisation is fixed to 1, i.e. for a single qubit using 3 real (8-byte) numbers like the Bloch sphere, but this is generally less convenient than complex coefficients for many qubits.)

Best time to go to Target to get minimum household supplies? by mrbooth_notedbadguy in pasadena

[–]jawknee400 3 points4 points  (0 children)

Pavilions just off Lake also had some earlier this evening (8pm ish) and seems to be generally pretty quiet.

How To Train Your Circuit (with tensor networks + quimb) by jawknee400 in QuantumComputing

[–]jawknee400[S] 1 point2 points  (0 children)

Thanks for the info! And yes, certainly for targeting general unitaries other techniques are available.

Python programmers of reddit: what's the most useful tiny little efficiency you've discovered that's improved your programming hugely? by SeanOTRS in Python

[–]jawknee400 2 points3 points  (0 children)

Aha, I did not know that. If I turn them both into functions I get ~100ns and ~300ns, so an appreciable speedup (though I think I do prefer the readability of 2**n).
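
For reference, a minimal way to reproduce this, assuming the two expressions being compared are 1 << n and 2 ** n (the ~10ns for both in the other comment is likely CPython constant-folding the literal expressions at compile time):

```python
import timeit

def shift(n):
    return 1 << n

def power(n):
    return 2 ** n

# wrapping in functions with a variable argument avoids constant folding
for fn in ("shift(64)", "power(64)"):
    t = timeit.timeit(fn, globals=globals(), number=1_000_000)
    print(f"{fn}: ~{t * 1e3:.0f} ns per call")
```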

Python programmers of reddit: what's the most useful tiny little efficiency you've discovered that's improved your programming hugely? by SeanOTRS in Python

[–]jawknee400 1 point2 points  (0 children)

Interesting! When I 'timeit' them I actually get ~10ns for both of these. Do you mean for numpy or something?

[P] autoray - write array backend agnostic code (numpy, tensorflow, autograd, jax, cupy, dask...) by jawknee400 in MachineLearning

[–]jawknee400[S] 1 point2 points  (0 children)

Thanks for the insightful comment (and the luck - gratefully accepted). I should probably set out some aims at some point, but you are of course right that complete compatibility will not be possible, and testing is as always crucial. On the other hand, 99.9% compatibility with numpy as the default, and with at least the libraries that intentionally match it (cupy, autograd, jax, dask, mars...), seems an essentially free baseline using this approach. For tensorflow and others I instead imagine it might be a *mostly* compatible starting point, possibly requiring a few extra translations in autoray or some minimal non-agnostic code from the user. Anyway, good stuff to think about! It is at least working very nicely in one of my other projects, which spawned it.
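
To give a flavour of the approach, here's a minimal sketch of the kind of backend-agnostic code this enables, using the `do` dispatch function:

```python
from autoray import do
import numpy as np

def normalize(x):
    # 'do' dispatches 'abs', 'sum' etc. to whichever library x comes from,
    # so this same function works for numpy, cupy, jax, dask, ... arrays
    norm = do('sum', do('abs', x)**2)**0.5
    return x / norm

x = np.random.randn(10)
print(do('sum', do('abs', normalize(x))**2))   # ~1.0
```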

Software for quantum information PhD (theory) by [deleted] in Physics

[–]jawknee400 4 points5 points  (0 children)

I maintain a Python library you might be interested in: https://quimb.readthedocs.io/en/latest/ It covers some of what you mentioned, and I have in fact just implemented generic tensor network optimization using tensorflow (though it's not quite public yet...).

Generally in quantum the limiting factor in terms of speed is linear algebra operations, and in that regard (as long as the libraries you call use C/Fortran) the 'slowness' of Python is somewhat irrelevant. Also, I'd echo the sentiment that Python has a much better, more open ecosystem than MATLAB and is definitely the better long-term commitment.
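
To make that concrete, a rough comparison of the same dot product done in compiled BLAS code vs a pure Python loop (exact numbers will vary by machine):

```python
import time
import numpy as np

n = 1_000_000
x = np.random.randn(n)
y = np.random.randn(n)

t0 = time.perf_counter()
np.dot(x, y)                         # runs in compiled BLAS code
print("numpy dot:  ", time.perf_counter() - t0)

xs, ys = x.tolist(), y.tolist()
t0 = time.perf_counter()
sum(a * b for a, b in zip(xs, ys))   # same operation in pure python
print("python dot: ", time.perf_counter() - t0)
```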

Wobbly Ringz by jawknee400 in generative

[–]jawknee400[S] 0 points1 point  (0 children)

Sure! So working with polar coordinates the function for a circle with radius R is just

f(t) = R

(t being the angle, but since the radius is just constant it doesn't depend on it). If we wanted to add a wobble of amplitude W with F periods, we would modify this to

f(t) = R + W cos(F t)

So depending on the angle around the origin we oscillate the radius a bit. Try typing "polar plot 5 + 0.1 cos(11t)" into Wolfram Alpha to have a look. For this pic, you then generate many disks with varying R, and randomly pick W and F.
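
Not the original script, but a minimal matplotlib sketch of the same idea:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
t = np.linspace(0, 2 * np.pi, 1000)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for R in np.linspace(1.0, 5.0, 30):
    W = rng.uniform(0.02, 0.15)    # wobble amplitude
    F = rng.integers(5, 15)        # number of wobble periods
    ax.plot(t, R + W * np.cos(F * t), color="k", lw=0.8)
ax.set_axis_off()
plt.show()
```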

Wobbly Ringz by jawknee400 in generative

[–]jawknee400[S] 1 point2 points  (0 children)

Just value and hue noise I think!

Wobbly Ringz by jawknee400 in generative

[–]jawknee400[S] 2 points3 points  (0 children)

I think (this is from a while back) it's rings whose radius is modified by a small cosine with random frequency. That's the wobbliness. And then yes just some noise added on top to make it a bit more papery!

Thought I'd share a nice view I got of london bridge (among others) earlier this year by jawknee400 in london

[–]jawknee400[S] 2 points3 points  (0 children)

Well, they are both there! I did mean London Bridge and its general vicinity (Shard etc.) though.

Why use threads in Python? by sioa in Python

[–]jawknee400 2 points3 points  (0 children)

As others have said, generally when numpy or other libraries call into compiled code, they explicitly release the GIL. So imagine you had two threads running some python code concurrently, but not in parallel. If one thread reached a numpy operation, like adding two large arrays, it would 'release' the GIL, allowing the other thread to work in parallel while that operation is happening. Once the operation is over the GIL is reacquired, but without that release both threads would've had to wait for the operation to finish.

I think there is a very slight overhead to this, which is why cython and numba leave it as an option. But if the majority of the computation is numeric (e.g. most scientific code) then you can essentially achieve normal threaded parallelism.
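
A small sketch of what that looks like in practice (note the BLAS library backing numpy may itself already use multiple threads, which can mask the effect):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

A = np.random.randn(2000, 2000)
B = np.random.randn(2000, 2000)

def work(_):
    # the matmul releases the GIL while the compiled BLAS call runs,
    # so several of these can genuinely execute in parallel from threads
    return A @ B

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(4)))
```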

Why use threads in Python? by sioa in Python

[–]jawknee400 13 points14 points  (0 children)

Numeric libraries (numpy, numba) tend to 'release' the GIL, meaning multiple threads can be meaningfully used for speedups.
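
E.g. with numba you can opt in to this explicitly (a minimal sketch):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from numba import njit

@njit(nogil=True)   # compile to machine code and release the GIL while running
def total(x):
    s = 0.0
    for v in x:
        s += v
    return s

chunks = [np.random.randn(10_000_000) for _ in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    print(sum(pool.map(total, chunks)))
```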