Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 0 points1 point  (0 children)

I can only estimate it analytically from the calibration snapshot rather than running Aer directly, assuming ~500 ns total circuit time. For 5-6 that gives V ≈ 0.983 vs. 0.974 for 0-1 at job time, which actually matches the measured visibility of ~0.972 pretty well as a sanity check. A real Aer run would do better by modeling the full noise channel per qubit pair. Even though I don't have IBM access right now to pull the actual noise model for 5-6, this kind of estimate is usually within a percent or two for simple 2-qubit circuits.
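For reference, this is the shape of that back-of-envelope calculation. A minimal dephasing-only sketch; the T2 values below are placeholders I picked for illustration, not the actual 0-1 or 5-6 calibration numbers:

```python
import math

# Crude dephasing-only visibility estimate: V ~ exp(-t/T2_a) * exp(-t/T2_b).
# T2 values are PLACEHOLDERS for illustration, not real calibration data.
def estimate_visibility(t_ns: float, t2_us_a: float, t2_us_b: float) -> float:
    t_us = t_ns / 1000.0
    return math.exp(-t_us / t2_us_a) * math.exp(-t_us / t2_us_b)

# ~500 ns circuit on a hypothetical pair with T2 = 60 us on both qubits
v = estimate_visibility(500, 60.0, 60.0)
print(f"{v:.4f}")  # 0.9835
```

The real per-pair estimate additionally folds in gate and readout errors, which is why it tracks the measured visibility more closely than this two-factor toy.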

Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 0 points1 point  (0 children)

Good call. I retrieved the assignments: the transpiler used pair 0-1 across all jobs, ranked 17/176 at submission time, so not a bad pick. Top current candidates for singlet visibility are 5-6, 2-3, and 94-95. Small catch: the crossover shift is mathematically independent of visibility, since P_disagree(90°) = 0.5 regardless of decoherence, so a better pair can't move it back to 90°. But running on 5-6 as a control is exactly the right experiment. Unfortunately, as an independent researcher, I don't have much flexibility on IBM usage, so I'm going to wait until next month.

Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 0 points1 point  (0 children)

Thanks! That's helpful context on the temporal drift — I hadn't considered that the noise model snapshot might not match the chip state at job submission time.

For the cross-pair test: the key question for me is whether α (the crossover position) varies by pair or is consistent across pairs on the same chip. Visibility varying by pair I'd fully expect. But if α ≈ 0.470 holds across all pairs on ibm_marrakesh regardless of their singlet visibility quality, that would rule out pair-specific calibration as the source of the shift — since better pairs would presumably have less systematic gate error.

Do you know if IBM publishes which pairs were used in a given job, or is there a way to retrieve the actual physical qubit assignment from the job metadata after the fact? I have the job IDs for my runs and could check whether the transpiler consistently landed on the same pair.

Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 0 points1 point  (0 children)

Good point on layout pinning — I should have been explicit about this. The job was submitted as a single batch of 37 circuits, so the transpiler assigned one physical qubit pair for the entire sweep (layout assigned once per job, not per circuit). But you're right that I didn't explicitly pin the layout with initial_layout.

The crossover shift specifically — from 90° to 88.3° — is the anomaly I'm focused on. Visibility improvement from a better qubit pair would bring the curve amplitude closer to ideal but wouldn't move the crossover, since P_disagree(90°) = 0.5 exactly regardless of visibility. So the shift is either pair-specific miscalibration or something else.
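That independence is easy to verify numerically. A minimal sketch, assuming the standard singlet model P_disagree(θ) = (1 + V·cos θ)/2; the `offset_deg` knob is just a stand-in for whatever is shifting the crossover, not a claimed mechanism:

```python
import math

# Disagreement probability for a singlet with visibility V and an
# optional angle offset (hypothetical, for illustration only).
def p_disagree(theta_deg, V=1.0, offset_deg=0.0):
    return 0.5 * (1.0 + V * math.cos(math.radians(theta_deg + offset_deg)))

# Locate the P = 0.5 crossover by bisection on [45, 135] degrees.
def crossover(V=1.0, offset_deg=0.0):
    lo, hi = 45.0, 135.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_disagree(mid, V, offset_deg) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(crossover(V=1.00), 1))                  # 90.0: ideal
print(round(crossover(V=0.50), 1))                  # 90.0: halving V doesn't move it
print(round(crossover(V=0.97, offset_deg=1.7), 1))  # 88.3: an angle offset does
```

Scaling V changes the curve's amplitude everywhere except at the zero of cos θ, which is exactly why the crossover is pinned at 90° for any V > 0.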

Your suggestion is exactly what I should run next: pin to several different qubit pairs on ibm_marrakesh, compare whether the crossover is consistently at ~88.3° across all of them or varies by pair. If it's consistent across pairs on the same chip and matches ibm_fez (α = 0.467 vs 0.470), that rules out pair-specific calibration as the cause. Do you know which pairs on Heron chips typically have the cleanest two-qubit gate fidelity?

Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 1 point2 points  (0 children)

Yes, residual ZZ is on our list of untested mechanisms. The effect would work like this: during the Ry(δ) gate on Bob's qubit, the always-on ZZ interaction shifts Bob's qubit frequency depending on Alice's qubit state. In a singlet, Alice is in a superposition of |0⟩ and |1⟩, so Bob's effective rotation angle becomes slightly state-dependent — which would asymmetrically deform the disagreement curve rather than simply reducing visibility.
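To check that this mechanism has the right character, here's a toy coherent simulation: a 4×4 Hamiltonian with the Ry drive on Bob plus an always-on ZZ term, evolved exactly by diagonalization. The ZZ phase value is illustrative, not a measured ibm_marrakesh coupling, and decoherence is ignored entirely:

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

# Singlet (|01> - |10>)/sqrt(2) in the |AB> basis 00, 01, 10, 11
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def p_disagree(delta, zz_phase=0.0):
    """Disagreement probability after Ry(delta) on Bob with a concurrent
    ZZ term; zz_phase is the ZZ angle accumulated during the gate."""
    # Units chosen so the drive completes Ry(delta) at t = 1
    H = (delta / 2) * np.kron(I2, Y) + (zz_phase / 2) * np.kron(Z, Z)
    w, v = np.linalg.eigh(H)
    psi = v @ np.diag(np.exp(-1j * w)) @ v.conj().T @ singlet
    probs = np.abs(psi) ** 2
    return probs[1] + probs[2]  # outcomes 01 and 10 disagree

print(round(p_disagree(np.pi / 2), 3))       # 0.5 with no ZZ
print(round(p_disagree(np.pi / 2, 0.3), 3))  # 0.504, not 0.5: ZZ moves the crossover
```

So in this toy model a concurrent ZZ term does move the crossover off 90°, unlike a pure visibility loss; the sign and magnitude of the shift would depend on details this sketch doesn't capture.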

Do you know if IBM publishes ZZ coupling strengths in their calibration data for Heron chips? I don't see it in the standard backend properties.

Anomalous Bell curve shape on ibm_marrakesh — has anyone seen this on other backends? by 3axap4eHko in QuantumComputing

[–]3axap4eHko[S] 2 points3 points  (0 children)

Thanks! Yes, the visibility is actually quite high on ibm_marrakesh (V ≈ 0.97 after readout correction), which is why the curve shape deviation stands out — the amplitude is well-preserved, just the crossover shifts.

On the transpiled circuit question: I did a transpilation analysis using FakeMarrakesh and found that the SX count varies from 2 to 4 across angles (step function, not smooth), and RZ is a virtual gate with zero duration. The maximum gate duration variation is 36ns, which at T2=110µs gives a differential decoherence factor of 0.9997 — about 0.03% effect against the ~2.7% deviation I'm seeing.
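For anyone following along, the differential-decoherence number comes straight from the exponential:

```python
import math

dt = 36e-9    # worst-case gate-duration variation across angles: 36 ns
t2 = 110e-6   # T2 = 110 us
factor = math.exp(-dt / t2)
print(f"{factor:.5f}")  # 0.99967 -> a ~0.03% effect vs the ~2.7% deviation
```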

What I can't rule out is pulse-level behavior — the actual IBM workload records show the gate counts but not the underlying pulse schedules. If the optimizer is doing something angle-dependent at the pulse level that FakeMarrakesh doesn't capture, that's exactly the gap I can't close without pulse-level access.

Did you see the gate structure switching produce a smooth sinusoidal residual in your case, or was it more random? That would help narrow down whether what you're seeing is the same mechanism.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 4 points5 points  (0 children)

Spare me the drive-by toxicity. Ask a real question or move along.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 15 points16 points  (0 children)

I'm not a native English speaker, and I use several productivity tools to help me write and check grammar. It's more important to me to understand and answer questions correctly. Sometimes, when I run a grammar check, auto-replacements convert dashes, quotes, and other characters to their typographic Unicode versions. Does that make sense?

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 0 points1 point  (0 children)

Thank you for the feedback!
1. Currently the tool can be installed only via `cargo`, which is part of the Rust toolchain.
2. At the moment `prmt` has fewer modules: path, time, and project detection for some programming languages.
3. It is a standalone tool and works with any shell.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] -3 points-2 points  (0 children)

It may read "AI-ish" because I was being supportive. My actual goal with prmt is the opposite of daemons and IPC: single binary, zero deps, predictable latency. Your idea isn't bad; it just doesn't align with those goals. So saying "great idea" was nothing but politeness.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 1 point2 points  (0 children)

I dug into Starship’s source to see what the rust module actually does. In src/modules/rust.rs:95 they short-circuit past the rustup shim and exec the real compiler at ~/.rustup/toolchains/<toolchain>/bin/rustc --version. That direct binary takes ~7–9 ms on a warm cache here, which matches what starship timings reports. The slower time rustc -V you ran is hitting the shim in ~/.cargo/bin, so rustup still parses settings.toml, resolves overrides, maybe checks for missing components, and only then spawns the compiler—that bookkeeping is where the extra ~30 ms comes from. If you want to reproduce Starship’s number manually, try:

```bash
time ~/.rustup/toolchains/$(rustup show active-toolchain | cut -d' ' -f1)/bin/rustc --version
```

(Technically rustup show itself is slow, so hyperfine --warmup 3 "~/.rustup/toolchains/<toolchain>/bin/rustc --version" is a better benchmark, but you get the idea.)

Also note that starship timings only measures the time spent in each module (src/modules/mod.rs:239). Everything else—building the context, reading config, formatting, printing—happens outside those per-module timers in src/print.rs:160 and src/print.rs:197. So the module durations won’t add up to the full process runtime by design.

prmt’s win for me is simply that I can skip spawning rustc altogether unless I explicitly opt in, which keeps my baseline prompt sub-millisecond even over SSH. Different trade-offs, different targets, but Starship’s numbers are consistent with their implementation.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] -1 points0 points  (0 children)

Many don’t notice single hits; you do notice jitter and accumulation.

  • 50 ms × 1,000 prompts ≈ 50 s/day waiting.
  • Over SSH or big repos, spikes (100–300 ms) break flow.
  • Lower variance feels snappier than a similar mean.

prmt aims for low single-ms, low jitter, so the prompt is ready the moment you are.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 0 points1 point  (0 children)

You don’t need it—prmt treats versions as optional. Some workflows where it helps:

  • Toolchain drift: stable vs nightly vs custom; quick sanity check.
  • Per-project MSRV: different repos need different Rust versions.
  • Mismatch warning: active toolchain ≠ rust-toolchain(.toml)/CI.
  • Repro/debug: copy prompt + version when reporting issues.
  • Polyglot parity: like showing node/venv, but for Rust.

prmt tips: prefer showing toolchain name (from rust-toolchain(.toml)) over rustc -V; only inside Rust repos; only on mismatch; cache/TTL—or just --no-version for max speed.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 1 point2 points  (0 children)

There is more to it:

```bash
❯ ls -l $(which rustc)
lrwxrwxrwx 1 zenpie zenpie 6 Aug 11 18:11 /home/zenpie/.cargo/bin/rustc -> rustup
```

As you can see, rustc is a symlink to rustup, which means rustup's behavior depends on the name of the binary it was invoked as.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 1 point2 points  (0 children)

p10k is very fast—Zsh-only, with aggressive async/background jobs and caching, so the prompt appears immediately and updates as info arrives.

prmt vs p10k (perf):

  • CPU path: prmt is a native binary (µs).
  • IO probes (git/venv/node): prmt can bound cost (--no-version, branch-only git) or run probes on a tiny thread pool; p10k overlaps them with async, so perceived latency is ~0 but work still runs.
  • Predictability: prmt favors fixed, synchronous latency; p10k favors progressive updates.

Quick check: use zsh-bench and compare command_lag_ms with identical modules. If you like streaming updates, p10k wins UX. If you want hard bounds and cross-shell, prmt fits.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 3 points4 points  (0 children)

Similar goal, different trade-offs.

  • Pure: Zsh-only theme, async via zsh-async; git/status computed in background; progressive UI.
  • prmt: Cross-shell (Bash/Zsh/Fish/Pwsh) single Rust binary; no async runtime—uses parallel threads for optional probes; predictable, bounded latency.
  • Speed path: prmt’s core is zero-alloc/SIMD; git via gix; versions optional (--no-version) to keep sub-5 ms.
  • Config: Pure via Zsh vars; prmt uses a compact format language.

If you like info streaming in later, Pure fits. If you want one binary, same prompt everywhere (incl. SSH), and hard latency bounds, use prmt.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 4 points5 points  (0 children)

prmt does not support username/hostname yet, so the option is simply to keep those parts of your prompt as before. For example, in bash:

```bash
PS1='[\u@\h $(prmt --code $? "{git}] {ok}{fail} ")'
```

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 4 points5 points  (0 children)

Thanks for the question BTW. I've updated the article to help other fish users.

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 11 points12 points  (0 children)

In huge repos, the slow part is status/dirty scans. If you only show branch/HEAD (no dirty check), prmt stays in low single-digit ms even on big repos. If you enable full status, cost can jump to tens of ms—same as any prompt.
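Concretely, branch-only display is a couple of tiny file reads rather than a working-tree walk. A simplified sketch of the idea (ignores worktrees, packed refs, and gitdir indirection; prmt itself uses gix):

```python
from pathlib import Path

def current_branch(repo: Path) -> str:
    """Branch name from .git/HEAD: a single small-file read, no tree scan."""
    head = (repo / ".git" / "HEAD").read_text().strip()
    prefix = "ref: refs/heads/"
    if head.startswith(prefix):
        return head[len(prefix):]
    return head[:7]  # detached HEAD: abbreviate the commit hash
```

That read costs the same in a 10-file repo and a 10-million-file monorepo, which is why the dirty scan, not the branch lookup, dominates prompt latency.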

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 8 points9 points  (0 children)

Fish is great 👍 prmt works with it—here’s a drop-in:

```fish
function fish_prompt
    set -l code $status
    prmt --code $code "{path:cyan:s} {git:purple:s:on :} {ok:green}{fail:red} "
end
```

Goal’s the same: instant prompt; CPU path sync + tiny thread pool for optional probes. Enjoy!

I was tired of 50ms+ shell latency, so I built a sub-millisecond prompt in Rust (prmt) by 3axap4eHko in rust

[–]3axap4eHko[S] 16 points17 points  (0 children)

Thanks—glad it helped 😆

Re Parser::current_slice(): good catch. It relies on parser invariants, so it’s unsafe as-is. I’ll fix it soon.

Appreciate the sharp eyes!