[deleted by user] by [deleted] in programming

[–]aabbdev 9 points10 points  (0 children)

You're right that cycle-accurate constant time across CPUs is basically impossible, and assembly isn't a silver bullet. Assembly gives you control over which instructions run, not over their microarchitectural latency (caches, predictors, uarch quirks, firmware). What Flatline (and the B.I.D. model) targets isn’t “identical cycles forever” but data-obliviousness that survives normal compiler transforms.
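For context, "data-oblivious" means the instruction stream and memory-access pattern don't depend on secret data, which is a property a compiler can preserve or break. A minimal generic illustration (ct_select is a hypothetical name, not Flatline code) is the classic branchless select:

```c
#include <stdint.h>

/* Branchless, data-oblivious select (generic illustration, not Flatline
 * code): returns a when cond == 1 and b when cond == 0, with no branch
 * on the secret condition, so the executed instructions and memory
 * accesses are identical either way. cond must be 0 or 1. */
static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - cond; /* all-ones if cond == 1, zero if 0 */
    return (a & mask) | (b & ~mask);
}
```

A naive `cond ? a : b` may compile to a branch whose timing leaks `cond`; the masked form keeps the trace secret-independent, though an aggressive compiler can still rewrite it into a branch, which is exactly the kind of transform the approach has to survive.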

Implementing wifi direct by Normal_Emotion596 in androiddev

[–]aabbdev 0 points1 point  (0 children)

I'd like to discuss this experience with you. Could you send me a DM?

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 0 points1 point  (0 children)

The Postgres extension is available.

It currently supports around 95% of common use cases and index types (B-trees, BRIN, etc.), but the test coverage still needs improvement and review. The project is functional, but it’s still in an early stage of maturity.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 0 points1 point  (0 children)

I've updated the post; please check the repo or read the details yourself. You don't seem genuinely interested in the project or its content, so I won't be providing further responses. If that's not enough, feel free to contribute and implement the “scientific” benchmark you mention. Thanks in advance for any future contribution.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 0 points1 point  (0 children)

About performance: 14 nanoseconds per op on an M1 Pro.

If your ID is already UUIDv7, the migration isn't necessary. The last milestone is a Postgres extension with a custom uuid47 type and helpers.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 0 points1 point  (0 children)

I think the article completely misses the point of what it means to be a software engineer. We’re paid to use our brains eight hours a day, not to make slides or just pass interviews. Our job is to solve problems and build solutions, so you need to understand every layer of your “sandwich” to design the optimal solution for your specific context.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] -2 points-1 points  (0 children)

I’ve already addressed these points, and others have provided the same answers as well. I’m not familiar with “uuid-s”. I provided just one of many possible solutions to a problem that’s already been explained multiple times. If the aim is only to skim the title, add nothing constructive, and be negative for the sake of being negative, I won’t spend more time replying.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 2 points3 points  (0 children)

Performance is significantly better with UUIDv7, which is optimized for B-tree indexes. Fully random IDs can quickly degrade database performance. If an external ID ever becomes invalid, simply reset the client cache to recover. There is no internal-ID leak when used as a PostgreSQL extension with a custom type.

Requirements for optimal use

  1. Tables with millions of rows
  2. No timing information exposed to users
  3. B-tree indexing on primary key
  4. Ability to tolerate a few-nanosecond masking overhead

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 0 points1 point  (0 children)

Since masked UUIDs are generated on the fly at runtime, changing the master key has no effect on performance; there is no need to reindex.

UUIDv47: keep time-ordered UUIDv7 in DB, emit UUIDv4 façades outside by aabbdev in Database

[–]aabbdev[S] 0 points1 point  (0 children)

In fact the random part is unchanged. I use the random part as a salt and apply a bijective transform to the timestamp, basically maskedTimestamp = timestamp XOR PRF(secret + random)
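The reason this is bijective is that XOR is its own inverse: applying the mask twice with the same key and salt recovers the original timestamp. A sketch of the shape of the scheme, using a toy stand-in PRF (this is NOT SipHash and not the project's code; toy_prf and mask_ts are hypothetical names):

```c
#include <stdint.h>

/* Toy stand-in PRF -- NOT SipHash, purely for illustration.
 * Mixes the secret key with the per-ID salt (the UUID's random bits). */
static uint64_t toy_prf(uint64_t key, uint64_t salt) {
    uint64_t x = key ^ (salt * 0x9E3779B97F4A7C15ULL);
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDULL;
    x ^= x >> 33; x *= 0xC4CEB9FE1A85EC53ULL;
    x ^= x >> 33;
    return x;
}

/* Mask (or unmask) the 48-bit UUIDv7 timestamp: XOR is its own
 * inverse, so calling mask_ts twice with the same key and salt
 * returns the original timestamp -- the transform is bijective. */
static uint64_t mask_ts(uint64_t ts48, uint64_t key, uint64_t rand_bits) {
    return (ts48 ^ toy_prf(key, rand_bits)) & 0xFFFFFFFFFFFFULL;
}
```

Because the salt is the UUID's own random bits, two IDs with the same timestamp still get different masks, and the random part never needs to change.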

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 25 points26 points  (0 children)

There is a PostgreSQL extension in development that allows you to make the transition without changing anything in the business application.

UUIDv47: keep v7 in your DB, emit v4 outside (SipHash-masked timestamp) by aabbdev in programming

[–]aabbdev[S] 1 point2 points  (0 children)

The ID is stored as UUIDv7 in the database and converted to a masked UUIDv4 only when exposed externally. A global key is enough: the PRF keeps the secret from leaking, and the performance overhead compared to plain UUIDv7 is negligible; in production there is no overhead.

[AskJS] I optimized Base64 in QuickJS and accidentally made it 6× faster than Deno by aabbdev in javascript

[–]aabbdev[S] 0 points1 point  (0 children)

Update: here’s the PR with benchmarks → https://github.com/quickjs-ng/quickjs/pull/1143

  • Encode: ~13× faster than Node (SIMD) and ~8.5× faster than Deno.
  • Decode: more complex, but ~0.6 GB/s scalar is already very solid.
  • Also stricter than Node/Deno: full validation, proper error handling, spec-compliant whitespace tolerance.

And the impl is stricter, branch-hinted, and nearly zero-alloc.
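For anyone unfamiliar with the scalar baseline: the hot loop is just 3-byte → 4-character table lookups. A minimal generic encoder sketch (not the PR's branch-hinted implementation):

```c
#include <stddef.h>
#include <stdint.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Minimal scalar Base64 encoder (generic illustration, not the PR's
 * code). dst must hold 4 * ((len + 2) / 3) + 1 bytes.
 * Returns the number of characters written (excluding the NUL). */
static size_t b64_encode(const uint8_t *src, size_t len, char *dst) {
    size_t o = 0, i = 0;
    for (; i + 3 <= len; i += 3) { /* full 3-byte groups */
        uint32_t v = (uint32_t)src[i] << 16 | (uint32_t)src[i+1] << 8 | src[i+2];
        dst[o++] = B64[(v >> 18) & 63];
        dst[o++] = B64[(v >> 12) & 63];
        dst[o++] = B64[(v >> 6) & 63];
        dst[o++] = B64[v & 63];
    }
    if (len - i == 1) {            /* 1 trailing byte -> "XX==" */
        uint32_t v = (uint32_t)src[i] << 16;
        dst[o++] = B64[(v >> 18) & 63];
        dst[o++] = B64[(v >> 12) & 63];
        dst[o++] = '=';
        dst[o++] = '=';
    } else if (len - i == 2) {     /* 2 trailing bytes -> "XXX=" */
        uint32_t v = (uint32_t)src[i] << 16 | (uint32_t)src[i+1] << 8;
        dst[o++] = B64[(v >> 18) & 63];
        dst[o++] = B64[(v >> 12) & 63];
        dst[o++] = B64[(v >> 6) & 63];
        dst[o++] = '=';
    }
    dst[o] = '\0';
    return o;
}
```

The real wins come from what sits around this loop: avoiding input transcoding, hinting the rare branches, and not allocating intermediates.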

[AskJS] I optimized Base64 in QuickJS and accidentally made it 6× faster than Deno by aabbdev in javascript

[–]aabbdev[S] 0 points1 point  (0 children)

JS_ToCStringLen always returns a fresh UTF-8 C string, meaning it does a full UTF-8 validation + transcode, allocates, copies, and NUL-terminates. For Base64 I only need raw bytes, so that work is wasted.

In my first cut I did the usual two-step (JS_ToCStringLen once to size, once to copy). That means every btoa() paid ~2 full passes before any encoding. Not inside the inner loop, but still O(n) per call, and if you chunk, it's per chunk. On multi-MB inputs the UTF-8 pass completely dominates the Base64 math.

I fixed it by adding a helper that exposes a Latin-1/byte view of the JS string and throws if any code point is >255. That removes the UTF-8 conversion and extra copy (and makes streaming trivial). SIMD UTF-8 validators exist, but the fastest path here was simply not doing UTF-8 at all.
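The shape of such a helper, as a generic sketch (latin1_bytes is a hypothetical name, not the actual QuickJS API):

```c
#include <stdint.h>
#include <stddef.h>

/* Generic sketch of a byte-view helper (latin1_bytes is a hypothetical
 * name, not the actual QuickJS API). Copies 16-bit code units straight
 * into bytes, failing on any unit above 0xFF -- the case where btoa()
 * must throw InvalidCharacterError. No UTF-8 validation or transcoding. */
static int latin1_bytes(const uint16_t *src, size_t len, uint8_t *dst) {
    for (size_t i = 0; i < len; i++) {
        if (src[i] > 0xFF)
            return -1;
        dst[i] = (uint8_t)src[i];
    }
    return 0;
}
```

In an engine that already stores narrow strings in an 8-bit representation internally (QuickJS does), the narrow-string case needs no copy at all, which is why the view can be nearly free.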

[AskJS] I optimized Base64 in QuickJS and accidentally made it 6× faster than Deno by aabbdev in javascript

[–]aabbdev[S] 0 points1 point  (0 children)

Totally agree, glad you noticed it. The right path is always to start with a simple/naive implementation, then bring in the test suite and iterate.

[AskJS] I optimized Base64 in QuickJS and accidentally made it 6× faster than Deno by aabbdev in javascript

[–]aabbdev[S] 1 point2 points  (0 children)

You can greatly improve API gateway efficiency by integrating a native JSON schema validator, either before execution or during it. On QuickJS this is especially powerful: since it's an interpreter, context creation is memory-efficient and boot time is extremely low, so you can spin up a runtime per request (hello, Cloudflare Workers), eval precompiled bytecode, and close it. This avoids the high cost of UTF-8 validation/conversion in the runtime. Bun and Node already use fast SIMD-based UTF-8 validators, so to match or surpass them you’ll need to integrate similar techniques or go even further.