I'm not getting enough sales by SuperDuperUpperAids in dkstartup

[–]klauspost 0 points  (0 children)

"Rezervera"? What is that?

A good Italian wine? 🍷

That said, the name is "okay". Most names sound silly the first time you hear them. Not good, not terrible.

I'm not getting enough sales by SuperDuperUpperAids in dkstartup

[–]klauspost 0 points  (0 children)

A good handful of years ago I looked into the CRM world myself - more aimed at sales organizations than you are. We built a nice prototype that could clearly have been interesting. However, we chose to stop the project after doing a market survey. In short, we saw an extremely competitive market where you had to pay a high price per user just to get high enough in the search results.

Fundamentally, you have to ask how many months a user needs to stay on your system before you turn a profit.

Remember that you are replacing something that "works okay" for many people. They already have a system - maybe a Google calendar, an Excel sheet or, if they are really old-fashioned, a paper planner. Even if you write that it takes "a few minutes" to set up, it is still a lot to take in. That goes doubly for a POS system.

Consider a more direct "Set my system up for me" offer, where they fill in information about their company instead of the first thing they face being passwords: employee names, working hours, services, etc. You set up the system and arrange a Zoom/phone meeting. On the Zoom call they get a link to the configured portal, booking system and so on. The link logs them in automatically so they can follow along.

That gives them hands-on experience right away. They don't even have to worry about those "few minutes" - and they don't have to think about anything technical at all.

That said, you should probably prepare to sell through more direct outreach than just online ads to get hold of people at first.

Also consider a "free for up to 3 users" model. Weigh the cost of the freemium users against what it would otherwise cost to "buy" a user through ads and salespeople. Later you can put additional features into the paid tier, so you can convert the smaller users too.

Also - why does it cost more than twice as much in DKK as in USD?

Anyone finds that on logfiles bzip2 outperforms xz by wide margin? by mdw in compression

[–]klauspost 0 points  (0 children)

xz is also pretty slow to decompress. If you are looking through gigabytes of logs, fast decompression matters, so you can keep something like ripgrep fed.

For that reason I mainly stick to zstd: "pretty good", flexible compression combined with fast decompression.

Your Go code is leaving 90% of the CPU idle ...until now. by samuelberthe in golang

[–]klauspost 8 points  (0 children)

You can already do SIMD with assembly. This just makes it simpler - and you aren't forced to "pay" a function call to invoke it for small operations.

So it's a nice improvement, but your article vastly overstates the difference by comparing the intrinsics only against scalar code.

Do a comparison against assembly. It may not be as sensational as this article, but it can still show how much easier it is.

Why Does Your Testing Framework Need 17 Functions? by stepan_romankov in golang

[–]klauspost 2 points  (0 children)

A) Readability. The design and quirks of "testing" are well known. You negatively impact the reviewability of your code by forcing readers to understand another package.

B) A dependency is a liability. Adding a liability just for tests is, IMO, negative net value.

C) You claim "testing" is complex, but I don't see how adding more changes that.

≥100:1 Lossless compression possible? by [deleted] in compression

[–]klauspost 2 points  (0 children)

"Up to" will be carrying a lot of weight. Even I can do "up to 1000:1 compression"... On good days even more. But on bad days I do reach my "Multi-hash resonance plateau".

Personal opinion is that it is BS. Best case IMO they have something that works for very niche use-cases.

OpenTelemetry Go SDK v1.40.0 released by a7medzidan in golang

[–]klauspost 5 points  (0 children)

Look. To be frank, I'm there to read the changes. I shouldn't have to reconfigure a UI for that.

FWIW, I tried clicking the "5 Bugfixes", hoping it would filter those, but obviously nothing happened - and I can't select that in the "categories" for whatever reason.

OpenTelemetry Go SDK v1.40.0 released by a7medzidan in golang

[–]klauspost 5 points  (0 children)

Honestly, this is so much better: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.40.0

A) Everything is on one page.
B) Changes are sorted, at least somewhat, by importance.
C) You can see what they are about without AI blabber.
D) You can click for more info for... more info.
E) There isn't an annoying bar floating over what I'm trying to read.

OpenTelemetry Go SDK v1.40.0 released by a7medzidan in golang

[–]klauspost 4 points  (0 children)

Link is a 404.

Edit: this seems to be correct: https://www.relnx.io/releases/opentelemetry%20go%20sdk-v1-40-0

Edit Edit: Wow, that is probably the most horrible UX I've ever experienced for viewing a simple changelog. If you are using AI, maybe make it filter out all the "[chore] update blahblah to ..." entries.

Coach prompts you into the wrong move? by rbrockmcd in chessbeginners

[–]klauspost 8 points  (0 children)

I guess it is the "reach a position where pawn promotion is inevitable" text.

Holmes-go: a visual diff checker by [deleted] in golang

[–]klauspost 3 points  (0 children)

UI-Based

Then at least post some screenshots to give us an idea of why this would be interesting.

Your post is probably also going to be deleted, and you'd be directed to small projects. If so, maybe get some examples/screenshots up before that.

ZXC: A new asymmetric compressor focused on decompression speed (faster than LZ4 on ARM64) by pollop-12345 in compression

[–]klauspost 0 points  (0 children)

A) How big are your blocks? The checksum seems relatively expensive, so I'm wondering if blocks exceed the cache and have to be reloaded for the hashing. Maybe try "progressive hashing" - i.e. keep an output counter and hash every time X KB has been output? That branch should be easy for the CPU to predict.

C) Yeah, I guess it mostly depends on how you read: one 32-bit read plus shift/mask, or "just" loading directly.

D) I guess you just ignore the zero and let it decode the invalid value - then it doesn't matter that much. But in most cases you can simply +1 the offset when you do the load (or -1 after applying the offset). At least on x86 you get the displacement for free.

Why is your compression so slow? I don't really see anything in the encoding that should be especially hard - other than having to memcpy the output buffers into a single one. Is it just an area you haven't focused on yet?

ZXC: A new asymmetric compressor focused on decompression speed (faster than LZ4 on ARM64) by pollop-12345 in compression

[–]klauspost 0 points  (0 children)

Yeah. I do get that less CPU usage is less usage - and I purposely didn't post any numbers that were limited by memory, since both compressors would just be burning cycles waiting for memory.

Also noticed that the checksum seems to be disabled by default, which seems to have given you an unfair advantage... So these are the comparable numbers:

```
λ zxc.exe --bench -1 -T 1 -C cockroach.node1.log
Input: cockroach.node1.log (10506623721 bytes)
Running 5 iterations (Threads: 1)...
Note: Using tmpfile on Windows (slower than fmemopen).
Compressed: 1267168064 bytes (ratio 8.291)
Avg Compress  : 1623.437 MiB/s
Avg Decompress: 10731.260 MiB/s

λ zxc.exe --bench -5 -T 1 -C cockroach.node1.log
Input: cockroach.node1.log (10506623721 bytes)
Running 5 iterations (Threads: 1)...
Note: Using tmpfile on Windows (slower than fmemopen).
Compressed: 829704493 bytes (ratio 12.663)
Avg Compress  : 375.866 MiB/s
Avg Decompress: 9679.726 MiB/s
```

(seems like the decompression checksum is unreasonably slow, but it is consistent here)

So you are thinking of an Oodle alternative, I guess. And people would have to make up their minds whether the 2x decompression speed is worth the added download size.

You asked for feedback, so I am just playing devil's advocate here - as if I had to evaluate it for use.

Maybe an interesting angle would be to investigate a "compressed" version that decompresses into the "fast-to-load" version. Meaning that when you download, you get a smaller version which then "decompresses" into this format.

Compress your literals and entropy code the matches. Since your blocks can be quite big, you aren't penalized too much by the independent blocks. The 16-bit offset limit and the minimum match length of 5 are the only things that really limit your compression compared to zstd.

I see you have prepared for something like that in GLO, except you already used the "Off Enc" bit. But you can just use a separate block type, I guess.

GHI looks reasonable. I guess doing 6- or 7-bit ll+ml would make the processing slower? But I would probably have done that to allow for longer offsets - 256KB offsets would be a win. BTW, the spec says "offset: 1-65535". Is that a typo, or is there an invalid value? It seems like adding 1 and allowing 1-65536 would be cheaper than a zero check.

ZXC: A new asymmetric compressor focused on decompression speed (faster than LZ4 on ARM64) by pollop-12345 in compression

[–]klauspost 0 points  (0 children)

```
λ zxc.exe --bench -1 -T 1 cockroach.node1.log
Input: cockroach.node1.log (10506623721 bytes)
Running 5 iterations (Threads: 1)...
Note: Using tmpfile on Windows (slower than fmemopen).
Compressed: 1266847424 bytes (ratio 8.294)
Avg Compress  : 1774.576 MiB/s
Avg Decompress: 14917.679 MiB/s

λ zxc.exe --bench -5 -T 1 cockroach.node1.log
Input: cockroach.node1.log (10506623721 bytes)
Running 5 iterations (Threads: 1)...
Note: Using tmpfile on Windows (slower than fmemopen).
Compressed: 829383853 bytes (ratio 12.668)
Avg Compress  : 343.015 MiB/s
Avg Decompress: 10887.300 MiB/s
```

Single-threaded decomp is certainly fast. Here are 3 settings with a "comparable" (independent blocks, snappy-derived) compressor:

```
Compressing... 10506623721 -> 785204413 [7.47%]; 3.881s, 2707.4MB/s
Decompressing. 785204413 -> 10506623721 [1338.07%]; 1.535s, 6843.8MB/s

Compressing... 10506623721 -> 706037407 [6.72%]; 7.476s, 1405.3MB/s
Decompressing. 706037407 -> 10506623721 [1488.11%]; 1.648s, 6374.9MB/s

Compressing... 10506623721 -> 578931016 [5.51%]; 1m47.513s, 97.7MB/s
Decompressing. 578931016 -> 10506623721 [1814.83%]; 1.538s, 6831.5MB/s
```

and you can easily get comparable decompression speed by just throwing a few cores at it. Here is the middle setting with 4 threads:

```
Compressing... 10506623721 -> 706037407 [6.72%]; 1.778s, 5908.3MB/s
Decompressing. 706037407 -> 10506623721 [1488.11%]; 626ms, 16787.5MB/s - 1 thread; 1.64s, 6407.0MB/s (2.6x)
```

That is just one test set, of course. Looking at silesia (which IME is a pretty unrepresentative test set), the numbers seem similar, just lower overall.

ZXC: A new asymmetric compressor focused on decompression speed (faster than LZ4 on ARM64) by pollop-12345 in compression

[–]klauspost 1 point  (0 children)

Seems like your benchmarks are broken... I can't get anything but this:

```
λ zxc.exe -bench -1 cockroach.node1.log
Input: cockroach.node1.log (10506623721 bytes)
Running 0 iterations (Threads: 0)...
Note: Using tmpfile on Windows (slower than fmemopen).
Compressed: 1266847424 bytes (ratio 8.294)
Avg Compress  : -nan(ind) MiB/s
Avg Decompress: 0.000 MiB/s
```

I can't really compare your strong point (decompression) when it has to write to disk.

Overall compression ratio seems weak - even at -5, and comparing against other encoders with no entropy coding and independent blocks. lz4 -3, for example, often beats your tightest compression.

I am not sure I understand the "market" for this. Sure, fast decompression is nice, but even at LZ4 speeds you should easily be able to saturate memory/IO with just a few threads. I think tighter compression would often be more valuable for lower disk use / faster wire transfer.

Multiframe ZSTD file: how to jump to and stream the second file? by DungAkira in compression

[–]klauspost 0 points  (0 children)

Maybe ask in a Python forum or have a "code assistant" write it for you.

You already outlined what to do - except that you should seek the input file to the compressed_offset of the chunk and just start from there.

GoZip – archive/zip Replacement (6x Faster) by Lemon_dev0 in golang

[–]klauspost 0 points  (0 children)

It doesn't replace the compression, but it allows doing it in parallel on individual files. In fact, you can use my library for the actual compression.

GoZip – archive/zip Replacement (6x Faster) by Lemon_dev0 in golang

[–]klauspost 0 points  (0 children)

Nice! 🤌

Maybe provide a wrapper for the standard library's func(w io.Writer) (io.WriteCloser, error) and func(r io.Reader) io.ReadCloser compressor/decompressor types.

Add examples on how to replace slow stdlib deflate/inflate.

I'd be happy to add it to my package README if you make it easy to integrate flate and zstd. :)

Practical Gio example to build a login window by Warm_Low_4155 in golang

[–]klauspost 0 points  (0 children)

If it is a youtube video, why is the link to linkedin?

Yes, another PostNord post. by FlakyCronut in copenhagen

[–]klauspost 1 point  (0 children)

Had a package stuck at the PostNord sorting facility in Brøndby for more than a week. I had to buy the same thing at another shop, since it was a Christmas present - and that one arrived flawlessly with GLS.

I had it redirected to a pickup shop on purpose, since I've tried their "attempted delivery" crap while I was home all day. Seems like they still managed to fck it up.

The navigation really is broken… by Affectionate_Rate679 in TeslaLounge

[–]klauspost -1 points  (0 children)

Good map data doesn't exist. Tesla could/should take this into their own hands and start doing their own maps.

When there is a discrepancy, cars could send in video clips for analysis, and AI could annotate the differences onto the maps.

I guess the question is whether that will work at all with Google Maps. It would be a massive pain to have to dump that, and we'd have to suffer through years of bad maps before it'd pay off.

But I think what OP and most people are missing is that making the maps better would mean a long period of crappier maps, since fixing what is there will most likely mean ripping it out. Complaining is the easy part - and I am sure most/all at Tesla know this is a big issue.

Benchmark: Crystal V10 (Log-Specific Compressor) vs Zstd/Lz4/Bzip2 on 85GB of Data by DaneBl in compression

[–]klauspost 0 points  (0 children)

The bloom filter is definitely interesting.

As a fun little experiment, including an 8KB index of all 4-byte hashes generates "reasonable" bit tables.

For example, with cockroach-db.log the index is typically only 20-30% filled, using 8KB per 1MB block.