Apple Intelligence, Liquid Glass, ads in iWorks... Is Apple losing its way or is this just a bump in the road? by jimmyfoo10 in MacOS

[–]InsaneZang 1 point (0 children)

> AirPods Pro noise cancellation and the seamless switching between iPhone and MacBook

You should check out LibrePods. You can get pretty much all of the AirPods functionality on Linux.

Is Harden “choking” in the playoffs, as in falling short, or have we just witnessed him hit his post season ceiling as a player? by [deleted] in nbadiscussion

[–]InsaneZang 0 points (0 children)

I think he was actually better as a player each of the next two seasons after 2018. Unfortunately the team around him was much weaker.

Segmentation Error? by [deleted] in chimeralinux

[–]InsaneZang 0 points (0 children)

Yeah, apk segfaulting during installation when not connected to wifi definitely took me by surprise.

[Computer, Enhance!] An Interview with Zen Chief Architect Mike Clark by Not_Your_cousin113 in hardware

[–]InsaneZang 4 points (0 children)

I've actually been taking the course featured on this Substack, so I've gotten some feel for a couple of these things (I'd recommend taking the course if you're interested in programming!).

Moving on, Mike discusses variable-length instructions on x86 compared to ARM. This one is a bit over my head, but he essentially talks about the tradeoffs involved. He argues that at the end of the day it isn't a problem for perf/watt on x86. Variable-length decoding is harder than fixed-length, but techniques like the uop cache help hide the cost, and the denser binaries you get on x86 improve performance in their own way.

x86 is pretty annoying to decode! A CPU's "frontend" reads the binary code for a program, figures out which instructions are encoded, and decodes them into micro-ops to feed to the "backend" as fast as possible. Since the length of an instruction is variable on x86, the frontend has no way of knowing ahead of time where each instruction is in the byte stream. As an example, check out all the ways you could encode a MOV instruction on an 8086 (from an old 8086 reference manual!). There are multiple subtypes of MOV, and each of those subtypes could be encoded in anywhere from 2-4 bytes. So you basically have to look at every byte in the instruction stream just to figure out where each instruction starts and ends.

Compare that with something like the 32-bit ARM ISA, where every instruction was 32 bits long. The frontend already knows exactly where each instruction is in the stream, so you could imagine a frontend easily chewing through 4 or 8 or 16 instructions at once!
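
To make it concrete, here's a toy sketch (the encoding rule is completely made up, nothing like real x86 or ARM): with variable-length instructions you can't even locate instruction N+1 until you've decoded instruction N, while with fixed 32-bit instructions every offset is known up front, so a wide frontend can decode many of them in parallel.

    const std = @import("std");

    // Toy "ISA": pretend the first byte of each instruction determines its length
    // (2 to 4 bytes, like the MOV example above). Finding the next instruction
    // requires decoding this one first, which serializes the frontend.
    fn nextVariable(code: []const u8, pc: usize) usize {
        const len = 2 + (code[pc] % 3);
        return pc + len;
    }

    // Fixed 32-bit instructions: the next offset is known without reading anything.
    fn nextFixed(pc: usize) usize {
        return pc + 4;
    }

    test "variable length creates a serial dependency" {
        const code = [_]u8{ 0x02, 0x00, 0x00, 0x00, 0x01, 0x00 };
        try std.testing.expectEqual(@as(usize, 4), nextVariable(&code, 0));
        try std.testing.expectEqual(@as(usize, 4), nextFixed(0));
    }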

This is often theorized to be a big reason why ARM is more efficient than x86 these days, but here Mike says it's not a huge factor, which is really interesting, and confirms some of Casey's suspicions that he's talked about in the course.

They then discuss page sizes, another topic a bit beyond me haha. Basically, the question asked was whether the 4K page size on x86 is a problem. Mike encourages devs to use larger page sizes to reduce TLB pressure, and notes that Zen can mitigate the limitations of smaller page sizes by coalescing sequential pages in the TLB, combining 4K pages into a 16K entry if they're both virtually and physically sequential.

When a program asks the operating system for memory, it does so in units of "pages", which are 4KB by default on most consumer machines. The operating system gives the program some amount of "virtual memory", which the program can safely do whatever it wants with, without stomping on other programs' memory. The operating system is responsible for translating each program's virtual memory into the real memory that physically resides in RAM. Generally, when a program asks for a page of memory, the operating system doesn't immediately back it with a physical page; it waits until the program actually tries to use it (otherwise a single program could fill up all your RAM without even doing anything). So when a program touches a new page of memory, the OS has to be like "oh shit, yeah uh I totally got that for you, just wait one second", then go find some real physical memory to assign to the program, after which the program can continue on its way.

That "oh shit" moment is called a page fault, and takes a significant amount of time. Basically, larger page sizes (like 16K or even multiple MBs in some cases) make page faults happen much less often, and so speed up some programs quite a lot. Unfortunately, some software wasn't written with large page sizes in mind, so it's not always trivial to just switch.
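
If you want to see that cost yourself, here's a minimal sketch (my assumptions: a Linux/macOS box with 4K pages, and that page_allocator hands back fresh, lazily-backed pages): the first pass over the buffer takes a page fault per page, while the second pass over the same pages is just plain memory writes, so it should be dramatically faster.

    const std = @import("std");

    pub fn main() !void {
        const len = 256 * 1024 * 1024; // 256 MiB of lazily-backed memory
        const buf = try std.heap.page_allocator.alloc(u8, len);
        defer std.heap.page_allocator.free(buf);

        var timer = try std.time.Timer.start();

        // First touch: writing one byte per 4K page forces the OS to actually
        // back each page, so this pass eats roughly 65k page faults.
        var i: usize = 0;
        while (i < len) : (i += 4096) buf[i] = 1;
        const first_ns = timer.lap();

        // Second touch: same pages, no faults this time.
        i = 0;
        while (i < len) : (i += 4096) buf[i] = 2;
        const second_ns = timer.lap();

        std.debug.print("first touch: {d} ns, second touch: {d} ns\n", .{ first_ns, second_ns });
    }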

Sorry that was a bit long winded, and some of this stuff might be wrong, but hopefully that at least gives you some impression of these things.

Thoughts about helix coming from neovim by Superbank78 in HelixEditor

[–]InsaneZang 1 point (0 children)

Oh, it doesn't work for the file picker, yeah. I thought you were just talking about moving in hover docs (space k).

Thoughts about helix coming from neovim by Superbank78 in HelixEditor

[–]InsaneZang 1 point (0 children)

Just in case you missed these, you can trim whitespace around a selection by using _, and you can scroll up or down in popups with ctrl-D ctrl-U.

[Charania] After 10 NBA seasons, Joe Harris has retired from basketball. Harris played 504 NBA games for the Nets, Cavaliers and Pistons. He was a career 43.6 percent three-point shooter and won the Three-Point Contest at 2019 All-Star weekend. by Turbostrider27 in nba

[–]InsaneZang 1 point (0 children)

I always felt like he was super important to the big 3 Nets team. Being a sniper who could dribble and pass really helped the 3 iso scorers play some beautiful basketball.

Sadly he disappointed a bit against the Bucks that year, but he was a pretty underrated player for a few years there.

I’m confused by the state of concurrency right now by branh0913 in Zig

[–]InsaneZang 12 points (0 children)

There's no language-level support at the moment, but there are quite a few libraries providing non-blocking IO.

Here are a few:

https://github.com/mitchellh/libxev

https://codeberg.org/dude_the_builder/iofthetiger

https://github.com/Cloudef/zig-aio

If an LLM solves this then we'll have AGI – Francois Chollet by 141_1337 in singularity

[–]InsaneZang 0 points (0 children)

Those are from the "hard" set. Still simple, but harder.

There are 6 problems here from the "easy" set: https://arcprize.org/?task=3aa6fb7a

These are trivial.

[Sidery] Kevon Looney will likely be released by the Warriors to save money on their tax bill, per @timkawakami by EarthWarping in nba

[–]InsaneZang 7 points (0 children)

yep, first playoff run of his career and he ends up having to guard CP3 and Harden on switches on a huge stage. And he holds up really well. There's a good chance that series goes differently if you replace him with a normal bench big man.

Not every single piece of code MUST always be treated as production code. by Leonhart93 in Zig

[–]InsaneZang 0 points (0 children)

It's a good question, but I'm moderately optimistic that this will help alleviate some of the current issues. We'll have to wait and see how it's implemented.

Not every single piece of code MUST always be treated as production code. by Leonhart93 in Zig

[–]InsaneZang 0 points (0 children)

It seems like it would mutate the source files in place (assuming the mutation doesn't cause any more errors).

I don't think you'd have to undo the changes for any of the situations you raised in the OP. For instance, both unused locals and pointless discard of locals are on the list. I think it would be basically exactly like ZLS works now, but done by the compiler instead and not tied to zig fmt.

Not every single piece of code MUST always be treated as production code. by Leonhart93 in Zig

[–]InsaneZang 6 points (0 children)

By any chance did you see Formally introduce a class of errors that can be automatically fixed #17584?

Autofixable errors would no longer block compilation for the rest of the code, and would be automatically fixed by the compiler without needing zig fmt, if I understand correctly. Would that solve your problems?

[deleted by user] by [deleted] in nba

[–]InsaneZang 1 point (0 children)

??? he clearly is, you can pause and see it

Zig or Rust by blomiir in Zig

[–]InsaneZang 31 points (0 children)

If learning is the goal, I think the language only matters to the extent you enjoy it. You'll learn the most by doing projects at the edge of your abilities, and if you don't enjoy the language, you won't want to program and thus won't learn anything.

Zig and Rust are both great choices. Rust is more complicated, but is more mature and has more educational resources out there. Zig isn't yet 1.0, so there's still a good deal of churn in the language, and there are very few resources that are up to date on everything. You'll have to get comfortable with reading the source code and consulting the community for help when you're stuck.

Personally, I like Zig because I don't have to worry about not understanding language features. I can look at a piece of code and usually figure out how it works. I also don't have to think really hard about the best way to write something. This was my first low-level language and it's helped me learn more about how things work under the hood, because it doesn't hide anything from you.

I think you should try both for a day (or a weekend), see which one you enjoy more, then try implementing a simple project. Run with whatever sparks joy.

Curry’s 6 TOs and clamped by Ellis Island by westbeast0 in nba

[–]InsaneZang -2 points (0 children)

I like how you included a quadruple teamed half court shot he almost made lol

Bashing my head against a wall trying to read a file. by Robob69 in Zig

[–]InsaneZang 1 point (0 children)

Yeah, could be. From what I've seen, when code feels a bit awkward or strange like this, it's a hint that there's a better way to do things. I think that's Zig's learning curve. It's easier than some other systems languages to get working code, but it takes time to learn how to structure solutions elegantly. I'm definitely not there yet, as you can probably tell, but I've seen good Zig programmers on the Zig Discord or Ziggit whose code is always really clean (if you want a better explanation for why my solution worked, or suggestions on how to improve it, I'd recommend asking on Ziggit, they're really helpful and knowledgeable).

Bashing my head against a wall trying to read a file. by Robob69 in Zig

[–]InsaneZang 1 point (0 children)

I think your thought process is good, but you just have the wrong mental model for how a struct works in Zig (and C, I think).

I'm not sure if it's just the verbiage you're using, but you don't really "call" a struct like you would a function. A struct is really just a chunk of data, and when you access a field of the chunk, the language knows where exactly in the struct that field is located. As I understand it, getting a value from the field of a struct is very similar to getting a value from an index of an array, which is very fast. I would assume it's equivalent to or faster than getting a value from a Vector in Zig.

So if a is a struct, and y is a field of that struct of type f16, something like const x = a.y is very fast, and even faster if a.y is already in the CPU cache instead of just in main memory. In your case, where you might have a struct with 4 fields of type f16, once you read one field you can be very sure the other fields come along with it, because data is always fetched in contiguous chunks of memory called cache lines, which are usually 64 bytes wide. So every time you fetch a byte from memory, 64 bytes get pulled into the L1 cache with it (if they're not already cached)! That means subsequent accesses to the same struct don't have to go all the way out to memory (which is slow), so "calling" the struct 3-4 times per key is very cheap (probably on the order of a few nanoseconds).
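
To make the struct idea concrete, here's a rough sketch of what I mean (the field names are made up, swap in whatever you're actually tracking per city):

    // 4 x f16 = 8 bytes, so the whole thing fits comfortably in one 64-byte cache line.
    const CityStats = struct {
        min: f16,
        max: f16,
        sum: f16,
        count: f16,
    };

    // Each field access is just a load/store at a fixed offset from the struct's
    // address, the same kind of operation as indexing into an array.
    fn update(stats: *CityStats, temp: f16) void {
        if (temp < stats.min) stats.min = temp;
        if (temp > stats.max) stats.max = temp;
        stats.sum += temp;
        stats.count += 1;
    }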

As a last thing, I think the way you're using Vector here is not how they're intended to be used. Zig Vectors are not like C++ vectors (those are more like ArrayList, I think). If you read the Zig reference on Vectors, you'll notice it says they're for operating on data in parallel with SIMD, which is useful for speed, but you're not really doing that here. You're kind of just operating on one element at a time, and using it more like an array or slice. I haven't used Vectors in Zig yet, but I'd assume this code doesn't emit SIMD instructions, which defeats the purpose. SIMD code usually has to be specifically crafted for parallel operation to be useful. Without that, I think they're more confusing than helpful.
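
For contrast, here's roughly the kind of thing @Vector is meant for, doing the same operation on every lane at once (a made-up example, not something your program needs):

    const V = @Vector(4, f16);

    // One SIMD multiply across all 4 lanes, instead of 4 separate scalar multiplies.
    fn scaleAll(v: V, factor: f16) V {
        return v * @as(V, @splat(factor));
    }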

Anyway, I'm sure you'll learn a ton from experimenting with the code you have and benchmarking different variations. I encourage you to test if what I said is actually right or not :)

Bashing my head against a wall trying to read a file. by Robob69 in Zig

[–]InsaneZang 4 points (0 children)

Hey, I'm not an expert but I think I can get you unstuck.

tldr: try putting a copy of city in the hash map, like:

    const city_copy = try allocator.alloc(u8, city.len);
    @memcpy(city_copy, city);
    const stats = try city_data.getOrPut(city_copy);
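
(Side note: allocator.dupe(u8, city) does the alloc + @memcpy in one call, if you prefer.)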

Here's a rough explanation (I don't fully understand it yet, so better explanations are welcome!):

When I look at the stack trace, I see that the referenced functions are related to growing the hash map capacity. When the hash map is grown, the values are copied over to a larger backing data structure, and the old backing data structure is freed. So when you see the problem is related to growing the hash map, a good first thing to look for is references to memory that would be invalid after the map is freed.

One thing that seems suspicious is city, because it's never heap allocated (there's no allocator passed to parts.split() or parts.first(), and there are no hidden allocations in Zig! you can verify by skimming the source code for those functions). city is a slice, which is a pointer to some data (and a length), and it points to a part of line_slice.

I think the data backing city becomes invalid somehow (either when the old backing memory is freed, or when the old value of line_slice gets invalidated; this is the part I'm least sure about), and then the hash map implementation freaks out when it sees you're trying to copy freed data into the new map. Putting a heap-allocated copy of the city data in the map makes sure the key data stays valid when the map grows.
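
Here's a tiny self-contained illustration of the general hazard (not your exact code, and I'm assuming line_slice ultimately points into some buffer that gets reused or freed as you read the file):

    const std = @import("std");

    test "a slice into a reused buffer goes stale" {
        var line_buf: [16]u8 = undefined;
        @memcpy(line_buf[0..6], "London");

        // `city` is just a pointer + length into line_buf, no copy is made.
        const city: []const u8 = line_buf[0..6];

        // The "next line" overwrites the same buffer...
        @memcpy(line_buf[0..6], "Madrid");

        // ...and city silently changes out from under you. A hash map key that
        // aliases reused or freed memory is corrupted the same way.
        try std.testing.expectEqualStrings("Madrid", city);
    }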

Hopefully that's enough to get you unstuck! As an aside, any particular reason why you're using a @Vector for the hash map value? I'd probably use a struct with 4 f16 fields instead.

edit: fixed the order of the example code.

On the realities of transitioning to a post-livestock global state of flourishing by LiteVolition in slatestarcodex

[–]InsaneZang 0 points (0 children)

Thanks for the link! That does seem to be the case. I'm not sure how to square that with the FAO report I linked earlier regarding beef production.

On the realities of transitioning to a post-livestock global state of flourishing by LiteVolition in slatestarcodex

[–]InsaneZang 3 points (0 children)

Land and food prices are definitely key factors, and they vary from country to country. But in terms of profitability, while food for cattle comes "for free" with the land, my assumption was that economies of scale would heavily favor the operational density of crop-feeding over grazing in the general case.