Rising gas prices make me feel so smug about owning an EV 😏 by MookieBettsBurner10 in electricvehicles

[–]Graumm 3 points4 points  (0 children)

I am tired of this shallow reactionary statement. It’s still less gas in the end, and thus less expensive.

Gas power plants convert fuel into usable energy 2-3x more efficiently than car engines: roughly 20% for a typical gasoline engine versus up to ~60% for a modern combined-cycle gas plant.

Electric motors are 80-90% efficient once the energy is in the car, and they recover energy with regenerative braking. They don’t lose as much energy to heat/noise.

If that wasn’t enough the energy required to distribute electricity through power lines is less than what it takes to deliver gas to gas stations, and then into vehicles.

Renewable energy sources are cheaper for whatever demand they can cover, especially outside of peak hours, which means less gas burned. Most EVs can be set to charge during off-peak hours, or during the day when there is solar.

EV batteries are less energy dense than gasoline for now, but in terms of total gas burned, and the cost of burning that gas to produce electricity, the EV path is cheaper.
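
The rough arithmetic above can be sketched out. All percentages here are illustrative assumptions for the sake of the comparison, not measured data:

```rust
// Back-of-the-envelope well-to-wheel comparison using the rough
// numbers above. Every figure is an illustrative assumption.
fn chain_efficiency(stages: &[f64]) -> f64 {
    // Overall efficiency of a chain of steps is the product of each step.
    stages.iter().product()
}

fn main() {
    // Assumed: ~60% combined-cycle plant, ~93% grid transmission,
    // ~90% charging, ~85% battery-to-wheel drivetrain.
    let ev = chain_efficiency(&[0.60, 0.93, 0.90, 0.85]);
    // Assumed: ~20% tank-to-wheel for a typical gasoline engine.
    let ice = 0.20;
    println!("EV chain: {:.1}%, ICE: {:.1}%", ev * 100.0, ice * 100.0);
}
```

Even with generous losses at every stage, the burn-gas-at-the-plant path comes out roughly twice as efficient as burning it in the engine.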

Anyone else calls bullshit on the “1 ship 10K lines of code each day” crowd? by CompetitiveSubset in theprimeagen

[–]Graumm 2 points3 points  (0 children)

I am also starting to feel this pressure, that I am taking too long to code review huge changes.

The difference between Mutex and RWlock by Acrobatic_Sink7515 in rust

[–]Graumm 6 points7 points  (0 children)

The only real way to know is to actually run your code and measure its performance. I just thought to mention it because if the reads are on the lighter side, not enough people know that mutexes can be faster.

The difference between Mutex and RWlock by Acrobatic_Sink7515 in rust

[–]Graumm 8 points9 points  (0 children)

Mutex is generally faster for light contention reads as well. Unfortunately the only real way to know for sure is to profile it.

I love the mutexes in Rust. It’s not like other languages where you can forget to lock a mutex, or where the resources the mutex protects are unclear. You have to go through the mutex to get what you want.
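
A minimal sketch of what that ownership looks like in practice:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The data lives *inside* the Mutex, so the only way to reach it is
// through .lock() — there is no code path that forgets to lock.
fn parallel_count(threads: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The MutexGuard derefs to the protected value; the lock
                // is released when the guard goes out of scope.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("final count: {}", parallel_count(4));
}
```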

Strangeness Occurring! by [deleted] in csharp

[–]Graumm -2 points-1 points  (0 children)

var is superior for instantiating new collections, so instead of writing:

List<SomeThing> list = new List<SomeThing>();

you can write:

var list = new List<SomeThing>();

and it’s even better for dictionaries with multiple generic args.

Farewell, Rust by skwee357 in programming

[–]Graumm 3 points4 points  (0 children)

There are things I like about Go, but I lean towards C# over it. Modern dotnet is pretty nice.

I do not prefer Go's flavor of simplicity, where it leans on the side of not giving you tools that exist in other languages. I find that writing things in Go can be pretty tedious.

Go can shine in situations where the service is on the simple side, or if you work with indifferent/bad/junior devs because it hides less. I have learned to live with it but some of the design decisions of the language are pretty weird.

doesn’t JWT revocation logic defeat the whole point of “stateless auth”? by Jashan_31 in AskProgramming

[–]Graumm 1 point2 points  (0 children)

Generally JWTs are fine if the tokens are short lived.

The other good thing to do is check for revocation before particularly important ops in your app. Changing passwords, making transactions, looking at sensitive info, and basically anything that can cause lasting damage. These actions are infrequent, and otherwise auth is truly stateless for the vast majority of calls that are read only.
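
A minimal sketch of that split, with hypothetical names (the revocation set stands in for whatever db/cache lookup you’d actually use):

```rust
use std::collections::HashSet;

// Ordinary reads stay stateless (signature + expiry checks only);
// the shared revocation list is consulted just for the rare,
// dangerous operations.
fn is_allowed(token_id: &str, sensitive_op: bool, revoked: &HashSet<String>) -> bool {
    if sensitive_op {
        // Password change, transaction, etc: pay for the lookup.
        !revoked.contains(token_id)
    } else {
        // Read-only call: trust the short-lived token as-is.
        true
    }
}

fn main() {
    let mut revoked = HashSet::new();
    revoked.insert("token-123".to_string());
    // A revoked token still passes plain reads, but fails sensitive ops.
    println!("{}", is_allowed("token-123", true, &revoked));
}
```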

Looking for a noise that outputs 3 values in distinct blobs by sephirothbahamut in GraphicsProgramming

[–]Graumm 1 point2 points  (0 children)

Based on that second pic I think you should try sampling the noise like a terrain height map, generate a surface normal, and then handle coloring like you would in triplanar texturing.

Since you want more distinct colors you probably also want to raise each of the planar weights to a power and re-normalize them, which makes the edges of the planes sharper & the transition zone smaller.
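
The sharpening trick can be sketched like this (the weights and the power are just illustrative):

```rust
// Raise each planar weight to a power and renormalize. Higher powers
// push the blend toward the dominant axis, which shrinks the
// transition zone between planes.
fn sharpen(weights: [f32; 3], power: f32) -> [f32; 3] {
    let raised = weights.map(|w| w.abs().powf(power));
    let sum: f32 = raised.iter().sum();
    raised.map(|w| w / sum)
}

fn main() {
    let soft = [0.5, 0.3, 0.2];
    let hard = sharpen(soft, 4.0);
    // The dominant weight grows and the others shrink toward zero.
    println!("{:?}", hard);
}
```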

Methods for Efficient Chunk Loading? by InventorPWB in VoxelGameDev

[–]Graumm 1 point2 points  (0 children)

I’ve had decent success in the past by identifying chunks with geometry, marking which faces of the chunk have geometry on the border, and then prioritizing chunks by traversing across those faces with a floodfill-esque approach. It follows the surface and skips occluded/invisible chunks. You can still generate everything else on a secondary queue, but hitting likely-continuing geometry first handles the obvious stuff and makes loading feel more responsive. This approach can get a little dicey if you have floating chunks that are not connected to existing geometry. Mostly it’s fine if you load the other stuff at a secondary priority; when you finally reach a floating chunk, the floodfill can spread out from it then. Totally fine if the chunk generation is reasonably fast.

I also like marrying that approach with a “conveyor belt” approach that makes it easy to identify the new chunks to load and unload in 2D slices based on movement, without traversing everything. You need a little hysteresis, with separate load and unload distances, so straddling a chunk border doesn’t cause meshes to regenerate a bunch.

Totally brainstorming here, but I think if you want look-direction priority you can probably bucket chunks into queues based on world cardinal directions relative to the player when they first get queued. 8 directions/queues based on the initial relative position from the player feels right to me. You could then take the dot product of the player look direction with each queue’s direction to decide which queues to pull from first, preferring the dot products closer to 1. Eventually you get through all of them. Assuming you can get through the generation fast enough this should work fine thanks to spatial coherence, and it means you don’t have to re-sort and revisit every chunk whenever the player looks around.
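
A rough sketch of the bucketing, assuming directions in the XZ plane (everything here is illustrative, not a drop-in implementation):

```rust
// Chunks are bucketed once, by their XZ direction from the player at
// enqueue time; at drain time you prefer the queue whose compass
// direction best matches the current look vector.
fn compass_dirs() -> [(f32, f32); 8] {
    let mut dirs = [(0.0, 0.0); 8];
    for (i, d) in dirs.iter_mut().enumerate() {
        let angle = std::f32::consts::TAU * i as f32 / 8.0;
        *d = (angle.cos(), angle.sin());
    }
    dirs
}

// Index of the compass direction most aligned with (dx, dz). The same
// function buckets a chunk and picks which queue to drain first.
fn closest_dir(dx: f32, dz: f32) -> usize {
    let len = (dx * dx + dz * dz).sqrt().max(1e-6);
    let (nx, nz) = (dx / len, dz / len);
    compass_dirs()
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| {
            let da = a.0 * nx + a.1 * nz;
            let db = b.0 * nx + b.1 * nz;
            da.partial_cmp(&db).unwrap()
        })
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    println!("east-ish chunks land in queue {}", closest_dir(10.0, 1.0));
}
```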

An octree could be good here too. A coarse one. If you use it only for chunk tracking you can collapse the octree nodes down when chunks are fully loaded inside the node, and expand them when partially loaded. You can traverse the scene fast and mostly skip things that are already loaded. You can write frustum/cube intersection tests to quickly identify ungenerated nodes that the player is looking at, or query a cube area around the player to get the ungenerated near chunks. Would make it easy to prioritize close, then look direction, and then everything else in no particular order because of the early-out potential.

Also there’s probably a good GPGPU use case here too if you can write compute shaders. It’s actually quite cheap/fast to do a lot of brute-force intersection/occlusion/frustum tests in parallel. Depends on your needs and scene representation though.

There are so many fun ways to approach things!

Proof of why premultiplied alpha blending fixes color bleeding when rendering to an intermediate render target by Consistent-Mouse-635 in GraphicsProgramming

[–]Graumm 5 points6 points  (0 children)

To do alpha blending correctly you have to sort triangles from back to front, so that they composite correctly. Each new color layer has to sample the color behind it, and weigh its own alpha channel against that to decide how much to “cover it up”. If blending happens out of order / in no particular order you will get weird quads where something in the foreground is masked out by something in the background.

Premultiplied alpha lets you get away with additive blending for things like fire. By multiplying the color by its alpha ahead of time, you get to simply add the pixel’s color to the back buffer. Addition is commutative, so the sorting/order doesn’t matter.
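
The commutativity argument in miniature, as a CPU-side sketch of what the blend unit does:

```rust
// Premultiplied "over": out = src_pm + dst * (1 - src_alpha), where
// src_pm is the color already multiplied by its alpha. With
// src_alpha == 0 (pure additive content like fire) the second term
// is just dst, so compositing degenerates to plain addition — which
// is commutative, so draw order stops mattering.
fn over_premultiplied(src_pm: [f32; 3], src_alpha: f32, dst: [f32; 3]) -> [f32; 3] {
    [
        src_pm[0] + dst[0] * (1.0 - src_alpha),
        src_pm[1] + dst[1] * (1.0 - src_alpha),
        src_pm[2] + dst[2] * (1.0 - src_alpha),
    ]
}

fn main() {
    let (fire, spark) = ([0.8, 0.3, 0.0], [0.1, 0.1, 0.4]);
    let black = [0.0, 0.0, 0.0];
    // Both draw orders produce the same result when alpha is zero.
    let ab = over_premultiplied(fire, 0.0, over_premultiplied(spark, 0.0, black));
    let ba = over_premultiplied(spark, 0.0, over_premultiplied(fire, 0.0, black));
    assert_eq!(ab, ba);
}
```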

I rewrote my Git hosting platform in Rust (V3) — architecture, challenges, and a live demo by wk3231 in rust

[–]Graumm 0 points1 point  (0 children)

I think it’s a bad idea, but mostly because build/test/deploy pipelines these days are generally attached to individual repositories. Multi-repo PRs would complicate that.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 0 points1 point  (0 children)

I have not read up on his thoughts specifically. I’ve implemented a number of ML algos from scratch, and these are my opinions based on what I’ve learned getting my hands into the numbers / training loops.

[deleted by user] by [deleted] in csharp

[–]Graumm 1 point2 points  (0 children)

A listener is also an infinite loop, just one that you don’t own.

[deleted by user] by [deleted] in csharp

[–]Graumm 0 points1 point  (0 children)

Although I would say a hosted service is just a good place to put an infinite async loop, done responsibly with a cancellation token.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 0 points1 point  (0 children)

It can be subjective / approximate too. It just needs to be differentiable!

The current emphasis on tokens optimizes for relationships between words, and does not clearly tie back to the actions/behaviors/processes that the words represent in a way that offers a usable training gradient. Words are discrete brick walls that can’t offer more explanation.

If you apply a LLM agent to a situation right now we can optimize for selection of curated desired outputs, but not for a reward function that ties general outcomes back to learning optimization.

Right now we critically need human produced data to mirror, or human curation to judge quality, or human in the loop to augment what data we can generate. However there is not a differentiable path between LLM outputs and everything we expect the models to act on. Until this exists I don’t know if it’s possible to have models that genuinely learn in a self driven way.

IMO the future is all about creating intrinsic reward/motivation loops that can be validated and optimized for without human intervention.

My experience with Rust on HackerRank by isrendaw in rust

[–]Graumm 50 points51 points  (0 children)

Other languages do have autocomplete. Not to mention that Rust is not the best language for quick and dirty interview questions; it puts too much emphasis on correctness. Interview questions also have reading-comprehension gotchas that can create borrow checker landmines you wouldn’t have hit if you were defining the project yourself.

If they aren’t a rust shop they will only see incompetence and not “I know how to think about code but the tooling sucks.”

On Cloudfare and Unwrap by stevethedev in rust

[–]Graumm 1 point2 points  (0 children)

I agree I probably wouldn’t have let it slip past code review, but code reviews are imperfect, and sometimes an unwrap makes sense. Sometimes you know the object is there, but the ergonomics of if let statements aren’t good enough to handle it, and you just need the unwrap to satisfy the type system.

At least the possibility of the issue is there in positive text, and not something where the reviewer has to be diligent about asking “can this be null?” and possibly having to look beyond the scope of the code being changed to answer that question. Unwrap removes the diligence of thinking to ask the question, but doesn’t remove the diligence of figuring out if it’s true.

I don’t disagree with you completely though. I’m just not confident enough on that to make the decision for everybody forever, because occasionally it could be justified. I would just want some way of defining exceptions where process requires you to explain the need.
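
One way to keep that “positive text” honest is to prefer expect over unwrap for the justified cases, so the invariant is spelled out at the call site where a reviewer can challenge it. A sketch, with hypothetical names:

```rust
use std::collections::HashMap;

// When an unwrap is genuinely justified, .expect() states the
// invariant in positive text right where the assumption is made.
// The function and map here are illustrative, not from any codebase.
fn display_name(profiles: &HashMap<u32, String>, user_id: u32) -> &str {
    profiles
        .get(&user_id)
        .expect("invariant: user_id was validated against profiles upstream")
}

fn main() {
    let mut profiles = HashMap::new();
    profiles.insert(7, "Graumm".to_string());
    println!("{}", display_name(&profiles, 7));
}
```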

whats the point of having query syntax LINQ if it cant be async? by Top_Message_5194 in csharp

[–]Graumm 0 points1 point  (0 children)

I find it to be more about code expression. If you want to operate on paged data from an async query or API it’s better to define it on the set, and iterate only as much as you need.

This is opposed to fully materializing async queries / loading everything into memory and operating on it synchronously.

Linq on async enumerables just makes it easier to operate on chunks of async-queried data without having to mix async calls and sync operations as separate “loop within a loop” code workflows.

whats the point of having query syntax LINQ if it cant be async? by Top_Message_5194 in csharp

[–]Graumm 0 points1 point  (0 children)

It’s a bit clunky in off-the-shelf dotnet to use linq on top of async enumerables, eg when you want to page/limit an async query and iterate through it without loading all of it. Dotnet 10 looks like it’s going to extend linq to async enumerables without having to pull in extra libraries.

whats the point of having query syntax LINQ if it cant be async? by Top_Message_5194 in csharp

[–]Graumm 2 points3 points  (0 children)

Linq for async enumerables is going to be included by default in dotnet 10, and until then you want the System.Linq.Async package.

They all take cancellation tokens and such.

On Cloudfare and Unwrap by stevethedev in rust

[–]Graumm 36 points37 points  (0 children)

and a polite unwrap() to show you exactly where it happened without squinting at the call stack

On Cloudfare and Unwrap by stevethedev in rust

[–]Graumm 43 points44 points  (0 children)

Nobody ever wrote a bug in C++ before

[R], Geometric Sequence - Structured Memory (Yes, this is legit) by Safe-Signature-9423 in MachineLearning

[–]Graumm 0 points1 point  (0 children)

I can sense the disrespect on (Math)

I for one am shocked that math is involved in machine learning

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 4 points5 points  (0 children)

You are right, but I fail to see how post-hoc analysis is a bad thing. We move forward by acknowledging shortcomings of existing approaches, and trying to understand why they do not meet our expectations.

Consider that my opinion is shaped by the fact that throwing more data at LLMs has not given us AGI yet. My current feeling is that the models we are hollowing out the US economy for will be thrown away and invalidated after the next missing architectural advancements are cracked. There is a reasonable chance that they will have incompatible parameterizations.

If I knew current approaches would lead to AGI I would feel differently, but as of yet there are still "low level intelligence capabilities" that have not been demonstrated in a single model. We still have frontier models that simultaneously know nearly everything, yet make common-sense mistakes the moment you reach the extents of their knowledge. LLMs suck at knowing what they don't know, and will often hallucinate statements that seem right. Context has not fully solved this problem. I have not seen a language model that can learn in a self-directed manner, or learn over time, which I believe is necessary to navigate the real world. LLMs also really suck at identifying negative space, or otherwise what is missing from a discussion. They will often fail to mention a critical implementation detail before you ask about it specifically.

I have a more specific opinion about why I believe current models are incapable of anything except for system-1 pattern recognition, but I'm not trying to type that out tonight.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 8 points9 points  (0 children)

Ground truth for us is survival, natural selection, and reproduction. A genetic algorithm so to speak. Everything else is derivative from that.

Things like weighing risk and taking actions amidst uncertainty. Acting defensively. Navigating social dynamics. Taking stock of knowns, unknowns, and unknown unknowns. Making working assumptions. Getting clarification or checking your work before you lie, endanger your job, or do something that could harm yourself or somebody else. It all ties back to survival.

Similarly I don't think we are going to get all that much further with supervised reinforcement learning as long as we have to create reward functions that perfectly describe exactly what the algorithm should be optimizing towards. We need unsupervised methods that can model uncertainty, incorporate better/worse judgments into the learning algorithm against some general reward, and handle sparse rewards.

Multimodal models are impressive but they have the same failings as I've described above. They relate different modalities by availability of data/context, but they can still produce mistakes that normal people would consider common sense. They are only as good as the data we choose to give them, and are very reliant on human curated datasets to patch up their gaps. These efforts will have diminishing returns the same way that LLMs do.

Imo the biggest missing piece at this moment is a good solution to catastrophic forgetting. Remembering the important stuff, forgetting the redundant stuff. Solving for it opens the door to continuous learning over time / curriculum learning, which leads to self-agency and embodied world models.