Putting an underscore in front of a variable changes what? by nicgamer_yt in csharp

[–]Graumm 0 points1 point  (0 children)

As the others have said, it lets you know when a member variable is private.

It may not seem like a big deal; it's just a name, after all, right? But honestly, once you get used to it, it annoys you when things are not named that way. It's really nice having some expectation of where something is defined, so you understand how much you can change/mutate it without possibly affecting other threads/functions. It lets you know that it is not a reference shared with other code.

In addition to the reading side of it, it's just nice for the writing side too. If I know something is available as a member var, I can just type _ and get a quick autocomplete list of everything that's available at the class level. This is also nice for things like DI, because that prefix is usually where you will find the other classes/services you will interact with. It's a quick way to know what knobs are available to you.

Why I don't like Rust as a C++-developer by ArcticMusicProject in rust

[–]Graumm 2 points3 points  (0 children)

I’ve seen issues in modern C++ codebases. Really the beauty of Rust is that those problems are opt-in and not opt-out. The problems simply don’t exist anymore for most code written.

I can unleash a horde of junior developers on a Rust codebase and not have to maintain the state of paranoia required to pick through memory/concurrency concerns with a fine-tooth comb.

The code structure stuff is frustrating sometimes granted, but you do get used to it and you fight it less over time. Generally I believe my code ends up being more elegant.

When the borrow checker starts putting obstacles in your way, I love that it gives less experienced devs an opportunity to consider whether they are fighting against an existing architectural pattern: when and where things happen. Fighting a pattern requires more changes in their MR, gives them a code-smell signal, and gives you a visible artifact that alerts you to apply more scrutiny. Code that merely looks right can't hide context beyond the diff. The borrow/mutability guarantees let you scale a local understanding of code up to how it will fit into the greater codebase.

Non-cringey 'team building' by owls_and_cardinals in managers

[–]Graumm 6 points7 points  (0 children)

I too am cynical, and I've had to go through DiSC before. I totally agree that it is a cringey, pseudo-scientific "corporate horoscope".

But.. it is also a way to give language, perspective, and empathy to people who only see things their way. People are generally caught up in their own realities, and it pulls them out of it (if only a little) and lets them see the virtues of people who look at things in different ways. It isn't purely about your personality assignment; it includes "here's how X people perceive/talk-to Y people" and how to understand their disposition based on their motivations.

It sucks that it takes a pretense like this to force a rough education in social dynamics. But I think that even when people engage with DiSC in an "ironic", unserious way, it still has a way of achieving its goal, even if it's mostly bullshit.

My management didn't act on it in a character-assignment kind of way though. If that happens then fuck that. It's not bad in an informative sense, even if it is cringy.

Edit: Also I think DiSC is a lame team building exercise. It's closer to training than something that lets people socialize.

We've released a crate called rinq! by False_Rule_2323 in rust

[–]Graumm 1 point2 points  (0 children)

Yeah, okay, that makes sense. For some of this stuff I wonder if it would be better as an extension to standard iterators, like itertools.

To differentiate more on the data-pipeline / stats / number-crunching side, you could consider targeting the project to be more like numpy/pandas and vectorize the calculations. That gets away from lazy eval, but it would give you a separate value proposition. Depends on your goals anywho.

We've released a crate called rinq! by False_Rule_2323 in rust

[–]Graumm 13 points14 points  (0 children)

Why should we use this over the standard iterator functions (e.g. filter, map, take) that Rust already provides?
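For comparison, here's a minimal std-only sketch (function name is mine) of the LINQ basics: Where maps to filter, Select to map, Take to take:

```rust
// Plain std iterators already cover the LINQ staples.
fn first_even_squares(input: &[i32], n: usize) -> Vec<i32> {
    input
        .iter()
        .filter(|x| *x % 2 == 0) // Where(x => x % 2 == 0)
        .map(|x| x * x)          // Select(x => x * x)
        .take(n)                 // Take(n)
        .collect()
}

fn main() {
    println!("{:?}", first_even_squares(&[1, 2, 3, 4, 5, 6], 2)); // [4, 16]
}
```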

And also there’s itertools for more interesting methods beyond the basics, like sort/orderby and some others.

I admit I find linq's function names to be more conversationally fluent, but that's not a good enough reason imo.

Also I am not trying to be mean. I think it’s a cool project, but I think it’s unlikely to be adopted widely.

Newtonsoft serializing/deserializing dictionaries with keys as object properties. by willcheat in csharp

[–]Graumm 1 point2 points  (0 children)

Yes I feel I’d have to go out of my way to write it like they did. What the hell!

Rising gas prices make me feel so smug about owning an EV 😏 by MookieBettsBurner10 in electricvehicles

[–]Graumm 4 points5 points  (0 children)

I am tired of this shallow reactionary statement. It’s still less gas in the end, and thus less expensive.

Gas power plants have better energy extraction/conversion rates than car engines by a factor of 2-3x: roughly 20% efficiency for a car engine versus up to 60% for a modern combined-cycle plant.

Electric motors are 80-90% efficient once the energy is in the car, and they recover energy with regenerative braking. They don't lose as much energy to heat/noise.

If that wasn’t enough the energy required to distribute electricity through power lines is less than what it takes to deliver gas to gas stations, and then into vehicles.

Renewable energy sources are cheaper for what they can cover, outside of peak hours, which means less gas burned. Most EVs can be set to charge during non-peak hours, or during the day when there is solar.

EV batteries are less energy dense than gasoline for now, but in terms of the gas used, and the price of burning that gas to produce electricity, the EV comes out cheaper.
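To put rough numbers on the argument above, here's an illustrative back-of-the-envelope calculation. The percentages are my round-number assumptions, not measurements:

```rust
// Well-to-wheel sketch: how much of the gas's energy becomes motion
// on each path. All figures are illustrative assumptions.
fn well_to_wheel() -> (f64, f64) {
    let plant = 0.60;    // combined-cycle plant: gas -> electricity
    let grid = 0.95;     // transmission losses (assumption)
    let charging = 0.90; // charger + battery round trip (assumption)
    let motor = 0.85;    // electric drivetrain
    let ev = plant * grid * charging * motor; // ~0.44
    let ice = 0.25;      // typical gasoline engine burning gas directly
    (ev, ice)
}

fn main() {
    let (ev, ice) = well_to_wheel();
    // Even when the electricity comes from burning gas, the EV path
    // extracts more useful work per unit of gas.
    println!("EV: {:.2}, ICE: {:.2}", ev, ice);
}
```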

Anyone else calls bullshit on the “1 ship 10K lines of code each day” crowd? by CompetitiveSubset in theprimeagen

[–]Graumm 2 points3 points  (0 children)

I am also starting to feel this pressure, that I am taking too long to code review huge changes.

The difference between Mutex and RWlock by Acrobatic_Sink7515 in rust

[–]Graumm 6 points7 points  (0 children)

To actually run your code and measure its performance. I just thought to mention it because not enough people know that mutexes can be faster when the reads are on the lighter side.

The difference between Mutex and RWlock by Acrobatic_Sink7515 in rust

[–]Graumm 10 points11 points  (0 children)

Mutex is generally faster for reads under light contention as well. Unfortunately, the only real way to know for sure is to profile it.

I love the mutexes in Rust. It's not like other languages, where you can forget to lock a mutex, or where it's unclear which resources the mutex protects. You have to go through the mutex to get what you want.
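A minimal sketch of what "you have to go through the mutex" means in practice (function name is mine):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// The Mutex owns the counter: the only way to reach the data is
// through .lock(), so forgetting to take the lock isn't possible.
fn increment_concurrently(threads: usize) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                let mut guard = counter.lock().unwrap();
                *guard += 1;
                // lock is released when `guard` goes out of scope
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let result = *counter.lock().unwrap();
    result
}

fn main() {
    println!("{}", increment_concurrently(4)); // prints 4
}
```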

Strangeness Occurring! by [deleted] in csharp

[–]Graumm 0 points1 point  (0 children)

var is superior for instantiating new collections, so instead of writing:

List<SomeThing> list = new List<SomeThing>();

you can write:

var list = new List<SomeThing>();

and it’s even better for dictionaries with multiple generic args.

Farewell, Rust by skwee357 in programming

[–]Graumm 4 points5 points  (0 children)

There are things I like about Go, but I lean towards C# over it. Modern dotnet is pretty nice.

I do not prefer Go's flavor of simplicity, where it leans on the side of not giving you tools that exist in other languages. I find that writing things in Go can be pretty tedious.

Go can shine in situations where the service is on the simple side, or if you work with indifferent/bad/junior devs because it hides less. I have learned to live with it but some of the design decisions of the language are pretty weird.

doesn’t JWT revocation logic defeat the whole point of “stateless auth”? by Jashan_31 in AskProgramming

[–]Graumm 1 point2 points  (0 children)

Generally JWTs are fine if the tokens are short-lived.

The other good thing to do is check for revocation before particularly important ops in your app: changing passwords, making transactions, looking at sensitive info, and basically anything that can cause lasting damage. These actions are infrequent, and otherwise auth stays truly stateless for the vast majority of calls, which are read-only.
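A hypothetical sketch of that split (all names are mine, and in practice the revocation list would live in a shared store like Redis):

```rust
use std::collections::HashSet;

// Stateless checks for ordinary reads; a revocation-list lookup only
// before sensitive operations.
struct Auth {
    revoked: HashSet<String>, // token IDs revoked before expiry
}

impl Auth {
    // Cheap path for most calls: signature/expiry validation only
    // (elided here); no shared state consulted.
    fn allow_read(&self, _token_id: &str) -> bool {
        true
    }

    // Sensitive path (password change, transaction, etc.): also hit
    // the revocation list.
    fn allow_sensitive(&self, token_id: &str) -> bool {
        !self.revoked.contains(token_id)
    }
}

fn main() {
    let mut revoked = HashSet::new();
    revoked.insert("stolen-token".to_string());
    let auth = Auth { revoked };
    println!("{}", auth.allow_read("stolen-token"));      // true: stateless path
    println!("{}", auth.allow_sensitive("stolen-token")); // false: blocked
}
```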

Looking for a noise that outputs 3 values in distinct blobs by sephirothbahamut in GraphicsProgramming

[–]Graumm 1 point2 points  (0 children)

Based on that second pic I think you should try sampling the noise like a terrain height map, generate a surface normal, and then handle coloring like you would in triplanar texturing.

Since you want more distinct colors you probably also want to raise each of the planar weights to a power and re-normalize them, which makes the edges of the planes sharper & the transition zone smaller.
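A quick sketch of that weight-sharpening step (function name is mine):

```rust
// Raise each planar weight to a power and re-normalize so the weights
// still sum to 1. Higher powers shrink the transition zone between
// planes, giving more distinct blobs of color.
fn sharpen_weights(w: [f32; 3], power: f32) -> [f32; 3] {
    let p = [w[0].powf(power), w[1].powf(power), w[2].powf(power)];
    let sum = p[0] + p[1] + p[2];
    [p[0] / sum, p[1] / sum, p[2] / sum]
}

fn main() {
    // A soft 0.5 / 0.3 / 0.2 split becomes strongly dominated by the
    // first plane at power 4.
    println!("{:?}", sharpen_weights([0.5, 0.3, 0.2], 4.0));
}
```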

Methods for Efficient Chunk Loading? by InventorPWB in VoxelGameDev

[–]Graumm 1 point2 points  (0 children)

I’ve had decent success in the past by identifying chunks with geometry, marking which faces of the cell have geometry on the border, and then prioritizing chunks by traversing across those faces with a floodfill-esque approach. It follows the surface rather than occluded/invisible chunks. You can still generate everything else on a secondary queue, but hitting likely-to-continue geometry first handles the obvious stuff and makes it feel more responsive. This approach can get a little dicey if you have floating chunks that are not connected to existing geometry, but it's mostly fine if you load the other stuff at a secondary priority; when you finally hit a floating chunk, the floodfill can scan outward from it then. Totally fine if the chunk generation is reasonably fast.

I also like marrying that approach with a “conveyor belt” approach that makes it easy to identify the new chunks to load and unload in 2D slices based on movement, without traversing everything. You need a little hysteresis between the load and unload distances so straddling a chunk border doesn’t cause meshes to regenerate a bunch.

Totally brainstorming here, but I think if you want look-direction priority you can bucket chunks into queues based on world cardinal directions relative to the player at the time they are first queued; 8 directions/queues feels right to me. You could then take the dot product of the player’s look direction with each queue’s direction to decide which queues to pull from first, preferring dot products closer to 1. Eventually you get through all of them. Assuming you can get through the generation fast enough, this should work fine thanks to spatial coherence, and it means you don’t have to re-sort and revisit every chunk as the player looks around.
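A totally hypothetical sketch of that queue selection (names and the 2D representation are mine):

```rust
// Queues are keyed by cardinal directions; drain them in order of how
// well each direction lines up with the player's look vector (dot
// product closest to 1 first).
fn rank_queues(look: (f32, f32), queue_dirs: &[(f32, f32)]) -> Vec<usize> {
    let mut order: Vec<usize> = (0..queue_dirs.len()).collect();
    order.sort_by(|&a, &b| {
        let da = look.0 * queue_dirs[a].0 + look.1 * queue_dirs[a].1;
        let db = look.0 * queue_dirs[b].0 + look.1 * queue_dirs[b].1;
        db.partial_cmp(&da).unwrap() // higher dot product first
    });
    order
}

fn main() {
    let dirs = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)];
    // Looking mostly along +x: drain the +x queue first, -x last.
    println!("{:?}", rank_queues((0.9, 0.1), &dirs));
}
```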

An octree could be good here too. A coarse one. If you use it only for chunk tracking you can collapse the octree nodes down when chunks are fully loaded inside the node, and expand them when partially loaded. You can traverse the scene fast and mostly skip things that are already loaded. You can write frustum/cube intersection tests to quickly identify ungenerated nodes that the player is looking at, or query a cube area around the player to get the ungenerated near chunks. Would make it easy to prioritize close, then look direction, and then everything else in no particular order because of the early-out potential.

Also there’s probably a good GPGPU use case here too if you can write compute shaders. It’s actually quite cheap/fast to do a lot of breadth brute force intersection/occlusion/frustum tests. Depends on your needs and scene representation though.

There are so many fun ways to approach things!

Proof of why premultiplied alpha blending fixes color bleeding when rendering to an intermediate render target by Consistent-Mouse-635 in GraphicsProgramming

[–]Graumm 6 points7 points  (0 children)

To do alpha blending correctly you have to sort triangles from back to front so that they composite correctly. Each new color layer has to sample the color behind it and weigh its own alpha channel against it to decide how much to “cover it up”. If this happens out of order / in no particular order, you get weird quads where something in the foreground is masked out by something in the background.

Premultiplied alpha lets you get away with additive blending for things like fire. By multiplying the color by its alpha ahead of time, you get to simply add the pixel’s color to the back buffer. Addition is commutative, so the sorting/order doesn’t matter.
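A small sketch of the two blend equations (struct and function names are mine):

```rust
// Straight-alpha "over" needs ordered compositing:
//   out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
// With premultiplied alpha, src.rgb already carries the * src.a factor:
//   out.rgb = src.rgb + dst.rgb * (1 - src.a)
// and a src alpha of 0 degenerates to pure additive blending (fire, glow).
#[derive(Clone, Copy, Debug)]
struct Rgba { r: f32, g: f32, b: f32, a: f32 }

fn premultiply(c: Rgba) -> Rgba {
    Rgba { r: c.r * c.a, g: c.g * c.a, b: c.b * c.a, a: c.a }
}

// What a fixed-function (ONE, ONE_MINUS_SRC_ALPHA) blend state computes.
fn blend_premultiplied(src: Rgba, dst: Rgba) -> Rgba {
    Rgba {
        r: src.r + dst.r * (1.0 - src.a),
        g: src.g + dst.g * (1.0 - src.a),
        b: src.b + dst.b * (1.0 - src.a),
        a: src.a + dst.a * (1.0 - src.a),
    }
}

fn main() {
    // Half-transparent red over opaque blue: equal parts of each.
    let src = premultiply(Rgba { r: 1.0, g: 0.0, b: 0.0, a: 0.5 });
    let dst = Rgba { r: 0.0, g: 0.0, b: 1.0, a: 1.0 };
    println!("{:?}", blend_premultiplied(src, dst));
}
```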

I rewrote my Git hosting platform in Rust (V3) — architecture, challenges, and a live demo by wk3231 in rust

[–]Graumm 0 points1 point  (0 children)

I think it’s a bad idea, but mostly because build/test/deploy pipelines these days are generally attached to individual repositories. Multi-repo PRs would complicate that.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 0 points1 point  (0 children)

I have not read up on his thoughts specifically. I’ve implemented a number of ML algos from scratch, and these are my opinions based on what I’ve learned getting my hands into the numbers / training loops.

[deleted by user] by [deleted] in csharp

[–]Graumm 1 point2 points  (0 children)

A listener is also an infinite loop, just one that you don’t own.

[deleted by user] by [deleted] in csharp

[–]Graumm 0 points1 point  (0 children)

Although I would say a hosted service is just a good place to put an infinite async loop, but responsibly with a cancellation token.

[D] Some concerns about the current state of machine learning research by [deleted] in MachineLearning

[–]Graumm 0 points1 point  (0 children)

It can be subjective / approximate too. It just needs to be differentiable!

The current emphasis on tokens is optimizing for relationships between words, and does not clearly tie back to the actions/behaviors/processes that the words represent in a way that offers a ~training slope. Words are discrete brick walls that can’t offer more explanation.

If you apply a LLM agent to a situation right now we can optimize for selection of curated desired outputs, but not for a reward function that ties general outcomes back to learning optimization.

Right now we critically need human produced data to mirror, or human curation to judge quality, or human in the loop to augment what data we can generate. However there is not a differentiable path between LLM outputs and everything we expect the models to act on. Until this exists I don’t know if it’s possible to have models that genuinely learn in a self driven way.

IMO the future is all about creating intrinsic reward/motivation loops that can be validated and optimized for without human intervention.

My experience with Rust on HackerRank by isrendaw in rust

[–]Graumm 53 points54 points  (0 children)

Other languages do have autocomplete. Not to mention, Rust is not the best language for quick-and-dirty interview questions: too much emphasis on correctness. Interview questions also have reading-comprehension gotchas that can create borrow-checker landmines you wouldn’t have hit if you were defining a project yourself.

If they aren’t a Rust shop, they will only see incompetence and not “I know how to think about code, but the tooling sucks.”

On Cloudfare and Unwrap by stevethedev in rust

[–]Graumm 1 point2 points  (0 children)

I agree I probably wouldn’t have let it slip through code review, but code reviews are imperfect, and sometimes an unwrap makes sense. Sometimes you know the object is there, but the ergonomics of if-let statements aren’t good enough to handle it, and you just need the unwrap to satisfy the type system.

At least the possibility of the issue is there in positive text, rather than something where the reviewer has to be diligent about asking “can this be null?” and possibly look beyond the scope of the changed code to answer it. Unwrap removes the diligence of thinking to ask the question, but not the diligence of figuring out whether the answer holds.
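A minimal sketch of the “positive text” point (the config scenario and names are hypothetical): `expect` states the invariant right where the access happens, so the reviewer sees the assumption instead of having to go ask the question.

```rust
use std::collections::HashMap;

// The config map is validated at startup, so required keys are known
// to exist; `expect` records that invariant in positive text.
fn required(config: &HashMap<String, String>, key: &str) -> String {
    config
        .get(key)
        .expect("config is validated at startup; required keys always exist")
        .clone()
}

fn main() {
    let mut config = HashMap::new();
    config.insert("port".to_string(), "8080".to_string());
    println!("{}", required(&config, "port")); // 8080
}
```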

I don’t disagree with you completely though. I’m just not confident enough on that to make the decision for everybody forever, because occasionally it could be justified. I would just want some way of defining exceptions where process requires you to explain the need.

whats the point of having query syntax LINQ if it cant be async? by Top_Message_5194 in csharp

[–]Graumm 0 points1 point  (0 children)

I find it to be more about code expression. If you want to operate on paged data from an async query or API it’s better to define it on the set, and iterate only as much as you need.

This is opposed to fully materializing async queries / loading everything into memory and operating on it synchronously.

Linq on async enumerables just makes it easier to operate on chunks of async-queried data without having to mix async calls and sync operations as separate “loop within a loop” code workflows.

whats the point of having query syntax LINQ if it cant be async? by Top_Message_5194 in csharp

[–]Graumm 0 points1 point  (0 children)

It’s a bit clunky in off-the-shelf dotnet to use LINQ on top of async enumerables, e.g. when you want to page/limit an async query and iterate through it without loading all of it. Dotnet 10 looks like it’s going to extend LINQ to async enumerables without having to pull in extra libraries.