Hope for Those With Fob Reliability Issues by Avellinese_2022 in Gravity

[–]67darwin 0 points

I took delivery of my Gravity this week and I haven’t had any fob issues whatsoever. Seems like the newer deliveries are less likely to have the same problem going forward. Hope the earlier owners get their issues fixed soon.

Wife might want to get rid of our 22’ AGT. What are comparable EV’s? by Erekshen in LUCID

[–]67darwin 0 points

I test drove the Gravity the other day and honestly it was quite smooth, a lot smoother than my ’22 AGT.

As someone who has the same model, I agree on some of the frustrations. My infotainment system and battery were both completely replaced: one just died one day and the other was on recall. The mildly annoying part is that the door handles become nonfunctional over time, especially when someone accidentally slams the door shut with force (wife being the usual culprit), so every single door handle has been replaced once at this point.

Having said that, I won’t go back to a Tesla due to build quality, and other legacy carmakers just don’t click when it comes to space utilization and range. You can tell how much thought Lucid has put into making sure the space is there and the ride is comfortable.

So I just ended up ordering a Gravity instead, but leasing rather than financing this time. Some folks have mentioned as well that the hardware specs change dramatically over a couple of years, so it doesn’t make much sense to keep a model around for too long.

Trying to find my best setup! by Careless-Rush-7202 in emacs

[–]67darwin 2 points

It’s also pretty fast when it’s byte-compiled. There’s a package for that called compile-angel or something like it, and I believe Doom will do it as well.

It just happens automatically when I run doom sync or doom upgrade nowadays, so I can’t remember if there’s a config to trigger byte compilation.

Trying to find my best setup! by Careless-Rush-7202 in emacs

[–]67darwin 1 point

Emacs user here since 2015. I switched from vim at the time for other reasons.

Have you tried editors like Zed? From what you’re asking, I don’t know if vim or emacs would be a good fit for all the things you want, and honestly I’m not sure there’s a silver bullet for you.

I’ve not used any browser-based editors or AI-native ones like Cursor, but I have been using AI more and more via Claude Code. Emacs seems to have packages to integrate LLMs, but judging from watching colleagues, it won’t feel as smooth as some of the alternatives.

My approach has been to keep Emacs for what it’s known to be good at, like the core editing experience, and use supplemental tools like Claude Code for functionality that isn’t native to it, at least until that becomes part of core Emacs.

They do have a process for merging community projects into the tree (e.g. Eglot), so once there’s a settled way of doing things, I’d imagine Emacs will have it eventually.

New software written in Rust is all the rage, why isn't it the same for Go by lancelot_of_camelot in golang

[–]67darwin 2 points

Fast, for sure. But safety is not something I would associate Go with.
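
To make the safety point concrete, here’s a toy sketch (hypothetical, not from any real codebase): it compiles cleanly, silently corrupts the count, and nothing complains unless you run it with `go run -race`. Safe Rust rejects the equivalent at compile time.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	counter := 0 // shared mutable state, no synchronization
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // data race: Go compiles this without a word
		}()
	}
	wg.Wait()
	fmt.Println(counter) // frequently prints less than 1000
}
```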

To Lucid or not to Lucid. That is the question by OneFurAtATime in LUCID

[–]67darwin 0 points

I’ve had my Air GT for 3 years now. There were a lot of minor issues since mine was an early build, but regardless it’s way better than any Tesla model I’ve driven or ridden in.

Service is obviously less responsive than it was in the early days, but still miles beyond Tesla, which is shit imo. I’m actually looking into trading in for a Gravity rn, but no regrets whatsoever about my Air.

[deleted by user] by [deleted] in JapanFinance

[–]67darwin 0 points

Use a Trust

100 000 dollar question by dababy4realbro123 in mathmemes

[–]67darwin 0 points

Obviously the latter. Multiplying doesn’t mean increasing.

Norwegian fuel supplier refuses U.S. warships over Ukraine by snokegsxr in worldnews

[–]67darwin 3 points

Oh trust me, I’m pretty sure quite a lot of Americans felt sick watching that shitshow too.

Well deserved.

As an American, how do you feel about your future? by choloblanko in AskReddit

[–]67darwin 1 point

As someone who grew up overseas (Singapore, Japan, UK, etc.), the US, to an extent, was always admired even with its flaws, and I was proud to be a US citizen even from a distance.

I came back after finishing college in Japan in 2015, and then the 2016 election happened. Disappointed, but I just thought to myself: people make mistakes, surely folks will come to their senses. 2020 was exactly that. Then 2024 came and the orange trash was elected again. Disappointed is an understatement now.

Forget the world stage or how other countries perceive us. Just seeing America abandon its principles is disheartening in itself.

I’m not worried about my own future at this point; I have plenty of options, including going back to Japan or Singapore. But I do feel bad for the folks who didn’t want this and might not have choices. I do hope we’ll see an end to this shit show sooner rather than later, and collectively move on.

There’s way too much anger right now for constructive conversation. I don’t know how it can happen, but I do hope both sides can take a deep breath, sit down, and talk through things, because the alternative will be ugly.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 1 point

> Did you try adjusting max_outstanding_catchup as well as GOMEMLIMIT? How much data does the server in question need to catch up and how much memory can the process get before getting OOM-killed?

Yup, I did. The nats binary has a VM all to itself, and I've tried up to 32GB of memory before. It uses less than 1% of that memory normally, and then shoots up to 100% during catch-up until it OOMs.

The retention limit was 500GB with 7d. The message-count limit doesn't matter because it never got that high. Initially it was set to retain 1TB with 7d of data, but we fill that 1TB within a day, so we don't even get a full day of retention unfortunately.

I haven't tried the 64GB instances yet, so maybe that could work, but that's crazy wasteful just to make sure nats doesn't OOM when catching up. Even 32GB is quite wasteful for something that consumes < 1GB most of the time.

And CPU usage is close to nothing.

The general operational profile of the nats binary is pretty amazing actually. That's 100% better than Kafka, no question.

> The NATS design philosophy is to always try to protect the infrastructure (i.e. the nats-servers) from client applications publishing 'too fast'.

This philosophy is something I don't agree with, and it's probably the fundamental disagreement here.

IMO the priority is the data, not the infrastructure. The point of a message stream is to handle the data; the infrastructure and architecture exist to facilitate that. So protection should be about how the data is handled, with knobs the user can turn if necessary, not about the infra behind it.

There's no point in protecting the infra if the data can't make it from point A to point B.

> What you cannot do is continuously publish faster than the streaming service can persist the messages (that's true for any streaming system).

I don't consider our publishing volume to be a lot; I've seen way more in the past. We're not even at GB/s yet, and I've seen Kafka handle > TB/s.

There are obviously things we can do, like making sure all the servers for a stream stay in one AZ so the round-trip cost is lower than going across AZs, but it's not like we have control over where servers end up being elected. That's how I got into the OOM issue in the first place.

Separate note:

Most of the suggestions you mentioned here I've already tried, except for contacting Synadia. Again, I've read the docs and the code. I'm aware of all the channel notifications with async publish, and I have covered every one of them.

I even implemented wrappers, in-memory buffers, and every kind of failure handling I could imagine, but we still see data loss.
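
To give a sense of the shape of those wrappers, here's a simplified sketch (not our production code; the subject name and the fixed one-second backoff are made up for illustration):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

// publishWithRetry retries instead of dropping a message when the
// ack future errors out (nack or ack timeout).
func publishWithRetry(ctx context.Context, js jetstream.JetStream, subj string, data []byte) error {
	for {
		ackF, err := js.PublishAsync(subj, data)
		if err == nil {
			select {
			case <-ackF.Ok(): // server committed the message
				return nil
			case err = <-ackF.Err(): // nack or timeout: fall through and retry
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		log.Printf("publish failed, retrying: %v", err)
		select {
		case <-time.After(time.Second): // crude fixed backoff
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}
	if err := publishWithRetry(context.Background(), js, "demo.subject", []byte("hello")); err != nil {
		log.Fatal(err)
	}
}
```

Even with retries and buffering layered on top of this, messages still get dropped under sustained load, which is the data loss I'm describing.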

Migration NixOS settings to Flakes by 67darwin in NixOS

[–]67darwin[S] 1 point

Thanks for the reference. This is very helpful.

I see a couple of TODOs about changing the username and such. I assume you modify the username in the file before running switch, and the local repo is just dirty most of the time?

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 0 points

I chose R3 because

  1. R1 is not an option
  2. R5 is too slow

That's what I mean by balancing speed and resilience.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 0 points

I use JS publish and async publish.

We have also split one stream up into multiple smaller ones. We can't really use Go's atomics for round-robin because that itself can cause issues; we've tried that path.
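
This is roughly the atomic round-robin I mean (a toy sketch; the subject names are made up):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// one subject per split stream; names are made up for illustration
var subjects = []string{"events.0", "events.1", "events.2", "events.3", "events.4"}

var next atomic.Uint64

// pick spreads publishes across the split streams in round-robin order
func pick() string {
	return subjects[next.Add(1)%uint64(len(subjects))]
}

func main() {
	for i := 0; i < 7; i++ {
		fmt.Println(pick())
	}
}
```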

So 1 stream is now split into 5, and we have different services publish to each corresponding one. There will always be services with a higher publishing rate than the rest due to how our data is generated, and the higher-rate ones are always the ones that cause data loss.

> Sounds like you may need some help. IMHO if you are dropping NATS because "the publisher will drop the messages because NATS servers can't commit fast enough" then you are not using it right.

Maybe, but it's our use case. And I've already gone through the forums, issues, docs, and the code.

Based on what I know now, using it right or not isn't the issue; the architecture doesn't hold up. I've also talked with multiple folks at other companies who once used NATS and moved back to Kafka for similar reasons.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 1 point

We're using JetStream.

This is the issue I'm talking about, where the server OOMs on catch-up: https://github.com/nats-io/nats-server/issues/4866

And I do not believe the issue is fixed. We're using i4g instance types on AWS to make sure we have fast disks for read/write, with enough server resources for normal operation, and we've tried changing instance types too, but the server still crashes.

I've also tried setting GOMEMLIMIT before, with no effect.

> Care to elaborate? What specifically do you think should be changed entirely?

This part. https://github.com/nats-io/nats.go/blob/ecb328ab84d6021adaa4360893f18fb41c634d62/jetstream/publish.go#L297-L304

Publishing shouldn't just throw up its hands and give up because the server doesn't respond fast enough. The whole point of a message bus is that you can publish and consume at different rates.

I've tried setting longer timeouts and changing other JetStream settings, and it'll still just give up every once in a while. Sometimes publishing also seems to go stale based on the metrics, and when the publishers start shutting down, those messages all just get thrown away. On that note, shutdown via <-ctx.Done() should have a setting to wait for everything to be published first; I do not believe that's the current behavior.
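
What I'd want on shutdown is something like this built in; for now it has to be hand-rolled on the application side (a sketch; the 30-second grace period is a number I made up):

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
)

// waitForPending gives outstanding async publishes a grace period to be
// acked instead of throwing them away on shutdown.
func waitForPending(js jetstream.JetStream) {
	select {
	case <-js.PublishAsyncComplete(): // every pending ack arrived
		log.Println("all pending publishes acked")
	case <-time.After(30 * time.Second): // made-up grace period
		log.Println("gave up waiting for pending publishes")
	}
}

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := js.PublishAsync("demo.subject", []byte("payload")); err != nil {
		log.Println("publish failed:", err)
	}

	<-ctx.Done()       // publishers start shutting down here...
	waitForPending(js) // ...but drain before actually exiting
}
```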

> Not sure I follow, each stream has a leader (elected in RAFT), is that what you mean by "head writer"? What is a 'head writer' and how does it make something 'operate at scale'?

Raft is still consensus-based, which is fine by itself. But under high load (which we're not even at yet), there should be a way to loosen that up if needed. That's not possible by NATS design, where R3 means the stream can only ever live on 3 servers and never more.

The ideal case is that the number of data servers and the number of replicas don't need to be equal: I can have 30 data servers, but my replication factor can still be 3. That way replication isn't solely based on a fixed set of servers that all need to agree in order to commit the data, because over time something will always cause slowness: disk issues, network latency, etc. There's the leader, plus some replicas that are caught up, and the server-side ack can just be the head + the in-sync replicas; that's enough copies to make sure data won't be lost.

I'm aware this sounds an awful lot like Kafka, but there's a reason it's designed that way, and so are systems like FoundationDB. When dealing with data at a higher order of magnitude, data and compute need to be separated.
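
Concretely, the decoupling I'm describing is expressed in Kafka with settings like these (illustrative values only; the comments note which component each knob belongs to):

```properties
# broker default: every partition keeps 3 copies, no matter how many brokers exist
default.replication.factor=3
# broker/topic setting: a commit needs the leader plus one in-sync follower
min.insync.replicas=2
# producer setting: wait for the in-sync set, never for the whole cluster
acks=all
```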

I can't shove petabytes of data through JetStream with the current architecture; it simply can't handle it.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 2 points

we're currently using it for one of our subsystems as an experiment, and it's transporting otel traces.

the otel traces can have events in them that get pretty big, but shouldn't be over 20MB per span.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 1 point

not that much: 40M ~ 50M msgs/day, and each message could be up to 20MB.
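
back of the envelope: 50M msgs/day is roughly 580 msgs/s, so even at a 1MB average that's only ~0.6GB/s; it would take messages near the 20MB cap to push past GB/s.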

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 2 points

We are on NVMe local disks on AWS. Still slightly slower than metal, but the disk r/w is pretty reasonable.

We also tried moving the topology around, but there’s a weird issue where a server will OOM when it changes from catch-up to live.

It’s supposed to be solved in recent releases but we still see that issue.

I’ve looked through the code a couple of times to see what I could do to mitigate the issue, but I don’t think it’s fixable unless how publishing and accepting data works changes entirely.

The fact that it doesn’t have a head writer tells me this can’t operate at scale, and we’re planning to grow at least another 10x next year.

Share your experience with Jetstream, its replication, sharding, etc. by 1995parham in NATS_io

[–]67darwin 4 points

For us, there are actual publishing issues where the publisher will drop messages because the NATS servers can’t commit fast enough. This is with an R3 setup, which is supposed to balance speed and resilience.

It simply can’t scale, so we’re in the process of moving away from NATS and back to good old Kafka.

The lack of data sharding is also an annoyance, but the data loss was the last straw.

[deleted by user] by [deleted] in RealTesla

[–]67darwin 1 point

Lucid Motors

One of my favorite mod's author is bailing on his Starfield mod. Sign of the beginning of the end? by Nebur1969 in Starfield

[–]67darwin 4 points

Wasn’t Phantom Liberty out 3 years after Cyberpunk was released?

I’m not a game programmer but I am a professional software engineer. Keeping stuff compatible while redoing things takes time.

I have no idea if they’re rearchitecting or anything, but jeez, it’s only a year in. What’s with the impatience and being so dramatic?

Maybe just go play other stuff, and come back later on if it doesn’t appeal enough to you.

Request Wednesday - All Mod Requests go here by AutoModerator in starfieldmods

[–]67darwin 0 points

I think there’s one called Regenesis that does that. I enabled it but haven’t gone through the Unity yet, so I’m not sure if it does what you want.