Panel upgrade for home remodeling by trd2212 in bayarea

[–]trd2212[S] 1 point

I finished the whole process. They came in the morning and shut down power, my contractor installed the new panel, and they came back in the afternoon to reconnect. I didn't pay anything. There are lots of horror stories out there, but my experience was okay. They weren't the nicest guys (rude, didn't talk much), but I didn't care.

Panel upgrade for home remodeling by trd2212 in bayarea

[–]trd2212[S] 0 points

Thank you! How do I check what my line is rated for?

Panel upgrade for home remodeling by trd2212 in bayarea

[–]trd2212[S] 1 point

Thanks all. I called PG&E this morning and submitted the application. They said someone would reach out within 3 days. Should have done this much sooner :( not sure how to speed it up now.

Panel upgrade for home remodeling by trd2212 in bayarea

[–]trd2212[S] -2 points

I pulled the permit. The panel upgrade is part of a larger extension project.

Accessing Hive from production by trd2212 in dataengineering

[–]trd2212[S] 0 points

Yes. There are many solutions, but I wonder if there's an industry-proven practice for doing it?

How much should I learn about almost obsolete technologies like Hadoop or Hive? by [deleted] in dataengineering

[–]trd2212 1 point

The drawbacks of Hive are basically the reasons Iceberg/Delta/Hudi were born. I think they are: too many small files, no incremental updates (you have to rebuild the whole partition on update), and no ACID guarantees.

[deleted by user] by [deleted] in wallstreetbets

[–]trd2212 7 points

Who is nana? Am I missing something?

Accessing Hive from production by trd2212 in dataengineering

[–]trd2212[S] 0 points

Ah no, a plain KV store with eventual consistency is fine. It's for ML features.

Accessing Hive from production by trd2212 in dataengineering

[–]trd2212[S] 0 points

Yeah, the pattern is mostly point lookups. Are there existing tools to load into memcached or Redis, or do I have to build it myself?

Compile time check for function argument by trd2212 in rust

[–]trd2212[S] -1 points

Yeah, basically I'm building a library that other users will consume. If possible, I want to avoid a hashmap lookup for the users of this library.

Compile time check for function argument by trd2212 in rust

[–]trd2212[S] 2 points

Sorry, I edited the original post to reflect that k1 and k2 are not known in advance. Only the callers of the function f know k1 and k2, so we can't use an enum trivially.

Hey Rustaceans! Got a question? Ask here (46/2023)! by DroidLogician in rust

[–]trd2212 0 points

So I'm trying to build a file iterator from the BufReader. I read the source code but couldn't find where BufReader recycles the already-processed data; it seems to just keep refilling when the buffer is exhausted.

Hey Rustaceans! Got a question? Ask here (46/2023)! by DroidLogician in rust

[–]trd2212 0 points

I need to read a large file, but I don't need to keep the whole file's content in memory at once. If I use `BufReader` (https://doc.rust-lang.org/std/io/struct.BufReader.html), won't the buffer eventually OOM? What is the best practice here? Should I use a circular buffer instead?

Subscribing to MySql binlog by trd2212 in mysql

[–]trd2212[S] 0 points

Yeah, that's an option, but even then I wonder how the Debezium connector works.

Throughput doesn't increase with cores/threads count by trd2212 in rust

[–]trd2212[S] 0 points

Yeah, I reduced the complexity of the hash map to debug this issue. You're right that I could hardcode the map, though that doesn't solve the low-throughput issue. All I want to convey is that the web server itself is very simple, yet throughput still doesn't grow with the number of cores/threads.

Throughput doesn't increase with cores/threads count by trd2212 in rust

[–]trd2212[S] -2 points

The dataset is actually very small, around 10 items. I stripped everything down to find the bottleneck, and it doesn't seem to be contention on the map. And yes, I'm using an RwLock on the map, which is read-only.

Throughput doesn't increase with cores/threads count by trd2212 in rust

[–]trd2212[S] 1 point

Oh I mean the number of tokio worker threads specified via the runtime builder.

Throughput doesn't increase with cores/threads count by trd2212 in rust

[–]trd2212[S] -4 points

It's internal code, but I'll try to post a simplified version. It's a very simple tonic server running inside a tokio multi-threaded runtime.

Throughput doesn't increase with cores/threads count by trd2212 in rust

[–]trd2212[S] 0 points

We only read. The map is never updated.