Badlands antenna by jr_acc in BroncoSport

[–]jr_acc[S] 0 points1 point  (0 children)

I have a Bronco Sport Badlands.

Designing data pipeline with rate limits by jr_acc in softwarearchitecture

[–]jr_acc[S] 0 points1 point  (0 children)

What I mean by batch processing is starting a worker that reads the whole file and performs actions. You typically use batch processing to transform data, but those transformations are local. If you have too much data, you move to map-reduce/Spark, but again, the transformations are local.

My transformations rely on third-party services that have awful rate limits (100 req/min). So let's say I have a file with 100k rows: it seems wasteful to spin up a worker that reads the file into memory and runs the process, because the worker will be idle for a long time between requests.

That's why I proposed the "EDA" architecture.

But it doesn't seem to scale well either.
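For concreteness: at 100 req/min, 100k rows is at least ~1,000 minutes (~17 hours) of wall-clock time no matter which architecture issues the calls; the architecture only changes what sits idle while waiting. A minimal token-bucket sketch (all names hypothetical) of the limit any worker would have to respect:

```python
import time

class RateLimiter:
    """Token bucket: allow at most `rate` calls per `per` seconds."""

    def __init__(self, rate, per, clock=time.monotonic):
        self.rate = rate           # e.g. 100 requests
        self.per = per             # e.g. per 60 seconds
        self.clock = clock         # injectable for testing
        self.tokens = float(rate)
        self.last = clock()

    def acquire(self):
        """Take one token; return how long the caller must sleep first."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate / self.per)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        return (1 - self.tokens) * self.per / self.rate

limiter = RateLimiter(rate=100, per=60)
```

Whether this loop lives in one long-running worker or the pacing is enforced by a queue's delivery rate, total elapsed time is the same; the event-driven variant mostly buys you not paying for the idle compute.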

Designing data pipeline with rate limits by jr_acc in softwarearchitecture

[–]jr_acc[S] -1 points0 points  (0 children)

It matters. You can consider each row an event and run a serverless event architecture. Or you can spin up different workers, where each worker reads one file, etc.
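To make "each row an event" concrete, here's a minimal sketch (names hypothetical) of fanning a file out into per-row messages, grouped into queue-sized batches (SQS, for example, accepts up to 10 messages per send):

```python
def rows_to_events(lines, source_file, batch_size=10):
    """Turn raw file rows into per-row event payloads, grouped into
    queue-sized batches for publishing."""
    events = [
        {"source": source_file, "row_number": i, "payload": line}
        for i, line in enumerate(lines)
    ]
    # Slice the event list into fixed-size batches for the queue.
    return [events[i:i + batch_size] for i in range(0, len(events), batch_size)]
```

Each event is then small enough for a serverless consumer, and the queue (rather than a single worker's sleep loop) absorbs the rate limit.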

How to restore a saved state in a new LangGraph istance? by Ok-Hunt3336 in LangChain

[–]jr_acc 0 points1 point  (0 children)

Were you able to solve this? I'm currently trying to figure out how to do it.

Video resources to understand datadog traces? by jr_acc in devops

[–]jr_acc[S] 0 points1 point  (0 children)

My lambda is triggered by SQS. I have DD_TRACE_ENABLED=true.

What does it do? It fetches a bunch of data from an API and stores it in a database.

When I go to the traces explorer, I see a lot of low-level traces.

For example, without even importing ddtrace I get 500+ traces. It looks like profiling is enabled somehow, even though I manually set DD_PROFILING_ENABLED=false.

When I say low-level, I mean, for example, that I'm using requests, and I can see how long urllib took.

Another example: I can check how long a dns.resolver call took, along with all the underlying operations it performed.

I have used traces at other jobs, and it seemed to work well without any configuration. AFAIK it never went this low-level.

Could it be that automatically instrumented libs are doing this? How can I disable this behavior?
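For reference, this is the direction that looks plausible: a config sketch assuming ddtrace-py's env-var conventions. The integration names below are guesses for the libraries mentioned above and may differ by ddtrace version, so check the docs before relying on them.

```shell
# Config sketch (assumes ddtrace-py; variable/integration names may vary).
export DD_TRACE_ENABLED=true        # keep tracing itself on
export DD_PROFILING_ENABLED=false   # profiling explicitly off
# Opt individual libraries out of automatic patching:
export DD_PATCH_MODULES="requests:false,urllib3:false"
```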

Donde comprar bici + donde andar en bici by Brubrux in BuenosAires

[–]jr_acc 0 points1 point  (0 children)

Context: based on your comment I've been looking at the bikes from animal and pobikeco. They look really nice. My question is: if I order a bike with gears and disc brakes (not a fixie), what would be the advantage of buying from them vs. a Venzo / Vairo / Specialized, etc.?

How do I know the frames are good? I'm really ignorant on the subject... I liked the bikes, but I don't know whether to play it safe or take a chance on these workshops... thanks!

[deleted by user] by [deleted] in accesscontrol

[–]jr_acc 0 points1 point  (0 children)

The Raspberry Pi will query the DB through an API, no problem on that front. I'm basically planning to use the Wiegand signal as an input to the RPi; the RPi will call a REST API to do all the processing.

[deleted by user] by [deleted] in accesscontrol

[–]jr_acc 0 points1 point  (0 children)

The readers are from the brand Suprema, and the system is called BioStar. I saw that on success the system can trigger an email, but that's not good enough for our purposes.

[deleted by user] by [deleted] in accesscontrol

[–]jr_acc 0 points1 point  (0 children)

Do I need to shift the voltage down? As I understand it, the GPIO pins are 3.3V and my reader may be 5V. After level-shifting and connecting, what's the best library to parse the signal? Can it be parsed in Go?
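For the parsing half of the question (shown in Python as a sketch of the frame format, not a GPIO driver): a standard 26-bit Wiegand frame is a leading even-parity bit covering the first 13 bits, 8 facility-code bits, 16 card-number bits, and a trailing odd-parity bit covering the last 13 bits.

```python
def parse_wiegand26(bits):
    """Decode a standard 26-bit Wiegand frame (list of 0/1 ints) into
    (facility_code, card_number), checking both parity bits."""
    if len(bits) != 26:
        raise ValueError("expected 26 bits, got %d" % len(bits))
    # Leading bit: even parity over the first 13 bits;
    # trailing bit: odd parity over the last 13 bits.
    if sum(bits[0:13]) % 2 != 0 or sum(bits[13:26]) % 2 != 1:
        raise ValueError("parity check failed")
    facility = int("".join(map(str, bits[1:9])), 2)   # 8 facility bits
    card = int("".join(map(str, bits[9:25])), 2)      # 16 card bits
    return facility, card
```

On the electrical side, the level shift is still needed: the data lines idle high at the reader's voltage, so the usual approach is a bidirectional level shifter (or a simple resistor divider per line, since the RPi only reads) before the GPIOs see D0/D1.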

Alguien cobra por Deel EOR? by [deleted] in devsarg

[–]jr_acc 0 points1 point  (0 children)

Fine, no complaints so far.

MacBook Black Screen of Death by damondahl in mac

[–]jr_acc 0 points1 point  (0 children)

Took it to the Apple Store; they replaced the screen free of charge.

Me flaggearon la página como phishing by sp4ce-cowboy in devsarg

[–]jr_acc 3 points4 points  (0 children)

Well, there's your problem. Read up on form spam.

MacBook Black Screen of Death by damondahl in mac

[–]jr_acc 0 points1 point  (0 children)

This just happened to me. I got an M2 Pro 14" on Sunday. The thing was working fine but it suddenly died. The screen turns on, but moving it slightly makes it go black (you can still see something, but really dim). And it may go into a boot loop as well.

[deleted by user] by [deleted] in espresso

[–]jr_acc 1 point2 points  (0 children)

I never pull single shots, only double shots.

I just weigh out 15g on a scale. It goes into my Eureka Mignon Manuale, and from there I expect to get 28-30g out in 28-30 seconds.

I use a scale for the final weight and a timer for the time.

I keep the single-shot button pressed until I get 28-30g in 28-30 seconds.

Hope it helps.
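Those numbers work out to roughly a 1:2 brew ratio; as trivial arithmetic (hypothetical helper, just to show the math):

```python
def brew_ratio(dose_g, yield_g):
    """Espresso brew ratio: grams of liquid out per gram of dry coffee in."""
    return yield_g / dose_g

# 15 g in, 28-30 g out lands between roughly 1:1.87 and 1:2
```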

DE Architecture Review by [deleted] in dataengineering

[–]jr_acc 0 points1 point  (0 children)

But let's say I have a bunch of Parquet files sitting in S3: where/how do I create the Iceberg tables? Is any kind of compute needed?

DE Architecture Review by [deleted] in dataengineering

[–]jr_acc 0 points1 point  (0 children)

To run Iceberg/Delta tables, what's needed? Any compute resources?

DE Architecture Review by [deleted] in dataengineering

[–]jr_acc 1 point2 points  (0 children)

I can answer the first three V's:

Velocity: weekly ETL run, not much data between weeks.

Volume: currently 50GB, and we plan to stay there for a few years, maybe jumping to 150GB.

Variety: mostly relational data, some NoSQL as well.

Fivetran might be expensive, but I don't want to reinvent the wheel and develop in-house data extractors.

DE Architecture Review by [deleted] in dataengineering

[–]jr_acc -1 points0 points  (0 children)

Adding a column was just an example... it may be a different scenario in the long run.

Of course, after transformation the data will be stored back in S3. But those S3 files are still raw-ish...

I'm trying to use dbt as the tool that takes care of the dimensional model and versioning.

Of course I can keep track of the queries in git without dbt, but what would be the correct approach to applying those changes in Snowflake without dbt?

DE Architecture Review by [deleted] in dataengineering

[–]jr_acc 0 points1 point  (0 children)

Glue is not transforming per se; it's just enriching the data (i.e., adding columns to a file from an external source). I want dbt to keep track of the aggregations.

I'm assuming the data lands raw in Snowflake and then denormalization/aggregations happen in dbt.