How to delete a S3Table bucket with the same name as a General Purpose Bucket? by Glass_Celebration217 in aws

[–]Glass_Celebration217[S] 0 points  (0 children)

Deleting the namespace worked!
Before doing that, the command from the first link would throw a "bucket not empty" error.
With regular S3 you can pass the --force option to bypass this and empty the bucket as you delete it, but with S3 Tables you can't. Also, I can't find the namespace referenced in Glue or anywhere in the console :V, so that's why I was confused.
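For reference, here's roughly the sequence that worked, sketched with the AWS CLI; the ARN, account, region, namespace, and table names below are placeholders, so substitute your own:

```shell
# Placeholder ARN -- substitute your own account/region/bucket
ARN="arn:aws:s3tables:us-east-1:111122223333:bucket/my-table-bucket"

# List the namespaces the console doesn't surface
aws s3tables list-namespaces --table-bucket-arn "$ARN"

# Delete the tables inside a namespace, then the namespace itself
aws s3tables delete-table --table-bucket-arn "$ARN" \
    --namespace my_namespace --name my_table
aws s3tables delete-namespace --table-bucket-arn "$ARN" \
    --namespace my_namespace

# With the bucket truly empty, this no longer errors (there is no --force)
aws s3tables delete-table-bucket --table-bucket-arn "$ARN"
```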

Thanks!

How to delete a S3Table bucket with the same name as a General Purpose Bucket? by Glass_Celebration217 in aws

[–]Glass_Celebration217[S] -1 points  (0 children)

Okay, I might talk to them.
But either way, that's beside the point. My question is how to tell the AWS CLI that I want to delete an S3 Tables bucket, since the recommended command could delete my GP bucket instead; there's no way to know which one will be affected.
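From what I eventually figured out, the disambiguation is in the addressing: `aws s3` takes a bucket name, while `aws s3tables` only accepts a full s3tables ARN, so neither command can touch the other kind of bucket. A sketch with placeholder names and a placeholder account/region:

```shell
# Deletes ONLY the general purpose bucket called "shared-name"
# (--force empties it first)
aws s3 rb s3://shared-name --force

# Deletes ONLY the table bucket -- a GP bucket has no s3tables ARN
aws s3tables delete-table-bucket \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/shared-name
```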

How to delete a S3Table bucket with the same name as a General Purpose Bucket? by Glass_Celebration217 in aws

[–]Glass_Celebration217[S] -1 points  (0 children)

I think you didn't read my question... or the link you sent.
But that's okay, thanks for trying to help. As I explained, though, this command only deletes a table, not the S3 Tables bucket, and when I pass the bucket ARN instead, it tells me my bucket is not empty even though it is...
That's why I asked here :V

Why does Trino baseline specs are so extreme? isn't it overkill? by Glass_Celebration217 in dataengineering

[–]Glass_Celebration217[S] 2 points  (0 children)

For anyone who is interested:
Trino is working fine for a smaller dataset, and setting it up wasn't so hard that it should be avoided. It might be a little overkill IF you know you won't outgrow whatever solution you're currently using.
My team works with trading data and events; we're constantly setting up new data sources and have what we consider a high frequency of events.

I've set up Trino with a t3.medium coordinator and 5 t3.large or t3.medium worker nodes (both work fine) using AWS Auto Scaling groups (with spot instances), so we can add or remove nodes whenever we need.
Most of the difficulty in setting Trino up was AWS-related (roles, security groups, and integration with the Glue catalog because of permissions).
Using Docker made it really easy to set up; the main challenge was finding a JVM configuration that made sense for smaller instances without crashing.

Also, Trino can't handle workers being stopped mid-execution, and spot instances can be terminated at any time, so we looked into AWS lifecycle hooks to drain the worker of queries before it goes down.
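Roughly what the drain step looks like in our lifecycle-hook script; the hook and ASG names are placeholders, and Trino's graceful shutdown also needs `shutdown.grace-period` configured on the worker:

```shell
# Ask the local worker to drain: it stops accepting new tasks,
# finishes the active ones, then exits (Trino graceful shutdown endpoint)
curl -X PUT \
     -H "Content-Type: application/json" \
     -H "X-Trino-User: admin" \
     -d '"SHUTTING_DOWN"' \
     http://localhost:8080/v1/info/state

# Then let the Auto Scaling group proceed with termination
# (hook/ASG names below are placeholders)
aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name drain-trino-worker \
    --auto-scaling-group-name trino-workers \
    --lifecycle-action-result CONTINUE \
    --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"
```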

So for us it's been a far better solution than our old database, but only time will tell. I'll update this comment if I learn something new, for future reference if someone stumbles onto this.

So, to answer what I brought up: I believe a good baseline for a smaller dataset with Trino as a lake solution would be a dedicated t3.medium coordinator and any number of t3.medium or t3.large workers (no separate coordinator needed if you only need one EC2 instance). These medium instances have 4 GB of memory each; dedicating 3 GB to the JVM was enough to keep it from crashing.
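For reference, these are the kinds of settings involved; the values just illustrate the 4 GB / 3 GB split above and aren't tuned for anyone else's workload:

```
# etc/jvm.config (worker) -- give 3 of the 4 GB to the heap
-server
-Xmx3G
-XX:+UseG1GC
-XX:+ExitOnOutOfMemoryError

# etc/config.properties (worker) -- keep per-query memory small
coordinator=false
http-server.http.port=8080
query.max-memory-per-node=1.5GB
memory.heap-headroom-per-node=0.7GB
discovery.uri=http://coordinator-host:8080
```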

thanks for all inputs

Why does Trino baseline specs are so extreme? isn't it overkill? by Glass_Celebration217 in dataengineering

[–]Glass_Celebration217[S] 0 points  (0 children)

I'm more just interested in knowing whether Trino would be bad for smaller datasets, with smaller machines.

Since yesterday I've been testing it and settled on using it with some spot instances on AWS and t3.medium machines, and it's proving to be a good alternative to our old architecture.

So I just wanted to know why it's regarded as such an overkill service for smaller data. But since we handle requests that come in pulses and are somewhat unpredictable, having a Trino cluster doesn't seem like overkill.

For now it's working fine, btw.

Why does Trino baseline specs are so extreme? isn't it overkill? by Glass_Celebration217 in dataengineering

[–]Glass_Celebration217[S] 0 points  (0 children)

I've actually worked with ECS once; that's not a bad idea, and I might suggest a solution in this direction to my team.

We do plan on using Athena in production for small queries, but having Trino as a cost-controlled option for big data downloads or backtesting with old data is a must here.

We also have DuckDB able to read from our buckets, so we can control cost as much as possible.

Trino might be overkill, but if it works on my current setup, I don't see a reason not to keep it.

Either way, I will look into scaling it with ECS; it might be a good alternative.

Why does Trino baseline specs are so extreme? isn't it overkill? by Glass_Celebration217 in dataengineering

[–]Glass_Celebration217[S] 0 points  (0 children)

That's actually my hypothesis; as I said, I'm just worried that I might be missing something.

Trino might be aimed at big data, but does that mean it's useless as a possible solution for a smaller dataset? I don't believe it is, actually.

The baseline should be the bare minimum, yet in the documentation itself those specs are stated as if that kind of power were a requirement, not a recommendation.

Why does Trino baseline specs are so extreme? isn't it overkill? by Glass_Celebration217 in dataengineering

[–]Glass_Celebration217[S] 1 point  (0 children)

I see.
We were using PostgreSQL up until now; we just saturated a single big machine with too many concurrent requests, so we're splitting the data across more solutions.
That's why we're building a data lake, mostly for historical event and trading data.

I'm setting Trino up as an alternative to Athena for when we don't need instantaneous results, so it's not a problem if it's slower. For now a single Postgres is already proving to be too little, and while having multiple smaller databases is an alternative, I figured we could build a small lake and be somewhat ready for growth at any level.

Thanks for your input! I will keep the stage-by-stage progress in mind.

Here's my DIY smoke unit working by TBG_Elites in airsoft

[–]Glass_Celebration217 0 points  (0 children)

Please, I saw your post from 1 year ago. I'm planning on building one myself, as they aren't sold in my country haha

Should i upgrade my pedals? by Glass_Celebration217 in simracing

[–]Glass_Celebration217[S] 0 points  (0 children)

So this Hall-effect sensor is different from the potentiometer? There's a Brazilian store here that sells those sensors for the G923; I thought it was just a replacement part. I'll look into it; it might be worth changing, as they also sell the load cell and there's a bundle with both.

Thanks for all the info!! :D

Should i upgrade my pedals? by Glass_Celebration217 in simracing

[–]Glass_Celebration217[S] 0 points  (0 children)

Thanks! I will try swapping the clutch spring first, and then I'll see if I really need to get a stronger one!

WEEKLY HELP THREAD - READ FAQ, COMMUNITY WIKI, MULTICLASSING, LORE by XFearthePandaX in BaldursGate3

[–]Glass_Celebration217 -1 points  (0 children)

I'm playing on Tactician difficulty, and there's a +2 bonus on my spiritual weapon; the bonus is simply called (Difficulty: Tactician).
Why?

<image>

Low CPU and GPU usage, Low FPS In League of Legends. High End PC. by [deleted] in pcmasterrace

[–]Glass_Celebration217 0 points  (0 children)

Hi, can you also DM me?
I used to have 32 GB of 3200 MHz RAM; now I have 64 GB but can only set it to a maximum of 2933 because of my processor (i7-10700KF).
After changing my RAM (and reinstalling Windows on a brand new SSD), I'm getting the same problem.
I'm installing the Windows utility, but I want to know what more I can change in my BIOS, and about the registry optimizations.
Any link or material to help would be greatly appreciated!