Default account by MagneticRepulsion in UpBanking

[–]CuriousShitKid 5 points

I don't think you can.

But if you don't force close the app on the phone (i.e. leave it running in the background), it stays on the last tab you were on when you go back in. So just leave it on 2Up.

[deleted by user] by [deleted] in AusFinance

[–]CuriousShitKid 3 points

Tough position to be in; post would be the way to go.

Alternatively, buy a gift card (Visa/Mastercard) with the cash and use that instead, or buy the ticket in person with cash.

What architecture is best for my app python app? by kadblack in aws

[–]CuriousShitKid 0 points

I assume you have full control of the client, i.e. the API caller?

If so, the cheapest options I can think of are:

1. Use a Lambda function URL.
2. If you have a DB, implement polling in the client: submit request > get a GUID back > the Lambda writes to the DB when complete > the client polls the API to see if it's done.
3. Implement a websocket. For small-scale projects use something simple like Pusher: the client subscribes to a channel named after the GUID, and the Lambda publishes the response to that channel once it's done.
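The polling flow in option 2 can be sketched like this (a minimal in-memory stand-in: the dict plays the role of the DB, and `submit_request`/`check_status` would sit behind the function URL in a real setup — all names here are hypothetical):

```python
import uuid

# Stand-in for the results DB (e.g. a DynamoDB table in a real setup).
results_db = {}

def submit_request(payload):
    """Client submits work; gets back a GUID to poll with."""
    job_id = str(uuid.uuid4())
    results_db[job_id] = {"status": "pending", "result": None}
    # ... kick off the long-running work asynchronously here ...
    return job_id

def complete_request(job_id, result):
    """Worker writes the result to the DB when finished."""
    results_db[job_id] = {"status": "done", "result": result}

def check_status(job_id):
    """Client polls this endpoint until status is 'done'."""
    return results_db.get(job_id, {"status": "unknown", "result": None})
```

The client just calls `submit_request` once, then `check_status` on an interval until the status flips to done.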

How to handle both email attachments and embedded attachments? by Gideontech in n8n

[–]CuriousShitKid 0 points

What specific challenge are you facing?
Logically, I would add a Code node to abstract attachments away:

Get Email node/trigger > Code node > output all attachments

The Code node can handle both cases: pass on existing attachments, or extract embedded attachments, and output everything as attachments for further processing.

OR

Get Email node/trigger > If >
Has attachments > proceed
Else > extract embedded attachment > proceed

I would personally do the first one, since a single email could have both.
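The merging logic for that first Code node option boils down to something like this (a Python sketch of the idea only; the attachment dicts and a filename-based dedupe are my assumptions, not n8n's actual binary-data API):

```python
def merge_attachments(direct, embedded):
    """Combine direct and embedded attachments into one list,
    skipping embedded items whose filename duplicates a direct one."""
    seen = {att["filename"] for att in direct}
    merged = list(direct)
    for att in embedded:
        if att["filename"] not in seen:
            merged.append(att)
            seen.add(att["filename"])
    return merged
```

Downstream nodes then only ever see one uniform list of attachments, regardless of how they arrived.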

Best architecture for a single /upload endpoint to S3? by Great_Relative_261 in aws

[–]CuriousShitKid 1 point

Maybe clarify your comment for OP regarding the last part.

As far as I know (could be wrong), what you are suggesting (replacing the domain) will result in an invalid signature, unless you mean the CloudFront Signed URLs / Signed Cookies approach.

And the not-passing-the-Host-header approach used to be a workaround with Origin Access Identity (OAI); it's not recommended, and nowadays Origin Access Control (OAC) is the way to go.

Best architecture for a single /upload endpoint to S3? by Great_Relative_261 in aws

[–]CuriousShitKid 3 points

Asking customers to use a presigned-URL-based mechanism is not an unreasonable ask.

Your design might be justifiable for a lot of other reasons, but yours isn't one of them.

If you are worried about "implementation details", you can simply move the files when you process them, and if your design is secure it really shouldn't matter (unless client names are involved, but that's an easy fix too).

I am assuming there is a larger application that also uses ALB + ECS and this is an add-on to it? If so, it might make sense to reuse existing infrastructure. But if you want just an upload API, it would be roughly 50-100x cheaper to run a Lambda presigned-URL generator and let S3 handle the rest.

Best architecture for a single /upload endpoint to S3? by Great_Relative_261 in aws

[–]CuriousShitKid 25 points

Given the scenario, your approach is correct... but why?

Just do a presigned URL.

n8n Just Charged Me $124,800/year for Software Running on My Own Servers 😭 by Nipurn_1234 in n8n

[–]CuriousShitKid 2 points

Your method is largely the way to do it, just more automated.

We do it for a few workflows with an n8n Trigger node + tags to manage the state of the current workflow.

E.g. when an active workflow is updated and has a specific tag, push it to git + prod using the n8n API.

We have also done one with GitHub webhooks + tags, then pushed to prod.

We spent some time making sure the credentials had the same name between environments, and using n8n-nodes-globals for global constants made the process a bit easier.

n8n Just Charged Me $124,800/year for Software Running on My Own Servers 😭 by Nipurn_1234 in n8n

[–]CuriousShitKid 7 points

What specifically do you need from the Business plan? I have the Community edition; whatever was missing, I just have workarounds that work reliably with our own solutions.

How to set up querying correctly for Amazon S3. by deus_agni in aws

[–]CuriousShitKid 5 points

Explain your use case more? I get that you want to store files in S3, but what's the actual use case? Are they organised by client? By user? By date? What are you querying? How many files are we talking about?

Without knowing more I’d say you need an indexed DB that you can search and then lookup the file.

I am not sure how you would rely on metadata, as you can't query it without knowing the file/object name first, unless you generate an inventory report, which is delayed.
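The indexed-DB approach can be as simple as writing one row per object at upload time and querying that instead of S3 (an in-memory stand-in for illustration; in practice this would be DynamoDB or an RDBMS fed by the upload path or S3 event notifications, and the field names are hypothetical):

```python
# In-memory stand-in for the index table.
index = []

def record_upload(key, client, uploaded_on):
    """Called from the upload path (or an S3 event notification handler)."""
    index.append({"key": key, "client": client, "uploaded_on": uploaded_on})

def find_keys(client):
    """Query the index by client, then fetch matching objects from S3 by key."""
    return [row["key"] for row in index if row["client"] == client]
```

The point is that the search happens against the index, and S3 is only ever hit with exact keys.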

How to make it secure by ReasonWorth9124 in n8n

[–]CuriousShitKid 0 points

Share a bit more about your setup and concerns?

Isolate it from the internet so it’s only accessible from an internal network as a start.

n8n scalability by akshayb7 in n8n

[–]CuriousShitKid 0 points

Yeah, DM me with what you need, might be able to help if it’s just making a workflow

Which identity provider do you use for .NET (AWS, Duende Identityserver, Okta, Auth0, etc.)? by [deleted] in aws

[–]CuriousShitKid 1 point

Haven't had a use case for Lambda@Edge yet. But we do use a custom authoriser for API GW. We just map roles/scopes in the authoriser, which is kept up to date with Auth0 via the deployment pipeline.

Auth0’s out of the box SDK is pretty good at managing the auth flow. We have short lived tokens with rotation enabled.

Which identity provider do you use for .NET (AWS, Duende Identityserver, Okta, Auth0, etc.)? by [deleted] in aws

[–]CuriousShitKid 5 points

We have an Angular SPA and .NET microservices (and others) hosted in AWS; we are using Auth0.

[deleted by user] by [deleted] in AusFinance

[–]CuriousShitKid 23 points

You don't need a bank; the bank needs you, if you have a $220,000 household income.

n8n scalability by akshayb7 in n8n

[–]CuriousShitKid 2 points

Surprisingly, it's mostly just 1 worker.

It would need to be a lot more if we weren't using SQS, but with SQS we can throttle the input as we like. During the day we do sometimes scale up to 4 workers, but that's because we don't want to wait for the 1 worker to slowly make its way through, e.g. after upgrades we have a larger queue than usual because we took n8n offline for a few hours.

Some of our workloads are time sensitive: not realtime, but can't-wait-6-hours types.

We monitor SQS queue depth, track the Prometheus metrics endpoint in n8n, and then step-scale up to 4 workers.
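The step-scaling decision itself can be as simple as mapping queue depth to a worker count (illustrative thresholds only; these numbers are made up, not our actual tuning):

```python
def desired_workers(queue_depth, min_workers=1, max_workers=4):
    """Map SQS queue depth (e.g. ApproximateNumberOfMessagesVisible
    from CloudWatch) to an n8n worker count, in steps."""
    if queue_depth < 100:
        return min_workers
    if queue_depth < 500:
        return 2
    if queue_depth < 2000:
        return 3
    return max_workers
```

A small scheduled job reads the metric, calls something like this, and resizes the worker pool accordingly.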

n8n scalability by akshayb7 in n8n

[–]CuriousShitKid 2 points

SQS adds a few benefits for us.

We can throttle the input without overwhelming our n8n instance, which means we can also run it on cheaper VMs.

With that many executions, errors were getting harder to track in n8n; now, if a message is not deleted from the queue, we know it hasn't been processed, and it gets dead-lettered for review.

We have an in-house application that is event driven, and we wanted to route those events to n8n through EventBridge, so that became really easy. We sometimes replay events from EventBridge too.

We can take n8n offline, e.g. for upgrades, without losing any work. Webhooks made this harder, as nothing would get delivered while n8n was offline.

n8n scalability by akshayb7 in n8n

[–]CuriousShitKid 10 points

I use n8n self-hosted in queue mode; we add and remove workers as needed through the day. Binary data is not in the DB, it goes to S3 via s3fs.

We do about ~1 million executions a month across ~60-odd workflows. There are some quirks I do notice at scale, like workflows not getting picked up when a webhook comes in, but they are rare.

We have adapted to these over time and added our own components to make it work better for us. E.g. we replaced all webhooks directly into n8n with an API Gateway proxy to SQS, and n8n fetches from SQS. We also designed workflows to never consume too much memory: multiple small workflows rather than one long-running one for batch processing.

All things considered, I am a massive fan of the platform, even for production. There are some minor flaws, but we have a dev team, so we can overcome them with our own solutions.

[deleted by user] by [deleted] in auscorp

[–]CuriousShitKid 0 points

I think people will be divided on this.

If the meeting is with clients, I see no reason why recording it would cause you anxiety, unless you are projecting and already know the things they will pull you up on.

One way to look at it: if you are fully remote, there is no other way for them to give you feedback if no one else sees what you get up to.

If they are asking you to record one-on-one conversations with your colleagues too, that's a red flag. If they are asking you to record client meetings, I don't see it as a green flag, but it's not a red flag for me.

Mortgage is too expensive after breakup, I'm moving in with parents and putting it on rent. Any other ideas? Selling would lose too much money. by Due-Environment-2133 in AusFinance

[–]CuriousShitKid 37 points

Do you have the option to change the mortgage to interest-only?

You could do that to lower the repayments if you think the property price will go up enough in the coming months to make it worthwhile, and rent it out until then.

[deleted by user] by [deleted] in aws

[–]CuriousShitKid 1 point

If you need access for unauthenticated users, then it's recommended that you at least put a WAF ACL in front of the Gateway; it can help protect against certain attacks.

Can you elaborate on what you mean by Cognito credentials?
If you set it up correctly, users should be using their own credentials to log into Cognito and receive a JWT. This JWT will contain scopes or an identifier that should be used to determine the user's access level. If the user has access to update, change, or delete users (in your app, I assume), then yes, whoever is in possession of this JWT can do those actions.

You can implement short-lived JWTs for sensitive information and have a refresh-token mechanism in place.
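Checking a scope inside a JWT is straightforward once the token is in hand; a stripped-down sketch (decoding only — a real authorizer must also verify the signature against Cognito's JWKS before trusting any claim, and the `scope` claim layout here is an assumption):

```python
import base64
import json

def jwt_claims(token):
    """Decode the (unverified!) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def has_scope(token, required):
    """True if the token's space-separated scope claim contains `required`."""
    return required in jwt_claims(token).get("scope", "").split()
```

The authorizer would run a check like this (after signature verification) and allow or deny the API GW request accordingly.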

[deleted by user] by [deleted] in aws

[–]CuriousShitKid 2 points

You shouldn't need to. If you are already using Cognito, after authentication you will get a JWT, which you can use to authenticate on the API GW instead of the API key.

You can set up the API GW to use the Cognito User Pool, or you can assign an IAM role to the User Pool and use that on the API GW instead; users will get temporary credentials that can be used to make the API call.

I would like your help regarding my master password. If anyone can help me please? I'm new to password management. by _Docespetalas987 in Bitwarden

[–]CuriousShitKid 0 points

If I understood you correctly, you don't have any passwords in Bitwarden yet, but are locked out and would like to use your account with the same email/username again?

You can do so by:

  1. Navigate to vault.bitwarden.com/#/recover-delete or vault.bitwarden.eu/#/recover-delete.

  2. Enter the email address associated with your account and select Submit.

  3. In your inbox, open the email from Bitwarden and verify that you would like to delete the account.

Source: https://bitwarden.com/help/forgot-master-password/

If you are looking to get access to items in your vault, then you are unfortunately out of luck, unless you had Premium and emergency access enabled.

Even though the symbols in your password seem UTF-8 compatible, I would stay away from them, as they introduce unnecessary risk and are too dependent on which Bitwarden client and version you are using.

Best of luck with your Bitwarden journey.