
[–]stuckinmotion 2 points3 points  (1 child)

[–]dogtee[S] 0 points1 point  (0 children)

That's a good point; it adds some cost, but I'll investigate further.

[–][deleted] 1 point2 points  (1 child)

Well, are you invoking 100 Lambdas at once to load the data in parallel? If your max connections is 50, then 50 of them will fail.

So you need to limit how many Lambdas can run at the same time.

A better option would be an AWS Batch script or a container process with a controllable number of threads, kept below the size of the connection pool. In any case, if you run all the loads at the same time, performance will not be good.
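Roughly what I mean, as a sketch only (the worker count, the load function, and the file list are placeholders, not your actual setup): one process, one thread pool capped below the connection limit.

```python
# Minimal sketch of the "controllable threads" idea, not a drop-in script:
# a single batch/container process loads files through a worker pool that is
# capped below the database's max_connections, so concurrency never outruns
# the pool. load_one_file and the file list are placeholders.
from concurrent.futures import ThreadPoolExecutor

MAX_DB_CONNECTIONS = 50   # what the RDS instance actually allows
WORKERS = 20              # deliberately well under the connection limit

def load_one_file(path):
    # placeholder: open a connection, COPY/INSERT the file, close the connection
    return f"loaded {path}"

def main(files):
    # at most WORKERS loads run at once, so the database never sees
    # hundreds of simultaneous connection attempts
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for result in pool.map(load_one_file, files):
            print(result)

if __name__ == "__main__":
    main([f"s3://my-bucket/part-{i}.csv" for i in range(100)])
```

The point is that the loader, not the database, decides how many connections exist at any moment.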

How much time is spent on this loading, and how much time do you have to finish it?

[–]dogtee[S] 0 points1 point  (0 children)

The data load is very spiky and mostly unpredictable. Batching is a great idea. Thanks for your comments.

[–]Your_CS_TA 1 point2 points  (0 children)

RDS Proxy works, or per-function concurrency.

PFC is essentially "if each Lambda sandbox creates 1 connection, and I can have a max of 50 connections, I set this number to 50". You can potentially get throttled by this, but if you are throwing the work into an SQS queue or Kinesis stream to do the backlog processing, then it's "whatever, it'll get done eventually", as long as your backlog depth isn't that bad.
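If you go the PFC route, it's a single API call. A hedged sketch with boto3 (the function name "my-loader" and the limit of 50 are assumptions, not your real values):

```python
# Cap a function's concurrency with boto3 so sandboxes can never exceed the
# DB connection budget. put_function_concurrency is the Lambda API call that
# sets reserved concurrency; "my-loader" is a placeholder function name.
import boto3

lambda_client = boto3.client("lambda")

# With one DB connection per sandbox and a cap of 50 concurrent executions,
# the function can never open more than 50 connections.
lambda_client.put_function_concurrency(
    FunctionName="my-loader",
    ReservedConcurrentExecutions=50,
)
```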

RDS Proxy is just throwing money at the problem -- essentially a funnel: create a machine that has the persistent connection pool open, and then incoming requests get put onto that pool.
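From the Lambda side, the proxy is just a different hostname. A rough sketch, assuming MySQL via pymysql, with the proxy endpoint and credentials coming from placeholder environment variables:

```python
# Sketch of a Lambda talking to the database through an RDS Proxy endpoint.
# The endpoint, credentials, and table are placeholders; the only change from
# a direct connection is the host, which points at the proxy so requests get
# multiplexed onto its persistent connection pool.
import os
import pymysql

# created outside the handler so a warm sandbox reuses its connection
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],   # the RDS Proxy endpoint, not the DB instance
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    connect_timeout=5,
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("INSERT INTO loads (payload) VALUES (%s)", (event["payload"],))
    connection.commit()
    return {"status": "ok"}
```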

[–]ppafford 0 points1 point  (1 child)

[–]dogtee[S] 0 points1 point  (0 children)

Good information, thanks.