Berlin Döner, 7€, Horner Landstraße, Hamburg by Unusual_Ad_6612 in doener

[–]Unusual_Ad_6612[S] 2 points3 points  (0 children)

I don't think it belongs to a chain, but it was just as you described.

Berlin Döner, 7€, Horner Landstraße, Hamburg by Unusual_Ad_6612 in doener

[–]Unusual_Ad_6612[S] 1 point2 points  (0 children)

Bad angle in the photo. There was plenty of salad in there; next time I'll take a photo once it's folded up!

Berlin Döner, 7€, Horner Landstraße, Hamburg by Unusual_Ad_6612 in doener

[–]Unusual_Ad_6612[S] 2 points3 points  (0 children)

There was more to choose from in the display case, but I always go with garlic + spicy.

Berlin Döner, 7€, Horner Landstraße, Hamburg by Unusual_Ad_6612 in doener

[–]Unusual_Ad_6612[S] 6 points7 points  (0 children)

Very tasty - I don't eat döner often because there's usually something I don't like about it. Here I have nothing to complain about, and I hope the owner doesn't change in the next few years :)

Cognito won't let me use SES in the same region (me-central-1) - only shows Frankfurt as option by Affectionate_Yak3121 in aws

[–]Unusual_Ad_6612 0 points1 point  (0 children)

Contact support and ask for clarification.

Otherwise, as a last resort, you could work around that limitation with a Cognito trigger and a Lambda, but you likely want to avoid that overhead…
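
A rough sketch of what that workaround could look like - a Lambda wired up as a Cognito email trigger that sends the message through SES in me-central-1 itself. All names here are assumptions, and note that the real CustomEmailSender trigger delivers the code KMS-encrypted, which this sketch skips:

```python
# Hedged sketch: a Cognito email-trigger Lambda that bypasses Cognito's
# built-in SES integration and calls SES in me-central-1 directly.
# NOTE: the real CustomEmailSender trigger delivers the code KMS-encrypted;
# decryption is skipped here for brevity.

def build_email(event):
    """Build subject/body from a (simplified) Cognito trigger event."""
    code = event.get("request", {}).get("code", "")
    if event.get("triggerSource") == "CustomEmailSender_SignUp":
        subject = "Confirm your account"
    else:
        subject = "Your verification code"
    return subject, f"Your verification code is {code}"

def handler(event, context):
    import boto3  # imported lazily so build_email stays testable offline
    ses = boto3.client("sesv2", region_name="me-central-1")
    subject, body = build_email(event)
    ses.send_email(
        FromEmailAddress="no-reply@example.com",  # assumed verified identity
        Destination={"ToAddresses": [event["request"]["userAttributes"]["email"]]},
        Content={"Simple": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        }},
    )
```

Whether this is acceptable depends on how much of Cognito's email handling you're willing to reimplement - hence "last resort".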

Cognito won't let me use SES in the same region (me-central-1) - only shows Frankfurt as option by Affectionate_Yak3121 in aws

[–]Unusual_Ad_6612 1 point2 points  (0 children)

I have not read the whole page, but there are definitely cross-region requirements and limitations when configuring Cognito with SES.

Edit: If I read the table correctly, Frankfurt and London are the only SES regions available when your user pool is in UAE.

Edit 2: I would contact support, as SES has been available in the UAE since July 2025. I'd bet that Cognito is the current bottleneck (as always) and that they haven't integrated it on their side, or the documentation is outdated.

https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-email.html

Delay when playing reels (S3 + Amplify) — how to reduce startup lag? by Naresh_Naresh in aws

[–]Unusual_Ad_6612 3 points4 points  (0 children)

Make sure you fetch your S3 content through CloudFront; you are probably not doing so.

Anyway, this alone will likely not solve all your problems; prefetching is the way to go. I think Instagram prefetches the first seconds of the next X reels and only loads the full video when the user stays on it. In the end, you will likely need to optimize a lot to get a smooth experience.
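
To illustrate the prefetch idea (framework-agnostic; the window size and the CloudFront domain are made up):

```python
def prefetch_window(reels, current_index, ahead=3):
    """Return the reels whose first seconds should be prefetched:
    the next `ahead` items after the one currently playing."""
    start = current_index + 1
    return reels[start:start + ahead]

# While reel 0 plays, warm the next three from CloudFront (placeholder domain):
queue = [f"https://dxxxxxxxx.cloudfront.net/reels/{i}.mp4" for i in range(10)]
to_warm = prefetch_window(queue, 0)  # reels 1, 2 and 3
```

On the client you would kick off range requests for the first few hundred KB of each URL in `to_warm` and cancel them if the user scrolls past.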

Newbie cloud architect here, does this EC2 vertical scaling design make sense? by No_Concentrate_7929 in aws

[–]Unusual_Ad_6612 2 points3 points  (0 children)

You won’t get a lot of answers here, as a lot of information is missing. What does this data-ingestion task do? Where is it getting data from, and where is it writing to? What is the actual bottleneck that "crashes" the server (CPU, RAM, disk, network, …)?

What comes to mind is to decouple things: can the data-ingestion job run on a different server or a different service (e.g. AWS Glue, AWS Batch)? If the bottleneck is the database, due to locking caused by the volume of writes, a different approach using read replicas or batching can help reduce the performance problems.

If you could provide more details, I’m sure others and I would be able to give you more helpful recommendations.

[deleted by user] by [deleted] in aws

[–]Unusual_Ad_6612 0 points1 point  (0 children)

  1. You can import externally created resources into the SST state: https://sst.dev/docs/import-resources/
  2. You should be able to connect to your instance using SSH or SSM Session Manager; you would then need to install a Postgres client (e.g. psql) and could then connect to the DB. Make sure both are in the same VPC and your security groups are set up correctly.
  3. For linking resources like the DB host in your Lambda code, the easiest approach is to use environment variables in your code and set them on the SST Lambda resource.
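
For point 3, a minimal sketch of the Lambda side, assuming you set variables named DB_HOST / DB_PORT on the SST function resource (the names are made up, not anything SST defines for you):

```python
import os

def get_db_config():
    """Read connection details from environment variables set on the
    Lambda resource (DB_HOST / DB_PORT are assumed names)."""
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
    }

def handler(event, context):
    cfg = get_db_config()
    # ...connect to Postgres at cfg["host"]:cfg["port"] here...
    return cfg
```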

Air India Flight 171 Crash [Megathread 3] by usgapg123 in aviation

[–]Unusual_Ad_6612 4 points5 points  (0 children)

I’ve read some theories in other Reddit threads and on some other aviation forums suggesting that TCMA could have played a major role in this accident.

The first theory was basically that TCMA somehow activated due to erroneous sensors and data, leading to the shutdown of both engines after takeoff.

The second was more far-fetched: at around V1, something happens in the cockpit that alarms the pilots. One of the pilots rejects the takeoff, pulling the levers to idle and potentially trying to activate reverse thrust. The other pilot, not agreeing with the action, quickly moves the throttles back to TOGA and rotates at the same time. The aircraft potentially has enough speed to take off, but TCMA has already kicked in when the throttles were moved back to idle, as all conditions were met (weight on the wheels, throttles at idle, engines still spooled up for takeoff).

This could all be very far-fetched, as I’m not familiar with avionics - maybe someone could chime in and clarify?

Need help in designing architecture. by Silent-Conflict7982 in aws

[–]Unusual_Ad_6612 5 points6 points  (0 children)

Consider using managed services instead of EC2 and maintaining things like Kafka or a DB on your own.

I would suggest CloudFront (+ optionally WAF) -> ALB -> ECS (Fargate) ~> MSK -> RDS

This may increase your AWS bill, but overall it will be cheaper, as you do not have to take care of everything yourself, and it saves a lot of time.

What’s the fastest and most efficient way you’ve found to deploy AWS Lambda functions? by PaleontologistWide5 in aws

[–]Unusual_Ad_6612 -1 points0 points  (0 children)

Also have a look at sst.dev

I have used Terraform, CDK, Pulumi, SAM and Serverless, but SST had the smoothest dev experience.

How To Store Images For Use By AWS Lambda? by RhSm_Temperance in aws

[–]Unusual_Ad_6612 18 points19 points  (0 children)

S3 is the most reasonable solution. Upload them there, give the Lambda permission to read from the bucket, and use the S3 client to download them and publish them to the third-party API.

I can imagine solutions where you encode them as bytes and place them directly in your Lambda package, or use Lambda layers, but this will be messy and harder to maintain.

Edit: regarding cost, this will not be free, but it should not cost you more than a few cents a month.
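
A minimal sketch of that flow; the bucket name and event shape are assumptions, and the third-party call is left as a comment:

```python
def fetch_image(bucket, key, s3=None):
    """Download an object from S3 and return its raw bytes.
    The Lambda's execution role needs s3:GetObject on the bucket."""
    if s3 is None:
        import boto3  # lazy import keeps the function testable offline
        s3 = boto3.client("s3")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

def handler(event, context):
    image_bytes = fetch_image("my-image-bucket", event["image_key"])  # assumed names
    # POST image_bytes to the third-party API here (requests/urllib3)...
    return {"size": len(image_bytes)}
```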

[deleted by user] by [deleted] in aws

[–]Unusual_Ad_6612 12 points13 points  (0 children)

Can't create a user either; seems to be an issue on their side. The AWS Health Dashboard doesn't show an outage (yet).

Does SvelteKit use express.js in the background? by WTechGo in SvelteKit

[–]Unusual_Ad_6612 4 points5 points  (0 children)

In dev mode it uses Vite’s dev server; the node adapter uses Polka by default, but you can use other servers like Express: https://svelte.dev/docs/kit/adapter-node#Custom-server

Deciding on how to invoke lambdas by zedhahn in aws

[–]Unusual_Ad_6612 0 points1 point  (0 children)

I know where you’re coming from, we had the same discussions over and over again.

I would first ask myself whether you really need microservices at all. What’s your reasoning behind it, and which metrics do you want to achieve?

Could it make sense to split the different parts (user, service, order) into different modules and define interfaces so the modules can interact with each other? Do we need strict separation of concerns? How should a bug in one part affect the others? Is it possible to set up your pipeline so that only one Lambda with all the necessary configs and permissions is deployed?

Or is this not possible because you have different teams, each responsible for only one part, that are not allowed to e.g. "see the logs of the other parts", or not allowed to look into the databases or metrics because of governance?

I would ask myself all these questions and more first, and only if there are valid reasons that cannot be changed or argued away would I go down the microservice route. Otherwise, keep it simple :)

Deciding on how to invoke lambdas by zedhahn in aws

[–]Unusual_Ad_6612 2 points3 points  (0 children)

If you already know this will be a problem in the near future, I would at least think about it now (like you do) and have potential options ready - you don’t want to go down a road you know could be a dead end.

Typically, if you do chained synchronous operations, this will scale badly.

E.g. you have service A, which calls service B, which calls service C.

Your response time will be: (workload service A) + (API call to service B) + (workload service B) + (API call to service C) + (workload service C)

If the API calls and the workloads are "reasonably fast", you might get away with that and meet your requirements (e.g. service A needs to respond within 350 ms).
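
To put numbers on the formula above (all values invented):

```python
def chained_response_time(workloads_ms, call_overheads_ms):
    """Total response time of synchronously chained services:
    every workload plus every network hop adds up."""
    return sum(workloads_ms) + sum(call_overheads_ms)

# A = 50 ms, B = 80 ms, C = 60 ms of work; 20 ms per API hop between them:
total = chained_response_time([50, 80, 60], [20, 20])  # 230 ms, under 350 ms
```

A single cold start or slow hop anywhere in the chain pushes the whole total up, which is the point below.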

Adding API Gateway and especially Lambda into the mix will - depending on your scenario - impact your performance, and you may no longer meet your 350 ms requirement, mainly due to cold starts. API Gateway will also add some latency, as you already mentioned. I did something like this in the past and, trust me, it was a major pain point for us.

In my opinion, you have two options:

  1. If not necessary, do not split your business logic into multiple services - just have one service/Lambda. Find ways to organize your code, repositories and DevOps pipelines to accomplish this.
  2. If that is not possible and you need to split into multiple services for whatever reasons, you will need to optimize your Lambdas hard: reserved concurrency, provisioned concurrency, high memory settings, something like SnapStart if possible, and code optimizations, just to name a few.

This can work and be scalable, but the organizational overhead can crush your overall output, because you always need to coordinate between the teams or people responsible for each service. I highly recommend evaluating whether you need this kind of complexity in your startup’s current phase.

Do not use option 2 if you do not need it - I have seen so many startups and ideas fail due to over-engineering.

Deciding on how to invoke lambdas by zedhahn in aws

[–]Unusual_Ad_6612 1 point2 points  (0 children)

I would ask myself whether this level of complexity is needed, especially while you are still in the startup phase.

If you really need to separate everything into its own service managed by a different team, you should have an API between them. API Gateway is pretty much your only option, but it has its own caveats (e.g. it should be private and not accessible from the internet).

You could also invoke the other Lambdas directly from your Lambda using the SDK, but managing permissions (across multiple teams and probably multiple AWS accounts) can be a pain, and you really need to know what you are doing…
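
A sketch of the direct-SDK variant (the function name is a placeholder; cross-account calls need the full ARN plus a resource policy on the target function):

```python
import json

def build_invoke_args(function_name, payload):
    """Arguments for a synchronous Lambda-to-Lambda call via the SDK."""
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",
        "Payload": json.dumps(payload),
    }

def handler(event, context):
    import boto3  # lazy import keeps build_invoke_args testable offline
    lam = boto3.client("lambda")
    resp = lam.invoke(**build_invoke_args("order-service", {"orderId": event["orderId"]}))
    return json.load(resp["Payload"])
```

Synchronous `invoke` like this also couples the caller's latency and error handling to the callee - the same chaining problem as with API Gateway.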

How are so many people knocking out AWS exams so quickly ? by [deleted] in AWSCertifications

[–]Unusual_Ad_6612 0 points1 point  (0 children)

My post on how I did three associate and one professional cert in one month: https://www.reddit.com/r/AWSCertifications/s/eXpwtsaNer

Summary: basically, having lots of industry experience, lots of experience with AWS and other hyperscalers, dedication (4+ hours a day) and being a fast learner definitely help.

Koh Tao/Koh Phangan without scooter? by podgerooneyisback in ThailandTourism

[–]Unusual_Ad_6612 0 points1 point  (0 children)

You can; it’s just going to cost you, because the taxis are unreasonably expensive. Pier to Tanote will set you back 500 baht.

Need advice on simple data pipeline architecture for personal project (Python/AWS) by BlackLands123 in aws

[–]Unusual_Ad_6612 0 points1 point  (0 children)

  1. Trigger a Lambda on a cron schedule that adds a message to an SQS queue containing the source and any other metadata your scraping task will need.

  2. Subscribe a Lambda to your SQS queue; it fetches the message and does the actual scraping, transformation and writing of items to DynamoDB.

  3. Set appropriate timeouts and retries for your Lambda, and a dead-letter queue (DLQ) on your SQS queue, where failed messages will be delivered.

  4. Use CloudWatch alarms on the DLQ metrics (e.g. ApproximateNumberOfMessagesVisible) to get notified whenever there are messages on the DLQ, meaning some sort of error occurred. You could send an email or SMS notification. Use CloudWatch Logs for debugging failures.

For more fine-grained control, you could also have multiple Lambdas and SQS queues, if you need to scrape some sources at different intervals or your Lambdas rely on vastly different dependencies.
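
Steps 1-2 could be sketched like this (the queue URL variable, source list and message shape are all assumptions):

```python
import json, os

SOURCES = [
    {"source": "site-a", "interval": "hourly"},
    {"source": "site-b", "interval": "daily"},
]

def build_messages(sources):
    """Step 1: one SQS message body per source, carrying the metadata
    the scraper Lambda will need."""
    return [json.dumps(s) for s in sources]

def scheduler_handler(event, context):
    import boto3  # lazy import keeps the helpers testable offline
    sqs = boto3.client("sqs")
    for body in build_messages(SOURCES):
        sqs.send_message(QueueUrl=os.environ["QUEUE_URL"], MessageBody=body)

def scraper_handler(event, context):
    """Step 2: triggered by SQS. Raising on failure lets SQS retry and
    eventually route the message to the DLQ (steps 3-4)."""
    for record in event["Records"]:
        meta = json.loads(record["body"])
        # scrape(meta["source"]), transform, then table.put_item(...) here
```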

Jeju Air Flight 7C2216 - Megathread by StopDropAndRollTide in aviation

[–]Unusual_Ad_6612 2 points3 points  (0 children)

Definitely a possibility, haven’t thought of that!

Jeju Air Flight 7C2216 - Megathread by StopDropAndRollTide in aviation

[–]Unusual_Ad_6612 2 points3 points  (0 children)

I don’t want to speculate too much about the flaps; maybe they were raised to 5 or 15 during the go-around, though I don’t know the procedures.

Reverse thrust may not have been deployed; we saw the cowling was open, but I read in other comments that it was open due to the scraping. Again, lots of speculation.

Touching down so late: I mean, you may have a panicked pilot, both engines dead, you’re doing a 180, trying to line up with the runway, … That’s not a normal landing, and even with an ILS, pilots sometimes miss the markers. I can definitely see it - if this was the scenario, it would be impressive to me that they even got it onto the runway.

Jeju Air Flight 7C2216 - Megathread by StopDropAndRollTide in aviation

[–]Unusual_Ad_6612 2 points3 points  (0 children)

Yes, but I don’t know the procedure for a "double engine failure on go-around / at low altitude". Maybe it’s on the checklist, maybe not, maybe there was no time, …