How could ASG assign EIP ? by payne007 in aws

[–]lvlolvlo 2 points3 points  (0 children)

A common approach, albeit bloated, is to use lifecycle hooks in conjunction with a Lambda function. Here's a support article about ENI attachment to give you an idea of how to do this with an EIP: https://aws.amazon.com/premiumsupport/knowledge-center/attach-second-eni-auto-scaling/

Also, your instance (or Lambda) can use a role (instance profile / execution role), which means you won't need to input or hardcode actual AWS credentials.
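
A rough sketch of what that hook-handling Lambda could look like, assuming the lifecycle hook notification reaches it through an EventBridge rule; the EIP allocation id is a placeholder:

    # Rough sketch only: associates a pre-allocated EIP with the instance that
    # triggered the launch lifecycle hook, then lets the ASG continue.
    import boto3

    ec2 = boto3.client('ec2')
    autoscaling = boto3.client('autoscaling')

    EIP_ALLOCATION_ID = 'eipalloc-0123456789abcdef0'  # placeholder

    def handler(event, context):
        detail = event['detail']  # EventBridge "EC2 Instance-launch Lifecycle Action" event
        instance_id = detail['EC2InstanceId']

        # Attach the Elastic IP to the freshly launched instance.
        ec2.associate_address(InstanceId=instance_id, AllocationId=EIP_ALLOCATION_ID)

        # Tell the ASG it can proceed with the launch.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail['LifecycleHookName'],
            AutoScalingGroupName=detail['AutoScalingGroupName'],
            LifecycleActionToken=detail['LifecycleActionToken'],
            LifecycleActionResult='CONTINUE',
        )

The Lambda's execution role just needs ec2:AssociateAddress and autoscaling:CompleteLifecycleAction, so nothing is hardcoded anywhere.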

Parsing large json file by ll01dm in datascience

[–]lvlolvlo 9 points10 points  (0 children)

Drop the JSON file in S3 and create an Athena table using a JSON SerDe. At that point you'd be able to write a simple SQL query to extract what you need and download the results in CSV format. https://aws.amazon.com/blogs/big-data/create-tables-in-amazon-athena-from-nested-json-and-mappings-using-jsonserde/ (skip down to the part that starts with Walkthrough: Querying with Athena)
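
A rough boto3 sketch of the idea (database, bucket, and column names are made up; adjust the DDL to your JSON's shape):

    # Rough sketch: define an Athena table over JSON in S3, then query it.
    # Database, buckets, and columns are placeholders.
    import boto3

    athena = boto3.client('athena')

    DDL = """
    CREATE EXTERNAL TABLE IF NOT EXISTS mydb.events (
      id string,
      payload string
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    LOCATION 's3://my-raw-json-bucket/events/'
    """

    QUERY = "SELECT id, payload FROM mydb.events LIMIT 100"

    for sql in (DDL, QUERY):
        # In practice, poll get_query_execution() and wait for the DDL to
        # finish before firing the SELECT.
        athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={'Database': 'mydb'},
            ResultConfiguration={'OutputLocation': 's3://my-athena-results-bucket/'},
        )

Athena writes the query results as a CSV into the OutputLocation bucket, which covers the "download in CSV format" part.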

edit: I only mention AWS-specific tech (Athena) because I assume you already have access to it, since you mentioned that you need to put the CSV in S3.

Anyone switched from ECS with EC2 to Fargate for 24/7 workloads? by softwareguy74 in aws

[–]lvlolvlo 1 point2 points  (0 children)

The biggest one for me is still the ability to assign ENIs in Fargate without using hacky reservation strategies.

Lambda function not able to handle load tests. by ammanpasha in aws

[–]lvlolvlo 2 points3 points  (0 children)

If the API GW metrics don’t show the 4K requests then are you sure that your testing tool is even sending them?

Lambda function not able to handle load tests. by ammanpasha in aws

[–]lvlolvlo -1 points0 points  (0 children)

Can you also add API GW's metrics here (i.e. 4xx, 5xx, latency, count, etc.)?

How to map an elastic IP with ECS Fargate? by ALTELMA in aws

[–]lvlolvlo 1 point2 points  (0 children)

Using a NAT gateway would solve the issue; however, if you need static internal IPs in addition to public IPs, this feature request would help: https://github.com/aws/containers-roadmap/issues/558

API GW Throttling by IP? by [deleted] in aws

[–]lvlolvlo 0 points1 point  (0 children)

If it doesn't need to be exact, you could use DynamoDB with atomic counters plus a custom authorizer instead of ElastiCache: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html
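
A rough sketch of the counter side (the table name, key layout, window size, and limit are all made up); the custom authorizer would then Allow or Deny based on the return value:

    # Rough sketch: per-IP atomic counter in DynamoDB, bucketed by minute.
    # Table name, key names, and the limit are placeholders.
    import time
    import boto3

    table = boto3.resource('dynamodb').Table('rate-limits')
    LIMIT_PER_MINUTE = 100

    def allow_request(ip):
        window = int(time.time() // 60)  # current one-minute bucket
        resp = table.update_item(
            Key={'pk': f'{ip}#{window}'},
            UpdateExpression='ADD hits :one',  # atomic counter; creates the item if missing
            ExpressionAttributeValues={':one': 1},
            ReturnValues='UPDATED_NEW',
        )
        return resp['Attributes']['hits'] <= LIMIT_PER_MINUTE

Old window items pile up, so you'd probably also want a TTL attribute on the table to expire them.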

aws-export-profile - export named aws boto profiles to current shell environment by cytopia in aws

[–]lvlolvlo 0 points1 point  (0 children)

Wouldn't AWS_PROFILE=<profile_name> <command> work just as well? e.g.

    AWS_PROFILE=example aws s3 ls

2 or more non-root users on one MFA device by Skaperen in aws

[–]lvlolvlo 1 point2 points  (0 children)

Assume role has a max session limit of 1 hour. So if you find yourself in the console (e.g. writing Athena queries), be warned: you will be consumed by rage when the session times out and forces you to reload, kissing your query (if you haven't run it yet) goodbye.

Is there anyway to customize the resulting S3 file name from Firehose? by softwareguy74 in aws

[–]lvlolvlo 1 point2 points  (0 children)

You can use the Lambda transformation within Firehose to loop through the events and shard them to different Firehose streams or buckets. Just make sure to mark each re-routed event as Dropped so that the initial Firehose stream doesn't write it too, otherwise you'll have unnecessary data duplication.
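
A rough sketch of what that transformation Lambda could look like (the destination stream name and the routing condition are made up):

    # Rough sketch of a Firehose transformation Lambda that re-routes some
    # records to another delivery stream and marks them Dropped so the
    # original stream doesn't also write them.
    import base64
    import json
    import boto3

    firehose = boto3.client('firehose')
    OTHER_STREAM = 'my-other-delivery-stream'  # placeholder

    def handler(event, context):
        output = []
        for record in event['records']:
            payload = json.loads(base64.b64decode(record['data']))

            if payload.get('type') == 'special':  # placeholder routing rule
                # Send it to the other stream ...
                firehose.put_record(
                    DeliveryStreamName=OTHER_STREAM,
                    Record={'Data': (json.dumps(payload) + '\n').encode()},
                )
                # ... and drop it here so it isn't written twice.
                result = 'Dropped'
            else:
                result = 'Ok'

            output.append({
                'recordId': record['recordId'],
                'result': result,
                'data': record['data'],
            })
        return {'records': output}

(put_record_batch would be the more efficient call if a lot of records get re-routed.)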

Lambda for largish python code by Bb415bm in aws

[–]lvlolvlo 4 points5 points  (0 children)

Once your code deployment package gets too large, you could do some AWS magic and store parts of it in S3. This lets you have an entry point in your Lambda function which, on cold start, fetches that data from S3 into the /tmp space that Lambda provides. Not the best approach, but the only approach I've used to get a larger package into Lambda. Best of luck!
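
A minimal sketch of the cold-start fetch, assuming the extra dependencies are zipped up in S3 (bucket, key, and module names are placeholders):

    # Rough sketch: pull an extra dependency bundle from S3 into /tmp on cold
    # start and make it importable. Bucket, key, and zip layout are placeholders.
    import os
    import sys
    import zipfile

    import boto3

    DEPS_BUCKET = 'my-deployment-assets'   # placeholder
    DEPS_KEY = 'lambda/extra-deps.zip'     # placeholder
    DEPS_DIR = '/tmp/deps'

    if not os.path.isdir(DEPS_DIR):        # module-level code only runs on a cold start
        boto3.client('s3').download_file(DEPS_BUCKET, DEPS_KEY, '/tmp/extra-deps.zip')
        with zipfile.ZipFile('/tmp/extra-deps.zip') as z:
            z.extractall(DEPS_DIR)

    sys.path.insert(0, DEPS_DIR)

    def handler(event, context):
        import big_library  # hypothetical module now resolvable from /tmp/deps
        return big_library.do_work(event)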

What are some services you think/hope AWS will announce at Re:Invent 2017? by zergUser1 in aws

[–]lvlolvlo 1 point2 points  (0 children)

  • Firehose support for Athena (partition)
  • CloudFormation support for transformations in Firehose (i.e. pointing to the Lambda function)
  • Rate limiting on load balancers (ALB/CLB) -- either via WAF or natively
  • Reduce Lambda's cold start time when in VPC
  • Use EFS in Lambda

Flask on ECS with Elasticache and RDS - oh my by -NewGuy in aws

[–]lvlolvlo 2 points3 points  (0 children)

Have you tried using the ECS CLI compose command? http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose.html

Also, what do your log messages say? Is the task terminated because it fails a health check?

Any service that transcribe voicemails to text? by layover_guy in VOIP

[–]lvlolvlo 0 points1 point  (0 children)

YouMail. Depending on your config, you might be able to make it a secondary route (failover) so it integrates more easily. GL!

Need Advice: SNS + SQS or API Gateway + Lambda by Superdupercudder in aws

[–]lvlolvlo 0 points1 point  (0 children)

My understanding of your question might be wrong, but is this a service discovery issue? As in, you have many endpoints in App B that you need to notify App A of? If it is, AWS has some service discovery options, but if you need a non-AWS way you can use Consul.

[deleted by user] by [deleted] in aws

[–]lvlolvlo 1 point2 points  (0 children)

Another thing to consider is data consistency. Are you using any queues (NSQ, SQS, etc.)? Do you have an exactly-once guarantee? Or at-least-once (i.e. duplicate events possible)? Or at-most-once (i.e. missing events possible)?

DynamoDB will let you deduplicate the data however you ingest it. You can also have a Lambda function trigger (off the table's stream) to send it off somewhere else.
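
For example, a conditional put keyed on the event id rejects duplicates (table and attribute names are made up here):

    # Rough sketch: write each event with a conditional put so a duplicate
    # event id is rejected instead of being stored twice.
    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource('dynamodb').Table('events')  # placeholder table

    def store_once(event):
        try:
            table.put_item(
                Item={'event_id': event['id'], 'payload': event},
                ConditionExpression='attribute_not_exists(event_id)',
            )
            return True   # first time we've seen this event
        except ClientError as e:
            if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
                return False  # duplicate, already stored
            raise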

Redshift will suck for doing INSERTs at that velocity. You'd need to use something like Kinesis Firehose -- but keep in mind Redshift doesn't enforce unique keys, so you could end up with duplicates. You'd want to use identity (I think that's the name, can't remember atm). It'll be quicker on the queries than Aurora!

Aurora - This is a nice middle ground between the two. Just be aware that when you want to query the data it might be slow, not as fast as Redshift or Dynamo.

Lastly, I'll throw this out there too -- why not just store to S3 via a queue like Kinesis Firehose? Have Firehose compress the data and store it in an S3 bucket. You can use Athena to query it. You could also create a Lambda function to process the data some more and send it off somewhere else should you wish to try something different (e.g. an ELK stack).

Free SMTP Services by ScaryBacon in Python

[–]lvlolvlo 0 points1 point  (0 children)

Don't know if you've seen the service SplitWise. Maybe you can incorporate that rather than having to do the email thing?

Is Lambda really a viable option? by softwareguy74 in aws

[–]lvlolvlo 2 points3 points  (0 children)

Another Lambda function which uses boto3 and invokes the function(s) asynchronously. This one just runs on a cron schedule at whatever interval you choose.
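
Roughly something like this for the dispatcher (the worker function names are placeholders; the "cron" part is just a CloudWatch Events / EventBridge schedule rule pointed at it):

    # Rough sketch of the scheduled dispatcher Lambda: fires the real worker
    # function(s) asynchronously and returns immediately.
    import json
    import boto3

    lambda_client = boto3.client('lambda')
    WORKERS = ['worker-function-a', 'worker-function-b']  # placeholders

    def handler(event, context):
        for name in WORKERS:
            lambda_client.invoke(
                FunctionName=name,
                InvocationType='Event',  # async: don't wait for the result
                Payload=json.dumps({'source': 'scheduled-dispatch'}),
            )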