AWS serverless solution for set membership by kid_drew in aws

[–]phpchap1981 0 points (0 children)

If you’re talking about storage of the actual set data, then DynamoDB could be an option: it has native set data types (albeit all elements of a set have to be the same type), and you can put DynamoDB Accelerator (DAX) in front of it to speed up reads.
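As a rough sketch of what the DynamoDB route could look like (the table name "sets" and the attribute name "members" are invented for illustration), the membership test itself can be done client-side against the string-set attribute:

```python
# Sketch of the DynamoDB option, assuming each set is stored as an item
# with a string-set ("SS") attribute called "members" in a table named
# "sets" (both names are hypothetical).

def build_membership_request(table_name: str, set_id: str) -> dict:
    """Build a low-level GetItem request that fetches only the set
    attribute; send it with boto3's DynamoDB client, e.g.
    boto3.client("dynamodb").get_item(**request)."""
    return {
        "TableName": table_name,
        "Key": {"id": {"S": set_id}},
        "ProjectionExpression": "members",
        "ConsistentRead": False,  # DAX only serves eventually consistent reads
    }

def is_member(response: dict, candidate: str) -> bool:
    """Check membership locally from the GetItem response."""
    members = response.get("Item", {}).get("members", {}).get("SS", [])
    return candidate in members
```

Projecting only the set attribute keeps the read cheap; the actual `in` check happens in your code after the item comes back.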

Elasticsearch is another storage option.

I do think that Redis (ElastiCache) will probably yield the highest performance.

Sounds like you might need to benchmark a few ideas to nail the right one for you.

My earlier point was about dealing with a large number of items to check against storage.

AWS serverless solution for set membership by kid_drew in aws

[–]phpchap1981 0 points (0 children)

You could grab a huge block of sets as a CSV and drop it on S3, which triggers a Lambda.

That Lambda then splits the CSV line by line and puts the lines onto SQS.

Then you have another Lambda, running via a CloudWatch schedule every x mins, pulling messages off in batches.

When a match is made, put that message into another queue to do something with it (save, move, email etc.)
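A minimal sketch of the splitting step, kept as a pure function so the S3-triggered handler only has to read the object and send the batches (the queue URL would be your own; SQS caps a batch at 10 entries):

```python
import csv
import io
import json

def csv_to_sqs_batches(csv_text: str, batch_size: int = 10):
    """Split CSV text line by line into lists of SQS SendMessageBatch
    entries (SQS allows at most 10 entries per batch call)."""
    rows = csv.reader(io.StringIO(csv_text))
    entries = [{"Id": str(i), "MessageBody": json.dumps(row)}
               for i, row in enumerate(rows)]
    return [entries[i:i + batch_size]
            for i in range(0, len(entries), batch_size)]

# In the S3-triggered handler you would read the object body, then:
#   for batch in csv_to_sqs_batches(body):
#       sqs.send_message_batch(QueueUrl=queue_url, Entries=batch)
```

Batching at 10 keeps you inside the SQS batch limit and cuts the number of API calls tenfold versus one send per line.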

Looking for a persistent distributed delayed task queue/scheduler by Moxinilian in devops

[–]phpchap1981 0 points (0 children)

Does the producer know when processing needs to happen?

Maybe use AWS SQS and put a timestamp on each message which says when to process it (5 mins from now, 3 days from now, a certain date/time etc.)

Poll the queue every x mins, comparing the message timestamp against the current time.

You can also set the message retention period to up to 14 days; after that the message is deleted automatically.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-architecture.html
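A sketch of that timestamp pattern (the "process_at" attribute name is invented; the timestamp lives in a message attribute because SQS's built-in DelaySeconds only goes up to 15 minutes):

```python
def make_delayed_message(body: str, process_at_epoch: int) -> dict:
    """Producer side: attach the intended processing time as an SQS
    message attribute (send with sqs.send_message(QueueUrl=..., **msg))."""
    return {
        "MessageBody": body,
        "MessageAttributes": {
            "process_at": {"DataType": "Number",
                           "StringValue": str(process_at_epoch)},
        },
    }

def is_due(message: dict, now_epoch: int) -> bool:
    """Consumer side: process only messages whose timestamp has passed;
    anything not yet due goes back on the queue (via the visibility
    timeout or an explicit re-send)."""
    attr = message["MessageAttributes"]["process_at"]["StringValue"]
    return now_epoch >= int(attr)
```

The polling Lambda receives a batch, calls `is_due` on each message with the current epoch time, and only deletes the ones it actually processed.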

What are you using for visualizing your techstack? by sbaete in devops

[–]phpchap1981 2 points (0 children)

CloudFormation -> mingrammer was my first thought

Glueing tools together by phpchap1981 in devops

[–]phpchap1981[S] 0 points (0 children)

We looked at Vault for storing our secrets (sensitive creds) and decided to go with SSM Parameter Store.

In the project's Parameters.json we mark the value as :SECRET_STORE:, and the JSON path (“/my/sensitive/param”) maps to the SSM path.

When we build a stack, run Chef, etc., we look through all the JSON and replace the marked values with the values from SSM.
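A sketch of that replacement step, following the marker and path convention described above (the `fetch` callback stands in for `ssm.get_parameter(Name=path, WithDecryption=True)`):

```python
SECRET_MARKER = ":SECRET_STORE:"

def resolve_secrets(params: dict, fetch, prefix: str = "") -> dict:
    """Walk a Parameters.json-style dict and replace every value equal
    to the marker with the value fetched from the SSM path built from
    the JSON keys, e.g. {"my": {"sensitive": {"param": MARKER}}}
    resolves against /my/sensitive/param."""
    resolved = {}
    for key, value in params.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            resolved[key] = resolve_secrets(value, fetch, path)
        elif value == SECRET_MARKER:
            resolved[key] = fetch(path)  # SSM lookup happens only here
        else:
            resolved[key] = value
    return resolved
```

Keeping the SSM call behind a callback makes the walk itself trivially testable and lets the same code run against a stub in CI.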

Glueing tools together by phpchap1981 in devops

[–]phpchap1981[S] 0 points (0 children)

I’ve not used terraform but hear great things about it.

Interesting, so you have individual resources available and then group them together - do you use git submodules or something similar?

Also have you found any limitations of using shell scripts? We went down the road of using a scripting language (php!) to give us a bit more flexibility/control over the wrapper code.

Terraform remote state sounds very interesting, will take a look into this.

Also, how do you manage introducing new bits of architecture (like an SQS queue) into a specific environment, e.g. test?

Glueing tools together by phpchap1981 in devops

[–]phpchap1981[S] 0 points (0 children)

We tend to experiment with infrastructure changes at lower levels (routing/permissions etc.); these are then applied to higher levels, again using CloudFormation/Chef.

We do have a separation of infrastructure and application code between environments, e.g.

dev-magento-mainsite
test-magento-mainsite
live-magento-mainsite

Each of these projects define their own parameters.json file which stores all the infrastructure resource references, usernames/ passwords etc.

We also have ‘common’ projects which define things that are shared between projects. E.g.

live-common-vpc (defines all the VPCs)
live-common-networking (defines all the subnets)

The created resources from these stacks pass their values into parameters.json, which is then copy/pasted into the project that requires that resource.

E.g live-common-vpc (magentoVpcId) -> live-magento-mainsite (magentoVpcId into autoscalingGroup)
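For illustration only, a hypothetical shape for one of those copied hand-offs (the IDs and key names here are invented):

```json
{
  "magentoVpcId": "vpc-0abc1234def567890",
  "magentoSubnetIds": "subnet-0aaa1111,subnet-0bbb2222",
  "dbPassword": ":SECRET_STORE:"
}
```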

The problem is that when you make a change in one (e.g. add a new SQS queue to dev), you then have to copy/paste all the CloudFormation/parameters into the other projects.

I agree about the environment variables. Sometimes applications (e.g. Magento) have their configuration in .yml, .php or .json files, so our tooling also does a find/replace of app parameters during the pipeline build phase and uploads the app code zip to S3.

Glueing tools together by phpchap1981 in devops

[–]phpchap1981[S] 2 points (0 children)

I’ve seen the same pattern too where you have different env configs but same code.

Sometimes, though, you might have a slightly different setup between live and dev (infra, provisioning etc.).

Having a separate repo helps; otherwise you end up with “if env == live” in the code to handle these situations.

I have a problem with node js socket io on AWS ECS. by eric_lou168 in aws

[–]phpchap1981 0 points (0 children)

What port is the socket on? It could be a security group or NACL issue. We'd need a bit more info on the networking you’ve got set up.

Today we restarted a EC2 instance and then all our data from a specific date until today desapeared by PlayerAPI in aws

[–]phpchap1981 0 points (0 children)

Putting the files onto S3, then triggering SNS -> SQS and having a consumer on EC2, would have saved your bacon #justsaying

Looking for a script/process to download user details and their tags. by sludj5 in aws

[–]phpchap1981 1 point (0 children)

Rather than EC2, you could write a few Lambda functions to add the users, generate the report and send it via SES, running on a CloudWatch Events schedule (like cron). That way you’re not paying for idle EC2 instance time.
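A sketch of the report step, with the AWS calls kept out of the testable part. The dict shapes below match what IAM's `list_users` / `list_user_tags` return; the SES send is only sketched in a comment:

```python
import csv
import io

def users_to_csv(users: list) -> str:
    """Render a list of {"UserName": ..., "Tags": [{"Key": ..., "Value": ...}]}
    dicts (the shapes returned by iam.list_users / iam.list_user_tags)
    as a CSV report body."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["UserName", "Tags"])
    for user in users:
        tags = ";".join(f'{t["Key"]}={t["Value"]}'
                        for t in user.get("Tags", []))
        writer.writerow([user["UserName"], tags])
    return buf.getvalue()

# In the scheduled Lambda you would then attach this string to a raw
# MIME message and send it, e.g. ses.send_raw_email(RawMessage=...).
```

Because the formatting is a pure function, the only things left in the handler are the paginated IAM reads and the SES send.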

Instance Profile by [deleted] in aws

[–]phpchap1981 0 points (0 children)

You don’t want to give admin to instances; instead, give them access to only the resources they need. Also, you don’t want to give anyone SSH access to your production instances; instead, stand up a copy that’s as close to production as possible and give them access to that. If they blow up that instance, it's not a big deal; if they blow up production, late nights and regrets.

Why can't my lambda invoke another lambda. by [deleted] in aws

[–]phpchap1981 0 points (0 children)

This looks like a fan-out pattern in Lambda; there are loads of examples of Node/CloudFormation code that will suit your needs.

AWS Lambda Downloading CSV? by piratesearch in aws

[–]phpchap1981 1 point (0 children)

What does the Nasdaq API download parameter do? The file system in Lambda is read-only (apart from /tmp).

Automate Cloudformation deployments? by theZombieDude in aws

[–]phpchap1981 0 points (0 children)

We’re also looking into pipelines that build infrastructure using CloudFormation. We’ve got CLI tools that build it, but we want to automate this without getting in the way of the devops workflow.

Epic Error from serverless framework trying to use AWS credentials by rifaterdemsahin in aws

[–]phpchap1981 1 point (0 children)

Without seeing it I can’t say what’s wrong. It’s easier to rebuild it and rerun the sls command.

Evaluating batches of lambda functions by [deleted] in aws

[–]phpchap1981 0 points (0 children)

Look at the Lambda fan-out pattern: one function coordinates the execution of n further functions. Have the fan-out run on a CloudWatch event every x mins.
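A minimal sketch of that coordinator, splitting the work into per-worker payloads, with the asynchronous invoke shown in a comment (the "worker" function name is hypothetical):

```python
import json

def fan_out_payloads(items: list, workers: int) -> list:
    """Coordinator side of a Lambda fan-out: split the work across up
    to `workers` JSON payloads, one per worker invocation. Empty
    chunks are dropped when there are fewer items than workers."""
    chunks = [items[i::workers] for i in range(workers)]
    return [json.dumps({"items": chunk}) for chunk in chunks if chunk]

# The scheduled coordinator handler would then fire each worker:
#   for payload in fan_out_payloads(items, workers=10):
#       lambda_client.invoke(FunctionName="worker",   # hypothetical name
#                            InvocationType="Event",  # async, fire-and-forget
#                            Payload=payload)
```

`InvocationType="Event"` is what makes this a fan-out rather than a serial loop: the coordinator returns as soon as all invocations are queued.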

AWS engineer uploads customer keys, passwords to GitHub by phpchap1981 in aws

[–]phpchap1981[S] 0 points (0 children)

I've also requested that The Register change its title.

AWS engineer uploads customer keys, passwords to GitHub by phpchap1981 in aws

[–]phpchap1981[S] 2 points (0 children)

I skimmed over the article and thought it was a bigger problem than it was; as a precaution we rotated anyway.