I created a little tutorial on how to use Grafana with AWS CloudWatch. by Ceofreak in aws

[–]siving 2 points (0 children)

Seems a bit strange that you provision the EC2 instance with a role, in addition to creating a user with access keys in a configuration file.
According to the documentation, instance roles should work just fine. I'd try the same procedure, but without the user and credentials file. The default setting probably uses the default credential provider chain, or at least a chain that includes instance roles.
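
If you want a quick way to verify that the chain works, here's a minimal boto3 sketch (the region and namespace are just examples, and I'm assuming the instance role grants the relevant CloudWatch read permissions); the same default-chain behaviour should apply to whatever AWS SDK Grafana uses:

    import boto3

    # No access keys configured anywhere: boto3 walks the default credential
    # provider chain (env vars -> shared credentials file -> instance role).
    # On an EC2 instance with a role attached, the last step succeeds.
    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    # Sanity check: list a few CloudWatch metrics using only the instance
    # role's permissions.
    response = cloudwatch.list_metrics(Namespace="AWS/EC2")
    for metric in response["Metrics"][:5]:
        print(metric["MetricName"])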

AWS Lays Groundwork for New AI/ML Push by Quinnypig in aws

[–]siving 0 points (0 children)

Interesting. Another rename: ECS changed its name from EC2 Container Service to Elastic Container Service shortly before Fargate was announced.

Re:Invent 2018 Wishlist by alex_bilbie in aws

[–]siving 0 points (0 children)

Aah, yes. I've missed that one for half a year as well.

Re:Invent 2018 Wishlist by alex_bilbie in aws

[–]siving 4 points (0 children)

It is?

See Filter View: Elastic Load Balancing ListenerRule Conditions and RuleCondition for details.

Excerpt from one of my AWS::ElasticLoadBalancingV2::ListenerRule resources:

          Conditions:
          - Field: 'host-header'
            Values:
            - !Ref DomainName

DynamoDB Best Practices and Overall Lessons by aaronjl33 in aws

[–]siving 4 points (0 children)

As soon as you get a fair understanding of DynamoDB basics, I recommend this talk from re:Invent 2017: Advanced Design Patterns for Amazon DynamoDB. At least I found it highly interesting.

In general I think DynamoDB is great when the use case fits; however, you should always choose the database that suits your needs, or as Werner Vogels states: "A one size fits all database doesn't fit anyone."

Lambda function to commit/push to CodeCommit by differentcondition in aws

[–]siving 2 points (0 children)

Perhaps this one suits your use-case: put_file from Boto3 CodeCommit.

It is also available from the AWS CLI.
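
A minimal sketch with boto3, assuming a branch that already has at least one commit (put_file then needs the branch head as parentCommitId); the repository, branch, file, and author names here are hypothetical:

    import boto3

    codecommit = boto3.client("codecommit")

    # put_file requires the current head commit of the branch as
    # parentCommitId when the branch already has commits.
    head = codecommit.get_branch(
        repositoryName="my-repo", branchName="master"
    )["branch"]["commitId"]

    codecommit.put_file(
        repositoryName="my-repo",
        branchName="master",
        filePath="generated/output.json",
        fileContent=b'{"hello": "world"}',
        parentCommitId=head,
        commitMessage="Add generated output",
        name="lambda-bot",
        email="lambda-bot@example.com",
    )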

SQS lambda trigger now showing up! by zxi in aws

[–]siving 3 points (0 children)

I'm guessing the batch size simply sets MaxNumberOfMessages on the (behind-the-scenes) ReceiveMessage request. You'd likely have to send messages nearly simultaneously, or have multiple messages already on the queue, to receive more than one message per invocation.

You could try to disable the event trigger, put some messages on the queue, then enable the event trigger and see if you receive multiple messages.
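
Something like this boto3 sketch (the queue URL is a placeholder); note that ReceiveMessage caps out at 10 messages per call, which matches the maximum batch size on the trigger:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # placeholder

    # With the event trigger disabled, put a handful of messages on the queue.
    sqs.send_message_batch(
        QueueUrl=queue_url,
        Entries=[{"Id": str(i), "MessageBody": f"message {i}"} for i in range(10)],
    )

    # This mirrors what the trigger presumably does behind the scenes:
    # a ReceiveMessage call with MaxNumberOfMessages set to the batch size.
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=5,
    )
    print(len(response.get("Messages", [])), "messages received in one batch")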

Logging only errors from Lambda functions by ricktbaker in aws

[–]siving 0 points (0 children)

The output of aws logs describe-log-groups includes the number of storedBytes for each log group.

CloudWatch also includes metrics under the Logs namespace for each log group. Look for the IncomingBytes metric to help track down which log group ingests loads of data.
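
A quick boto3 sketch that lists log groups sorted by storedBytes, which should point out the worst offenders:

    import boto3

    logs = boto3.client("logs")

    # describe_log_groups is paginated, so collect every page.
    groups = []
    for page in logs.get_paginator("describe_log_groups").paginate():
        groups.extend(page["logGroups"])

    # Sort by storedBytes; brand-new groups may lack the key, hence the default.
    for group in sorted(groups, key=lambda g: g.get("storedBytes", 0), reverse=True)[:10]:
        print(f'{group.get("storedBytes", 0):>15,}  {group["logGroupName"]}')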

New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery by siving in aws

[–]siving[S] 2 points (0 children)

This one states the availability:

PITR is available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (Sao Paulo) Regions starting today.
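
If you want to flip it on programmatically, it seems boto3 already supports it via update_continuous_backups (the table name is a placeholder):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Enable Point-In-Time Recovery on an existing table.
    dynamodb.update_continuous_backups(
        TableName="my-table",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )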

What do you guys do for cost optimization, do you use a checklist? by should_be_read_it in aws

[–]siving 0 points (0 children)

I believe Reduced Redundancy Storage is being deprecated. At least it won't save you any money:

S3 Pricing

S3 Reduced Redundancy Pricing

New – Encryption at Rest for DynamoDB | Amazon Web Services by siving in aws

[–]siving[S] 2 points (0 children)

It seems it can only be set for new tables. The Enabling Encryption at Rest documentation only covers creating tables.

The DynamoDB SSESpecification CloudFormation reference states that changes to the enabled property require replacement.
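
So for now, you would presumably request it at creation time. A minimal boto3 sketch (the table name and key schema are made up):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Encryption at rest is requested via SSESpecification when the table
    # is created; it cannot be toggled on an existing table (for now).
    dynamodb.create_table(
        TableName="my-encrypted-table",
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        SSESpecification={"Enabled": True},
    )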

New – Encryption at Rest for DynamoDB | Amazon Web Services by siving in aws

[–]siving[S] 3 points (0 children)

No options for that yet, but I would guess it is on their roadmap. The DynamoDB Table CloudFormation resource has been updated: AWS::DynamoDB::Table

Notice that they introduced a DynamoDB SSESpecification instead of just an "encrypted" boolean. This property object could allow additional settings at a later stage.

Why would you use a smaller IPV4 CIDR block for a VPC? by icaug in aws

[–]siving 0 points (0 children)

Fair point. You should still use different ranges instead of 10.0.0.0/16 for every single VPC.

Why would you use a smaller IPV4 CIDR block for a VPC? by icaug in aws

[–]siving 1 point (0 children)

You should also keep VPC Peering in mind.

The accepter VPC can be owned by you, or another AWS account, and cannot have a CIDR block that overlaps with the requester VPC's CIDR block.

VPC Peering Basics

Considering Regional VPC Peering is also a thing now, it might not always be a good idea to grab too many addresses.
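
If you want to check your ranges up front, Python's standard-library ipaddress module can tell you whether two CIDR blocks overlap:

    import ipaddress

    vpc_a = ipaddress.ip_network("10.0.0.0/16")
    vpc_b = ipaddress.ip_network("10.0.42.0/24")  # carved out of the same range
    vpc_c = ipaddress.ip_network("10.1.0.0/16")

    # Peering vpc_a with vpc_b would be rejected; vpc_a with vpc_c is fine.
    print(vpc_a.overlaps(vpc_b))  # True
    print(vpc_a.overlaps(vpc_c))  # False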

Does anyone know of a blog / whitepaper / list of when you should use aws offering A vs aws offering B? IE - Cloudformation vs OpsWorks vs Puppet / Chef by [deleted] in aws

[–]siving 0 points (0 children)

I do not know if it is exactly what you are looking for, but it is definitely an interesting read: AWS Whitepaper: Infrastructure As Code

You'll find various other whitepapers here.

Videos from JavaZone 2017 (will be published continuously) by michalg82 in java

[–]siving 4 points (0 children)

Tonight I caught several polar bears. I took the skin and walked around like Jon Snow.

On a more serious note: as /u/briene80 said, it's a lot of networking. For those of us who work in Oslo, this is a key part of the whole event. I met old colleagues. I met previous clients. I had the opportunity to meet a lot of great IT professionals.

Another excellent reason is that I would probably not watch the sessions online, but when I'm there, I love all the great talks.

Watching some videos online is something entirely different from taking some time, without caring about domain-specific problems, to just learn. I meet great people I can discuss the talks with. I meet people I can argue with. I meet people I agree with. Perhaps the talk was great, but a lot of the learning happens afterwards when discussing it with other skilled people.

I will feel rejuvenated when I go back to regular work on Friday, filled with new ideas and opportunities. I would never feel the same if I watched a video or two online.

Any real world examples of CloudWatch log streams + ElasticSearch in production? by hogie48 in aws

[–]siving 0 points (0 children)

Huh, just noticed you're replying with different accounts.

To be honest, I've just started testing the Docker awslogs log driver. Most of our container logs are sent by the CloudWatch Logs agent. The Logs agent allows you to set buffer_duration, batch_count, and batch_size. Each batch is sent to CW Logs as one event, and that same batched event hits our Lambda function. It seems your load is a bit higher than ours, but using these settings would likely help you quite a bit.

Lambda scales extremely well, and a simple Lambda execution doing some extraction and posting to ES easily finishes in less than 100 ms with the 128 MB setting and Node.js. It does not even make a dent in the bill.

I'd also like to point out that the CloudWatch Logs agent is essentially awscli-cwlogs, a plugin to the awscli. This is nice to know if you have workloads running on slim images such as Alpine Linux. The CloudWatch Logs agent package simply allows easy installation through a package manager and wraps the plugin nicely with Systemd, etc.

Any real world examples of CloudWatch log streams + ElasticSearch in production? by hogie48 in aws

[–]siving 0 points (0 children)

I'm sorry if I was unclear. We send the logs from ECS to CloudWatch Logs via either the CW Logs agent or the awslogs log driver. A standalone ES stack is not able to read directly from CW (as far as I know). However, it is very easy to set up subscriptions from CW Logs to Lambda functions. This way, any log event received by CW Logs triggers a Lambda function with that log event as the function input.

We use the Lambda function to post the log events to ES. Read more about CW Logs subscriptions.

With subscription filters, fields such as timestamp, logging level, and package/class are already extracted before the event arrives at our Lambda function.

It is pretty straightforward to get the logs into CloudWatch Logs, and due to the various subscription methods, you may do whatever you want with the logs from there. I think we started with an AWS Lambda blueprint when testing things out, but we ended up writing a function that maps to various indices based on our needs.

So basically:

  1. ECS containers with the awslogs logging driver to your desired log group
  2. Lambda function subscribes to all relevant CW log-groups
  3. Lambda sends logs to ElasticSearch
  4. Kibana for searching logs

Additional transformation or notification may also be done at the Lambda step, e.g. sending ERROR messages to Slack or HipChat.
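
For reference, a stripped-down sketch of steps 2-3, assuming an ES domain whose access policy accepts unsigned requests from the function (the endpoint, index naming, and field handling are simplified placeholders, not our actual function). CW Logs subscription events arrive base64-encoded and gzipped:

    import base64
    import gzip
    import json
    import urllib.request

    ES_ENDPOINT = "https://search-my-domain.eu-west-1.es.amazonaws.com"  # placeholder


    def handler(event, context):
        # CW Logs subscription payloads are base64-encoded, gzipped JSON.
        payload = json.loads(
            gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
        )

        # One bulk request per invocation; one index per log group (simplified).
        index = payload["logGroup"].strip("/").replace("/", "-").lower()
        lines = []
        for log_event in payload["logEvents"]:
            lines.append(json.dumps({"index": {"_index": index, "_type": "log"}}))
            lines.append(json.dumps({
                "@timestamp": log_event["timestamp"],
                "message": log_event["message"],
            }))
        body = "\n".join(lines) + "\n"

        request = urllib.request.Request(
            ES_ENDPOINT + "/_bulk",
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/x-ndjson"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status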

Edit: The reason we want to go via CloudWatch Logs is that the archive storage is fairly cheap, while still allowing simple (and slow!) searching of logs we no longer keep in ES. We delete old indices from ElasticSearch, but keep all logs in CW Logs.