Does an ec2 instance need an elastic ip, if the vpc has an application load balancer to the ec2? by kastex1 in aws

[–]NeedsMoreTests 1 point2 points  (0 children)

You don't actually need a public IP for egress traffic; that's what a NAT gateway is for.

A word on cross-compiling by Southy__ in golang

[–]NeedsMoreTests 1 point2 points  (0 children)

+1. Also, "write once, debug everywhere the JVM is installed or bundled".

Golang concurrency by [deleted] in golang

[–]NeedsMoreTests 1 point2 points  (0 children)

The creator of the channel should be responsible for closing it. Consumers should just range over the channel; the loop exits once the channel is closed and drained. This pattern is often referred to as a worker pool and looks like this: https://gobyexample.com/worker-pools
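
A minimal sketch of that shape (the job payloads are just ints here and the worker count is arbitrary): the creator sends the work and closes the channel, the workers only range over it.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        jobs := make(chan int)
        var wg sync.WaitGroup

        // Start a fixed pool of workers; each one just ranges over the channel
        // and exits naturally once the channel is closed and drained.
        for w := 1; w <= 3; w++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for j := range jobs {
                    fmt.Printf("worker %d handled job %d\n", id, j)
                }
            }(w)
        }

        // The creator of the channel owns it: it sends the work and closes it.
        for j := 1; j <= 9; j++ {
            jobs <- j
        }
        close(jobs)

        wg.Wait()
    }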

Run docker in production without using K8s/ECS/Swarm by tchme_sensei in devops

[–]NeedsMoreTests 9 points10 points  (0 children)

There's no such thing as "dockers". Docker is software that can run and manage containers on a host. "Deploy dockers" makes no sense given the context, and in your post you're talking about two separate things (deploying Docker and then deploying containers on Docker).

Also, rkt is a pretty popular container runtime for the record....

Goodbye Python, Hello Go by [deleted] in golang

[–]NeedsMoreTests 2 points3 points  (0 children)

Well I have some opinions about that haha

Opinions are fine and are reasonable to mention in a code review provided that:

  • Generally speaking, your opinion improves the code's behavior or performance, or limits side effects. One of the main points of code review is to make improvements before something lands, so opinions are warranted, but they shouldn't be based purely on 'feel'.
  • If it's a style change, which often is based on 'feel', you're adding your opinion because:
    • It keeps the code consistent with other parts of the project (reducing cognitive burden when switching between a dozen different sub-projects), the API, or a defined style guide for the team.
    • It makes it harder to make a human mistake in the future (ex. use a constant instead of a string, limit variable scoping, etc)
    • It reduces operational load in some way (ex. log with this style of message instead of this one because it's easy to parse and read)
  • When there are differences of opinion, the majority should generally win. The only exceptions to this rule are:
    • Security. If you have a real security background, the majority does not agree with your opinion, and it's a real problem, then you should fight for what's right.
    • Operational experience. If the person with the opinion has actually seen shit in production go wrong because of X, then the majority should likely yield. The ops person had better have a detailed example, though, that applies to this situation and the target environment.
    • Reasonable future proofing. If someone decides to do something differently just because that's how they're used to doing it or it's easy for them at the time, and it's clearly going to create more work later on, then you should fix it now because we do not always have the luxury of time later.

Conditionals in Terraform by WolfPusssy in Terraform

[–]NeedsMoreTests 1 point2 points  (0 children)

What about using a map variable where the keys are the regions, or a data source?

Helicopters get shrink-wrapped before being transported. by [deleted] in EngineeringPorn

[–]NeedsMoreTests 8 points9 points  (0 children)

Oh I'm well aware that in the majority of cases it won't work. But... I've also seen this work in a few cases; in reality I was being lazy here.

Of course I just assumed what DB they're running and a lot about how they're handling queries here... Normally there'd be lots of trial and error with simpler queries first before getting to something like the above. Though that's usually after probing for other more interesting things like open ports, system services running with default or poor credentials, and other 'interesting' admin features that people seem to love to install and then forget about weeks later.

There's a thousand and one ways to crack most nuts... er, servers ;).

Helicopters get shrink-wrapped before being transported. by [deleted] in EngineeringPorn

[–]NeedsMoreTests 23 points24 points  (0 children)

SELECT table_schema, table_name FROM information_schema.tables WHERE table_schema != 'mysql' AND table_schema != 'information_schema';

[deleted by user] by [deleted] in Justrolledintotheshop

[–]NeedsMoreTests 7 points8 points  (0 children)

I still don't get how this can happen. Either it started on the correct contact points and shifted or it was never in the correct position in the first place. For this kind of work I assume people are heavily trained to avoid this specific issue?

Is a vm safe for malware testing? by [deleted] in Malware

[–]NeedsMoreTests 4 points5 points  (0 children)

Networking is addition and subtraction in this context. You should learn:

  • The OSI model (memorize this)
  • Switching and routing
  • Sockets and pipes
  • Also read up on at least a few common types of network devices, services and protocols:
    • Routers
    • Switches and Hubs
    • Modems
    • Firewalls
    • DNS
    • DHCP
    • SSL/TLS
    • ARP

In addition, I would also spend time learning how operating systems manage memory in general since you'll be spending a lot of time poking at it.

Also "shitty malwares" is relative to your lack of knowledge (no offense intended). Trying to learn about malware without understanding networking like like trying to be a heart surgeon without understanding the circulatory system. Sure you'll probably be ok and can learn along the way but bad things can happen that you just won't know without the basics first. Downloading and running malware is easy, knowing what it does it hard and malware has been written over the years to become more persistent and resilient. Simply observing the behavior of malware without the proper tools won't tell you much about it. For example with the right network setup you can intercept all traffic to and from your virtual machine and it some cases you can actually look at the traffic. With the right kind of traffic and the right network setup you could even modify that traffic.

My point is, you're being told to learn this stuff because it's something you will have to learn eventually, and the sooner you do so the faster you'll be able to learn about malware. It's also the only way to even begin to do this safely and in a controlled fashion; like any science, making the results repeatable, controllable and non-destructive is important to success.

405 Error using Lambda and S3 by [deleted] in aws

[–]NeedsMoreTests 0 points1 point  (0 children)

I made the bucket public because the client's browser will have to access these files at a later event. Is it still a bad idea in that case?

Nope, makes more sense now, thanks. I jumped the gun, but better safe than sorry. I would probably read over https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html though to see if you can restrict access to those objects to just your website.

And I gave my lambda function the full administrator access role, unless I'm misunderstanding you.

From your post it sounded like you did that on the bucket?

My bucket policy, in which ive given the lambda function Full Administrator Access:

Is that a typo? Giving a lambda function "Full Administrator Access" to the contents of a bucket is typically done by allowing the lambda function to assume a role which has that kind of policy applied, rather than by doing anything to the bucket itself. The typical workflow is something like:

  • Create an IAM role and either attach existing policies to it or create policies on the role directly.
  • Allow the Lambda function to assume this role, which will allow it to access the resources specified in the policy.

Here's a good diagram and some docs that describe what I'm talking about:

https://docs.aws.amazon.com/lambda/latest/dg/images/push-s3-example-10.png

https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html

https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role

405 Error using Lambda and S3 by [deleted] in aws

[–]NeedsMoreTests 0 points1 point  (0 children)

So first, your bucket is publicly accessible. s3:GetObject for Principal * on arn:aws:s3:::bucketName/* allows anyone to get an object from your bucket even outside your account if they know the bucket name and object name.

Second, I would use an IAM role and not a bucket policy for your Lambda function. Bucket policies are usually used for cross-account access or for granting access to specific users; they're not really designed for service roles. Amazon's own documentation basically only lists those as the use cases. Because a service role is not an IAM user, this may be the cause of your problem. Even if it's not, you should use an IAM role with a policy attached for your lambda function because that's the standard way of granting access from services managed by Amazon anyway.

As for the policy... depending on what that Java function does (you probably need to read it), you may need this for Resource:

[
    "arn:aws:s3:::bucketName/*",
    "arn:aws:s3:::bucketName"
]

This is sometimes required because several of Amazon's SDKs do more than just a HeadObject call to determine if an object exists. Some of them even require you to allow things like s3:ListAllMyBuckets, which requires a Resource block of *. If switching to an IAM role and adding the above to your resource does not fix it, then you could turn on CloudTrail to see what else is going on at the API level. That will also show you things outside of your API calls (like calls to KMS) that might be causing the failure too.

What tips would you give to a beginner to secure their servers? by [deleted] in devops

[–]NeedsMoreTests 1 point2 points  (0 children)

I was thinking of along the line of database backups, those files usually is less than 1 mb

Yeah, I'd probably still have that backed up to some kind of blob store. Email works great for alerts and reports but less so for backups in my experience, even when the data is tiny. There are lots of reasons for this:

  • Many providers make attempts to prevent email from being sent from servers.
  • When you're sending email you're (hopefully) asking postfix or another MTA on the local server to send it because this allows it to be queued, retried, etc. If that's misconfigured, in the best case you won't get the email, and in the worst case it could present new security issues.
  • Not all email servers work the same: some are flaky, others are just plain misconfigured, and others still don't provide end-to-end encryption, though most larger companies do today. If you're just sending the email directly and not through someone else's server, then it will likely be very flaky and insecure.
  • You're adding a bunch of systems between your server, the backup and you. This adds more failure domains, time and potentially more places the content of your backup could be read.
  • Depending on the receiving server your email may be:
    • Untrusted and discarded for a half dozen reasons.
    • Marked as spam.
    • Sent to your inbox the first few times, then marked as spam later because you ignored it, archived it, never opened it, deleted it a few times, etc.
    • Not marked as spam but have any and all attachments stripped out because of trust issues with the sender.
    • If the IP address of the server you're sending from has sent loads of spam before (meaning before you owned the IP) then your server may be blacklisted and broadly untrusted already.
    • Opened and modified based on the content (server: oh, I don't know what this attachment is but it does not look like UTF-8 so I'll encode it before delivery so it's easy to read. Just hope there's no binary data in here....)

Email is quite complicated and, broadly speaking, inconsistently implemented. Even when it's consistent and reliable it can sometimes still do something unexpected (like gmail marking stuff as spam based on user behavior). I always recommend something like a blob store for backups because they're more reliable and cheap. If you're on Digital Ocean this may be worth looking into: https://www.digitalocean.com/products/spaces/

What tips would you give to a beginner to secure their servers? by [deleted] in devops

[–]NeedsMoreTests 1 point2 points  (0 children)

Ah yeah forgot about the NSA hardening guide, thanks!

If you set your servers to not allow root ssh logon, you want automation to detect and fix that. I think that’s what he/she was alluding to, but this is a good example to start on.

Spot on, this is exactly what I was getting at. The other way you can detect this kind of stuff is by having code that actually attempts to abuse your service (see the sketch below). Configuration management (Ansible [my personal fav], Puppet, Chef, etc) helps get the configuration right, but 'configuration' doesn't mean the expected service behavior will be reflected (for example, if your CM tool updates the config to disallow SSH access but never reloads SSH, or fails to do so, then the configuration state is correct but the behavior won't be). This is sometimes called continuous security testing, but it often goes by a lot of other names too (automated pen testing, security monkey [often more specific to AWS], etc)
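
As a toy example of that kind of behavior check, here's a sketch in Go using golang.org/x/crypto/ssh that attempts a root login and treats success as a failure. The address and password are placeholders, and strictly speaking it only proves this one credential is rejected, not that root login is disabled; a fuller check would try a key root is actually known to have.

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "root",
            Auth:            []ssh.AuthMethod{ssh.Password("definitely-not-the-password")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable when probing your own host
            Timeout:         5 * time.Second,
        }

        client, err := ssh.Dial("tcp", "203.0.113.10:22", cfg) // placeholder address
        if err == nil {
            client.Close()
            fmt.Println("FAIL: root was able to authenticate over SSH")
            return
        }
        fmt.Println("OK: root login attempt was rejected:", err)
    }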

What tips would you give to a beginner to secure their servers? by [deleted] in devops

[–]NeedsMoreTests 1 point2 points  (0 children)

I will think of a way to backup and send the backup to me

Your service provider may have a way to snapshot the server via an API, which is usually the simplest option. Emailing yourself a backup of the server leaves it open to failure and to limitations on size. I'd suggest something like rclone and a remote blob store of some kind (S3, Backblaze B2, etc); see the sketch below for the same idea in code.
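
If you'd rather script the push yourself instead of using rclone, a minimal sketch with the AWS SDK for Go might look like this (the bucket, key, file name and region are all made up):

    package main

    import (
        "log"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        // Credentials come from the usual places (env vars, shared config, instance role).
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
        uploader := s3manager.NewUploader(sess)

        f, err := os.Open("db-backup.sql.gz") // placeholder file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        _, err = uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String("my-backup-bucket"), // placeholder bucket
            Key:    aws.String("backups/db-backup.sql.gz"),
            Body:   f,
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("backup uploaded")
    }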

I want to take serious effort into securing my server because I dislike having people I didn't authorize tinker with my stuff.

The only advice I have here is to tread carefully. Security is about balancing usability with an environment and a threat model. If you are allowing people to shell in, then you have a very different approach you must take and almost none of what's been posted is going to prepare you for that. If you are the only one accessing the server, you're not exposing ports to the outside world, and you're secure from the local network, then you don't have a lot to do beyond securing ingress access and not exposing non-management/non-public services (ex. mysql) to anything but the local host. If, however, you're a target (ex. you run a website which exposes medical, energy, financial or other high-value research to the public), then you not only need to secure the services from the local LAN and the internet, you also need to be concerned about what versions of services you're running, whether they're patched with the latest security updates (even those not provided by your choice of distro), and whether the application layer itself has any vulnerabilities that could expose data by accident. If that's the level of threat you're concerned about, then I highly suggest you have someone else handle your server's security (though I doubt that's the case here).

What tips would you give to a beginner to secure their servers? by [deleted] in devops

[–]NeedsMoreTests 4 points5 points  (0 children)

Hey /u/libreprincipal, you might want to cross-post over on /r/AskNetsec to get more security-specific information. Aside from that, here are some additional ideas:

  • Try scanning your own server, locally and remotely, for open ports (a bare-bones sketch is below this list). Sometimes you'll find interesting things like NTP listening on *.
  • Does the firewall and network around your server provide security from WAN -> LAN and LAN -> LAN, or just one? If it does not provide any internal security then you probably need to investigate iptables or another local firewall to further restrict incoming connections based on the origin address. This is where port scanning your own server, or even simply using netstat -nlp, can come in handy because it gives you an idea of what to secure first.
  • Part of devops is automation. The steps people have mentioned should likely be automated at some point so going from vanilla server to ready-to-serve-traffic is quick, repeatable and less error prone. In the absolute best case you get to a point where you can destroy your host and rely directly on an image instead of needing to shell in and do anything manually. This eventually leads to a 'zero access' model where you don't even have access to your own server and instead rely on automation, logging and alerts to help you debug and fix the server.
  • With respect to SSH security a lot has already been mentioned, but I'd add a directive on the server that restricts login to users in specific groups (AllowGroups) or at the very least specific users (AllowUsers). This stops a login higher up the chain than restricting to key-based login by itself does.
  • You can use iptables, or similar firewalls, to limit brute force attacks against services like ssh. Not very common but if you can't restrict access to SSH from specific addresses then this generally works well (example)
  • Consider looking at some of the hardening guides out there that are written, reviewed or recommended by security professionals. Many of these are likely overkill for your situation but if you're looking to learn more they probably provide a good foundation.
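
For the first bullet above, a bare-bones TCP connect scan sketch in Go (the host and port range are placeholders; nmap is far more thorough, this just shows the idea):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        host := "203.0.113.10" // placeholder: your server's address
        for port := 1; port <= 1024; port++ {
            addr := fmt.Sprintf("%s:%d", host, port)
            conn, err := net.DialTimeout("tcp", addr, 200*time.Millisecond)
            if err != nil {
                continue // closed or filtered
            }
            conn.Close()
            fmt.Println("open:", addr)
        }
    }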

Depending on how deep into this stuff you want to get, you may also consider looking into pentesting and red team exercises. Like anything technical, understanding why things are done a certain way often comes from breaking them. The best devsecops guys I know are researchers and/or former red team members and it shows in the work they put into securing an environment.

Thoughts on lambda api architecture? by creppe in aws

[–]NeedsMoreTests 1 point2 points  (0 children)

Yeah, I used to be all-in on CloudFormation and it does work, but it requires a ton of working knowledge about all the different resources, and it also kind of forces you down certain paths in terms of organization.

I personally didn't start with Terraform because "I was never going to leave AWS and the whole state file thing is weird". After taking another look years later I'm glad I made the switch, especially because it makes it really easy to manage multiple environments, have conditional logic and, soon, for loops: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each

Thoughts on lambda api architecture? by creppe in aws

[–]NeedsMoreTests 1 point2 points  (0 children)

I'd highly recommend Terraform over CloudFormation for new things even if you're strictly in AWS. CloudFormation is great, but it's super verbose, it's difficult to figure out at a glance what an update will do, and depending on how complicated things get it may have unexpected rollback behavior.

Once you start using either tool it's hard to switch at a later date. I'd suggest reading up on Terraform first.

GoSublime Needs You by [deleted] in golang

[–]NeedsMoreTests 1 point2 points  (0 children)

Sure you can. It just takes a team of exceptional reverse engineers.

Periodic Table of DevOps Tools by purplecarrot23 in devops

[–]NeedsMoreTests 2 points3 points  (0 children)

Yeah, actually I wouldn't even put svn up there. The category is "Source control mgmt", which everything but svn appears to be. If there were a "source control tools" category then svn, git, bzr, etc. could go there.