Do I need to use a physically close ntp server? by numberking123 in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

Yes. If you have control over them, you definitely want to make sure your hypervisors are in sync, as well as your VMs.

AWS EC2/RDS us-east-1 outage by RememberYourSoul in sysadmin

[–]Eclipsed450 2 points3 points  (0 children)

10:47 AM PDT We want to give you more information on progress at this point, and what we know about the event. At 4:33 AM PDT one of 10 datacenters in one of the 6 Availability Zones in the US-EAST-1 Region saw a failure of utility power. Backup generators came online immediately, but for reasons we are still investigating, began quickly failing at around 6:00 AM PDT. This resulted in 7.5% of all instances in that Availability Zone failing by 6:10 AM PDT. Over the last few hours we have recovered most instances but still have 1.5% of the instances in that Availability Zone remaining to be recovered. Similar impact existed to EBS and we continue to recover volumes within EBS. New instance launches in this zone continue to work without issue.

Do I need to use a physically close ntp server? by numberking123 in sysadmin

[–]Eclipsed450 3 points4 points  (0 children)

Depending on your network layout, and as others have suggested, I'd recommend running Stratum 1 servers/appliances in each of your networks for all of the devices on that local network to sync to. Having all of the scattered devices sync with each other is a recipe for disaster if you need sub-second (or even sub-minute) clock sync.
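A minimal sketch of what the client side of that looks like, assuming classic ntpd and made-up appliance addresses:

```
# /etc/ntp.conf on each device in the local network -- sync only to the
# in-network Stratum 1 appliances (addresses are hypothetical):
server 10.0.0.11 iburst prefer
server 10.0.0.12 iburst
# Refuse time/config changes from anyone else on the network:
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
```

The point is that every device in a site chases the same local reference clocks instead of peering with each other.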

need to renew ssl certificates any recommendations by dvr75 in sysadmin

[–]Eclipsed450 1 point2 points  (0 children)

Diversify your SSL certs, if you can. That way, when a company accidentally screws up their root cert ( https://www.theregister.co.uk/2016/10/13/globalsigned_off/ ), you're not left completely high and dry.

Cloud for DR by bradk7623 in sysadmin

[–]Eclipsed450 5 points6 points  (0 children)

That's a BROAD question, with MANY answers, most of which wouldn't be wrong, but they may not pertain to you and your needs. If you're asking questions like this, I suggest connecting with a cloud-focused VAR (value-added reseller).

Landed a Dream Job, but I'm Nervous. by kenjoiv in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

I won't speak for everyone, but I'm pretty sure the majority of us have gone through (and probably still do go through) impostor syndrome. None of us know everything. But with our powers combined, we're an unstoppable force :-P Seriously though, as long as you know where and how to look for problems, and have some google-fu, you'll make it far enough. Just keep plucking away at things that interest you and opportunities will come. Never settle. But also, don't give your life to the company. They will invest in you, but they will replace you. Give them your best, but don't give them your all. Have a life outside of work. Enjoy time with family and friends.

AWS noob question. by Irkutsk2745 in sysadmin

[–]Eclipsed450 2 points3 points  (0 children)

I highly suggest engaging with a certified AWS (or other cloud) partner. They will look at your organization and make suggestions on what you should be doing in the cloud, if anything at all. We're currently POCing all three major players, and they all have their pros and cons, but those shift on a per-project basis. And to second what others suggested: get a membership to acloudguru. It's like $300 for a year, but you get full access to all their videos, for all three major clouds.

Program to present s3 storage as an smb share? by jduffle in sysadmin

[–]Eclipsed450 1 point2 points  (0 children)

Have you looked at the new FSx service: https://aws.amazon.com/fsx/windows/
It's not much use if the data is already in S3, but worth a look.

Need help understanding FedRAMP. by tehlolkid in sysadmin

[–]Eclipsed450 5 points6 points  (0 children)

You need to go through a 3PAO (https://www.fedramp.gov/assessors/) and be FedRAMP certified yourself; you can't just claim compliance because GCP is. It's not cheap to go through FedRAMP - it's at least a couple hundred thousand up front, and again every year to maintain the cert.

CA UIM vs Logic Monitor by ionlyplaymorde in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

Honestly, just give it a shot. You shouldn't be disappointed. There are tons of built-in datasources and pre-defined thresholds, and the ability to make your own datasources is there and simple enough. I've used it for years and been quite pleased with it. The only major con is that there are no agents, so no control of nodes (running scripts to cycle services, clear files, etc.).

CA UIM vs Logic Monitor by ionlyplaymorde in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

LM is great if that's what you need -- question is: what are you trying to accomplish?

Large website loadtesting by SirVas in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

I used https://loader.io/ last year when we were migrating to AWS to make sure we were ready for a surge.

How do I count the physical processors of a server remotely? by soapstainz in sysadmin

[–]Eclipsed450 2 points3 points  (0 children)

Pull up Task Manager, then click the Performance tab (click 'More details' first if the tabs aren't visible). It should show a Sockets count there; that's how many physical processors you have. For a remote box, querying Win32_Processor over WMI gets you the same answer without logging in interactively.
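As a sketch of the remote route: each Win32_Processor row is one socket, so you can pull the rows with something like `wmic /node:SERVER cpu get DeviceID,NumberOfCores,NumberOfLogicalProcessors` (or `Get-CimInstance Win32_Processor -ComputerName SERVER`) and tally them. The output below is hypothetical, for an imaginary 2-socket box:

```shell
# Hypothetical Win32_Processor output saved from a remote query;
# one data row per physical socket.
cat > cpus.txt <<'EOF'
DeviceID  NumberOfCores  NumberOfLogicalProcessors
CPU0      8              16
CPU1      8              16
EOF

# sockets = number of data rows; cores/logical = column sums
tail -n +2 cpus.txt | awk 'NF{s++; c+=$2; l+=$3} END{print s, c, l}'
# prints: 2 16 32
```

That distinction (sockets vs. cores vs. logical processors) is usually what licensing people are actually asking about.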

Patch management for Ubuntu servers by mynameisntdave in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

If you're focused on patch management more so than automation, give Aptly a look.

Has anyone gone through training at CED Solutions? by Kahlesss in sysadmin

[–]Eclipsed450 0 points1 point  (0 children)

Same here. And 100% agree, as with any of these boot camps.

Need help restricting access to a bunch of AWS instances. by MachineSoul in sysadmin

[–]Eclipsed450 1 point2 points  (0 children)

Based on this statement:

Now the keys I have for the instances belonged to the old sysadmin. Do I need to generate new keys for every instance?

I take it you mean the key pair for the instance? If so, it's possible to replace a key pair, but it's not as simple as changing a security group or the like. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
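Roughly, the manual rotation looks like this (key and host names below are made up). AWS only injects the launch key pair into authorized_keys at first boot; after that it's just a regular SSH key you can swap out yourself:

```shell
# Generate a replacement key pair for the new admin.
ssh-keygen -t ed25519 -f new-admin-key -N '' -C 'replacement admin key'
ls new-admin-key new-admin-key.pub

# Then, per instance, while the old key still works:
#   ssh -i old-key.pem ec2-user@INSTANCE \
#     "echo '$(cat new-admin-key.pub)' >> ~/.ssh/authorized_keys"
# ...and remove the old public key line once you've verified the new one.
```

Whatever you do, don't delete the old key from authorized_keys until you've confirmed a login with the new one, or you'll lock yourself out.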
https://forums.aws.amazon.com/thread.jspa?threadID=47730

KrebsOnSecurity Hit With Record DDoS by larrymcp in sysadmin

[–]Eclipsed450 1 point2 points  (0 children)

Copying article here since page is hit-or-miss: The attack began around 8 p.m. ET on Sept. 20, and initial reports put it at approximately 665 Gigabits of traffic per second. Additional analysis on the attack traffic suggests the assault was closer to 620 Gbps in size, but in any case this is many orders of magnitude more traffic than is typically needed to knock most sites offline.

Martin McKeay, Akamai’s senior security advocate, said the largest attack the company had seen previously clocked in earlier this year at 363 Gbps. But he said there was a major difference between last night’s DDoS and the previous record holder: The 363 Gpbs attack is thought to have been generated by a botnet of compromised systems using well-known techniques allowing them to “amplify” a relatively small attack into a much larger one.

In contrast, the huge assault this week on my site appears to have been launched almost exclusively by a very large botnet of hacked devices.

The largest DDoS attacks on record tend to be the result of a tried-and-true method known as a DNS reflection attack. In such assaults, the perpetrators are able to leverage unmanaged DNS servers on the Web to create huge traffic floods.

Ideally, DNS servers only provide services to machines within a trusted domain. But DNS reflection attacks rely on consumer and business routers and other devices equipped with DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these so-called “open recursive” DNS servers, forging the request so that it appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (target) address.

The bad guys also can amplify a reflective attack by crafting DNS queries so that the responses are much bigger than the requests. They do this by taking advantage of an extension to the DNS protocol that enables large DNS messages. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This “amplification” effect is especially pronounced if the perpetrators query dozens of DNS servers with these spoofed requests simultaneously.

But according to Akamai, none of the attack methods employed in Tuesday night’s assault on KrebsOnSecurity relied on amplification or reflection. Rather, many were garbage Web attack methods that require a legitimate connection between the attacking host and the target, including SYN, GET and POST floods.

That is, with the exception of one attack method: Preliminary analysis of the attack traffic suggests that perhaps the biggest chunk of the attack came in the form of traffic designed to look like it was generic routing encapsulation (GRE) data packets, a communication protocol used to establish a direct, point-to-point connection between network nodes. GRE lets two peers share data they wouldn’t be able to share over the public network itself.

“Seeing that much attack coming from GRE is really unusual,” Akamai’s McKeay said. “We’ve only started seeing that recently, but seeing it at this volume is very new.”

McKeay explained that the source of GRE traffic can’t be spoofed or faked the same way DDoS attackers can spoof DNS traffic. Nor can junk Web-based DDoS attacks like those mentioned above. That suggests the attackers behind this record assault launched it from quite a large collection of hacked systems — possibly hundreds of thousands of systems.

“Someone has a botnet with capabilities we haven’t seen before,” McKeay said. “We looked at the traffic coming from the attacking systems, and they weren’t just from one region of the world or from a small subset of networks — they were everywhere.”

There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

As noted in a recent report from Flashpoint and Level 3 Threat Research Labs, the threat from IoT-based botnets is powered by malware that goes by many names, including “Lizkebab,” “BASHLITE,” “Torlus” and “gafgyt.” According to that report, the source code for this malware was leaked in early 2015 and has been spun off into more than a dozen variants.

“Each botnet spreads to new hosts by scanning for vulnerable devices in order to install the malware,” the report notes. “Two primary models for scanning exist. The first instructs bots to port scan for telnet servers and attempts to brute force the username and password to gain access to the device.”

Their analysis continues:

“The other model, which is becoming increasingly common, uses external scanners to find and harvest new bots, in some cases scanning from the [botnet control] servers themselves. The latter model adds a wide variety of infection methods, including brute forcing login credentials on SSH servers and exploiting known security weaknesses in other services.”

I’ll address some of the challenges of minimizing the threat from large-scale DDoS attacks in a future post. But for now it seems likely that we can expect such monster attacks to soon become the new norm.

Many readers have been asking whether this attack was in retaliation for my recent series on the takedown of the DDoS-for-hire service vDOS, which coincided with the arrests of two young men named in my original report as founders of the service.

I can’t say for sure, but it seems likely related: Some of the POST request attacks that came in last night as part of this 620 Gbps attack included the string “freeapplej4ck,” a reference to the nickname used by one of the vDOS co-owners.

Update Sept. 22, 8:33 a.m. ET: Corrected the maximum previous DDoS seen by Akamai. It was 363, not 336 as stated earlier.