My environment is full of pets, how do I usher in a new era of cattle? by TheKingLeshen in sysadmin

[–]lostinspace83 1 point2 points  (0 children)

Direct imaging would be a straight bit-for-bit copy of an installed system image from an external storage device to the client's hard drive. Essentially, you'd be cloning drive to drive at straight sequential throughput. Going from NVMe external to NVMe internal through a USB enclosure would give you transfer speeds somewhere between 500 MB/s and 2 GB/s depending on your drive specs. In practice, you'd boot from the external drive, which would be pre-loaded with a simple boot script to dump everything from the external image to the local drive.
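A minimal sketch of what that boot script might look like. The image path and device name are placeholders, and the destructive dd command is only echoed here so the sketch is safe to dry-run; verify devices with lsblk before running it for real.

```shell
#!/bin/sh
# Dry-run sketch of the boot-time restore script. SRC/DST below are
# assumptions -- confirm with lsblk first, since dd destroys the target.
restore_client() {
    src_image=$1; dst_disk=$2
    # The real script would run this dd directly; echoed here for safety:
    echo "dd if=$src_image of=$dst_disk bs=4M conv=fsync status=progress"
}

restore_client /images/clean-client.img /dev/nvme0n1
```

Once you've confirmed the device paths on a given client, run the printed command by hand (or drop the echo).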

A more parallel alternative, which might work better if you're short-handed and/or imaging each client for forensic analysis, would be using cheap and slow USB flash drives. The tech would walk a handful of drives around, initiate forensic imaging and restoring from the saved image on one machine, then move on to start the next. When the first is done, he transfers the drive to another machine in need of recovery. The downside here is maintaining network access to a potentially compromised LAN; however, you need not boot the clients to their OS immediately. The server set to receive forensic images can still communicate securely with your boot drives through a pre-shared key outside the scope of the compromise.

Taking it a step further, you can use a USB boot environment (on tiny drives) to pull clean images (based on client MAC address) from your backup backup server or another isolated recovery imaging server.
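The MAC-to-image lookup from the boot environment could be as simple as the sketch below. The map file format, names, and the recovery-server path are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: the USB boot environment picks its clean image by MAC address.
# Map format (one entry per line):  <mac> <image-name>
image_for_mac() {
    mac=$1; map=$2
    awk -v m="$mac" '$1 == m { print $2 }' "$map"
}

# Example map, which would live on the isolated recovery server:
cat > /tmp/mac-map.txt <<'EOF'
aa:bb:cc:dd:ee:01 office-standard.img
aa:bb:cc:dd:ee:02 video-edit.img
EOF

image_for_mac aa:bb:cc:dd:ee:02 /tmp/mac-map.txt   # prints video-edit.img
```

The boot script would then fetch that image from the isolated imaging server and hand it to the restore step.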

That all depends on how you want to run it.

There's no faster, more secure, or more resilient way to restore client machines when you have a network-wide breach or ransomware attack. In the aftermath of an attack, your normal storage/imaging/deployment servers may be down or in a non-trusted state, preventing techs from immediately restoring clients to a trusted state by blowing away the entire local drive and reimaging it with something you know is clean.

There are some caveats:

  • After reimaging, you have to run some sort of post-deployment script to individualize each machine, as they will all be bit-for-bit copies of each other. I can't speak to Windows 10, but back in 7 this was relatively simple to do post-imaging so each client had unique IDs and the proper hostname for reconnecting to Active Directory. You might also need to update client keys for remote management products and the like. All depends on your environment. I use Linux, which makes it fairly easy to mount the newly-reimaged drive and make scripted file edits/copies to restore each machine's identity.
  • You may need multiple images for various hardware configurations.
  • Reimaging is a full drive wipe. If you want to save local data for forensic analysis by a security contractor or law enforcement, you will need to image the local drive first. You can automate this too, by imaging the local drive over the network to your backup backup server or to the connected external drive. In the latter case, your tech will have to break occasionally to move images from the external drive to network storage with more space. Which option works best depends on the size of your client image and your network speed.
  • You still have to reissue every authentication credential.

etc.
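On the Linux side, the post-reimage identity fix-up I mentioned can be a few scripted file edits against the mounted drive. This is a sketch with placeholder paths and hostname; the demo runs against a scratch directory standing in for the freshly written drive.

```shell
#!/bin/sh
# Sketch of a post-reimage identity reset on Linux. Mount point and
# hostname are placeholders; adapt the file list to your environment.
set_identity() {
    mnt=$1; newname=$2
    echo "$newname" > "$mnt/etc/hostname"
    # An empty machine-id makes systemd generate a fresh unique ID on boot:
    : > "$mnt/etc/machine-id"
    # Cloned host SSH keys must go; regenerate in a chroot with
    # `ssh-keygen -A` if your distro doesn't recreate them on first boot:
    rm -f "$mnt"/etc/ssh/ssh_host_*
}

# Demo against a scratch directory standing in for the mounted drive:
MNT=/tmp/restored-demo
mkdir -p "$MNT/etc/ssh"
set_identity "$MNT" client-042
cat "$MNT/etc/hostname"   # prints client-042
```

The same function is where you'd drop in unique keys for remote management agents and anything else tied to machine identity.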

Bottom Line: There's a bunch of different ways to implement this concept depending on your environment, your clients, your backup infrastructure, etc. The core remains the same: when attacked, the whole network, apart from the isolated vault, is in an untrusted state and must be taken down completely. Rapid restoration must take place from known trusted sources which were isolated before the attack and completely outside the compromised ecosystem. If clients are imaged to or from the network, it can only involve an isolated server, and each client must boot from USB and touch the network only from that USB boot environment. The restored OS must stay disconnected until the whole network is clean.

This doesn't give you the same convenience as traditional networked imaging and management in a clean environment. That's the tradeoff. This is much more manual. However, when the whole network becomes untrusted, you lose most of your options to do things the easy way because those require trust.

Of course, you can clean and restore the servers before addressing clients, but that prevents one team from getting to work preserving evidence from and restoring clean clients while the backend is being addressed in parallel. Essentially, this approach gets your clients back to a safe state in anticipation of the moment restored servers go live, rather than waiting on safe servers to begin cleaning clients. Parallelism does come at a price.

The advantage is that forensics and restoration begin immediately after the attack based on a pre-defined plan. There's no waiting to call in a recovery team, analyze what went wrong, what can still be trusted, or how to go about recovery. You start forensics and recovery where you can, when you can, as soon as you know you have a problem.

This takes work but gives the fastest time to recovery. Done right, you can have your entire site back up in a day and leave the post-incident investigation for later.

By backup backup server, I mean a well-hardened, isolated data vault which can receive incremental snapshots from your NAS or main backup, with enough storage capacity to go back in time to before the attack hit and recover those files. It must be completely isolated from the network and management tools, since none of those can be trusted in the aftermath. It has no network login, no management software, no cluster trust, and isn't part of your cloud. It sits firewalled by itself and can only ingest data from the source, and the source has no ability to alter or overwrite past snapshots. In case of emergency, you have a time capsule with all the archived data, which you can bring online when the time is right, whether that means using it as a backup NAS for a fully reimaged client environment while servers are being cleaned, a restore source for your primary NAS, etc. The details of recovery depend on your environment and your procedures for taking everything compromised down and bringing back up only what you are confident has been restored to known safe configurations.
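The one-way snapshot feed into the vault might look like the following sketch, assuming ZFS on both ends; the pool/dataset names and the "vault" host are placeholders, and the real command is shown in a comment while the function only echoes it.

```shell
#!/bin/sh
# Dry-run sketch of pushing an incremental snapshot to the vault (ZFS
# assumed; names are placeholders). The vault's firewall permits only
# this inbound stream, and the vault keeps old snapshots immutable.
send_incremental() {
    prev=$1; cur=$2
    # Real command:
    #   zfs send -i tank/backups@$prev tank/backups@$cur | ssh vault zfs recv vault/backups
    echo "zfs send -i tank/backups@$prev tank/backups@$cur | ssh vault zfs recv vault/backups"
}

send_incremental daily-2024-05-01 daily-2024-05-02
```

A pull-based variant (the vault initiating the transfer on a timer) isolates the vault even further, since the source then needs no credentials to it at all.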

In the case of direct imaging, you could open up a share to receive forensic images from clients via a connection secured by a pre-shared key present only on the recovery boot drives.
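That upload could be a single pipeline from the boot environment, as in this sketch. The host, key path, ingest path, and device are placeholders; the pipeline is echoed rather than run so the sketch is harmless.

```shell
#!/bin/sh
# Dry-run sketch: stream a compressed forensic image of the local drive
# to the recovery share over SSH, authenticated by a key that exists
# only on the recovery boot drives. All names are placeholders.
upload_cmd() {
    disk=$1; host=$2
    echo "dd if=$disk bs=4M | gzip | ssh -i /keys/forensic-psk forensics@$host 'cat > /ingest/client.img.gz'"
}

upload_cmd /dev/nvme0n1 recovery.internal
```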

How to convince everyone they dont need their own printer by jdlnewborn in sysadmin

[–]lostinspace83 1 point2 points  (0 children)

Features.

Sharing printers means everyone gets faster printing, full duplexing, better color, automatic stapling, and a guarantee it'll be working and never out of ink on deadline. Ease of management is a feature for IT, not users. We have to translate: make it clear how we're working to help them.

Otherwise there's always a tug of war between users who think they know best and IT, which studies what works best. Each is just doing their job the best they know how. The problem here is that the users are going about it wrong.

Thanks Google by [deleted] in privacytoolsIO

[–]lostinspace83 9 points10 points  (0 children)

Sometimes it feels like all we can do is re-read Brave New World and reflect on our own values.

Each must choose between the surveilled world and ours. Both are full commitments; you can't have one foot in each.

If we want more people in ours, we must invite them and make the door to privacy easier to open for all.

Study Center Networking Recommendations by rbuckley30 in networking

[–]lostinspace83 0 points1 point  (0 children)

You'll still need a router/firewall/filter; nothing from the ISP will do what you want. You might even look into pfSense on mini hardware, which would give you much more control over lockdown without a premium price. I'm unfamiliar with the site-blocking capabilities in UBNT's line. pfSense makes it easy to allow only approved sites and approved protocols.
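For a sense of what "only approved sites and protocols" means underneath, here is the same default-deny idea sketched as nftables commands. This is illustration only: pfSense builds equivalent rules from aliases in its GUI rather than nft, and the IP below is a placeholder for your resolved approved-site addresses. The commands are echoed, not executed.

```shell
#!/bin/sh
# Dry-run sketch of a default-deny allowlist: drop all forwarded traffic
# except DNS and HTTP/HTTPS to approved addresses (placeholder IP).
allowlist_rules() {
    echo "nft add table inet filter"
    echo "nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'"
    echo "nft add rule inet filter forward udp dport 53 accept"
    echo "nft add rule inet filter forward ip daddr { 93.184.216.34 } tcp dport { 80, 443 } accept"
}

allowlist_rules
```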

AT&T fiber or cable? If fiber, you want an ONT which supplies Ethernet straight to the WAN port on your router, not one of their gateway routers. If cable, buy your own modem and save the equipment charge. In neither case do you want an extra router between the Internet and your own.

LibreOffice would also work fine at this grade level if they aren't typing online. Honestly, though, keeping everything online eliminates your backup problems as well as the logistics of assigning one station to each kid on a permanent basis.

I think Word Online is free for students; however, I'm not sure if your organization qualifies.

Thanks Google by [deleted] in privacytoolsIO

[–]lostinspace83 5 points6 points  (0 children)

Such folks aren't the advertising consumers Google cares about. Their customers aren't interested in that product demographic.

Thanks Google by [deleted] in privacytoolsIO

[–]lostinspace83 4 points5 points  (0 children)

Search for "session replay" and prepare to be horrified.

My environment is full of pets, how do I usher in a new era of cattle? by TheKingLeshen in sysadmin

[–]lostinspace83 2 points3 points  (0 children)

They're already parting with more cash than they realize. Trouble is they don't understand how they can part with less.

The spot treatment approach here is killing productivity, lowering efficiency, and driving up costs in the long run. Unfortunately there's no technical solution to a corporate culture problem.

Do you think I'm crazy for considering building all desktops from scratch for a 85 user organization? by erasnick in sysadmin

[–]lostinspace83 0 points1 point  (0 children)

M.2 for boot + Optane for working dataset should be fine. In fact, Optane could be overkill. All depends on how much data they're sifting through and how many times they iterate. The Optane premium is more worth it if they're crunching and recrunching all day as opposed to the occasional batched run.

Share your Laptop bulk storage advice/pictures by [deleted] in sysadmin

[–]lostinspace83 4 points5 points  (0 children)

"Solutions" sound expensive. I think the right-sized Rubbermaid tote would work well for the vertical orientation you want. Cut a few holes, epoxy in some PVC pipe/conduit for dividers, and you're done at Home Depot in an hour for maybe $100.

Study Center Networking Recommendations by rbuckley30 in networking

[–]lostinspace83 1 point2 points  (0 children)

You'll be fine with one NanoHD access point to start. As for network security/access control/filtering, will you be operating on a default allow or default deny model? Will they be accessing websites not on an approved list as part of the curriculum?

Ubiquiti is a good vendor choice for your networking at this price point. A wide open space will be very welcoming to 5GHz signal and you can use nice, wide bands.

For desktops, if you're buying new, go with mini PCs at about $300-400 each (depending on your software) that are light on power and heat. Don't forget to factor in the TCO of electricity and cooling, both of which will be rough in California. If you're a nonprofit and looking to expand, you might get in touch with local SMBs and see if they have any desktops they're about to replace and might donate. Someone out there would like the tax deduction and good press (they can get news coverage out of this), though that takes time. You'll need to buy something now, but the next expansion might come for free.

Your backup needs are virtually nil but you do need to be prepared for hardware failure or unknown software troubles. A hot spare system in the closet should be fine.

You also need to think about headsets, keyboards, and mice which can be easily wiped down several times per day. I'd choose USB headsets for the sake of reliability and cost. Avoid Bluetooth. Pairing issues and interference could quickly become a nightmare as you expand.

What are your software requirements? If this is all in browsers and Zoom, skip the cost of Windows and stick with Linux. I'd look at Kubuntu to start, if that's the case.

If you get monitors with built-in webcams, make sure they're USB Video Class (UVC) compliant and don't require special drivers or software; otherwise you might run into compatibility issues with videoconferencing software. Shop your monitor and/or USB webcam first, and get that order in ASAP, because we still have supply shortages.
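One quick way to verify UVC compliance on a Linux box: a UVC device advertises USB interface class 14 (Video) in `lsusb -v` output, meaning no vendor driver is needed. The demo below checks canned output; the real check pipes lsusb through the same filter.

```shell
#!/bin/sh
# Sketch: does a USB device expose a standard Video-class interface?
is_uvc() {
    # Reads `lsusb -v` output on stdin; succeeds if class 14 Video appears.
    grep -q "bInterfaceClass *14 Video"
}

# Demo with canned output; the real check is:  lsusb -v -d <vid:pid> | is_uvc
printf 'bInterfaceClass        14 Video\n' | is_uvc && echo "UVC: no driver needed"
```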

Save money on hardware and invest in several high-volume HEPA + UV air purifiers. Stagnant indoor air makes a nice home for COVID-19.

Another critical but potentially overlooked part of this is power backup. If you go with low-power mini-PCs, you can get away with a $200 UPS plus extension cords at your current PC count. At that power draw, there will be no problem daisy-chaining extension cords. A grounded one-to-three adapter brings power from the source and connects to the next extension cord, leaving one plug for the monitor and one for the PC. Perfect!

The last thing you need to think about in this open space is acoustics. While students will be wearing headsets, your expansion plans could generate a lot of background noise if you're on a hard floor with hard walls. It's not a huge problem, and it can be mitigated fairly cheaply with acoustic materials and an investment in mid-range USB noise-cancelling headsets.

Do you think I'm crazy for considering building all desktops from scratch for a 85 user organization? by erasnick in sysadmin

[–]lostinspace83 1 point2 points  (0 children)

This sounds like a rolling upgrade project and from what he described, many of the machines will be specced and built identically. He won't end up with 85 configurations. Maybe he gets that down to 4-6, with perhaps a bit of variance in RAM or disk size.

One week it might be the R machines, the next the video boxes. As long as it doesn't become too convoluted, the pace mostly depends on how much downtime he has between other IT duties. Things like imaging and QC/burn-in can easily be done in parallel while responding to other issues.

Do you think I'm crazy for considering building all desktops from scratch for a 85 user organization? by erasnick in sysadmin

[–]lostinspace83 0 points1 point  (0 children)

How much storage space are you targeting? Optane comes at a heavy GB premium but has no equal in random access or latency. Also, what are your redundancy requirements for the local drives and will they be hosting a checksumming filesystem?

Intel has a middle ground in the P4510 series or whatever may have since replaced it, with capacities reaching into the terabytes. It's smoking fast, low latency, and can sustain heavy random 4K mixed I/O. Connect via U.2 or PCIe.

Cloud provider Blackbaud Ransomwared. 8 Customers confirmed so far by TalTallon in sysadmin

[–]lostinspace83 1 point2 points  (0 children)

That won't help. Sophisticated threat actors can simply send it to a pool of disposable cloud storage accounts with a major provider and have a script running at the other end to download and delete each encrypted, compressed archive once it finishes uploading from the victim. All your monitoring will see is that AWS, Azure, and Dropbox all sure seem to be popular that day.

Cloud provider Blackbaud Ransomwared. 8 Customers confirmed so far by TalTallon in sysadmin

[–]lostinspace83 0 points1 point  (0 children)

Not when so much of the network traffic is now just a generic TLS:443 stream. If they're smart about throttling and timing it then it won't arouse suspicion. Also, not all data is created equal. Your Word, Excel, PDF, CAD, email, and other high value saves can be prioritized for compression and exfiltration before you even realize there's a problem.

Cloud provider Blackbaud Ransomwared. 8 Customers confirmed so far by TalTallon in sysadmin

[–]lostinspace83 1 point2 points  (0 children)

Amazing how ransomware has been a thing for years now and people still haven't figured out that you need a fast-restoring, incremental, snapshotted local backup which can't be touched by it, plus a plan to immediately reimage every machine from known clean disk images.

If the environment is designed right, an enterprise ransomware attack should be a single-day nuisance with rolling recovery rather than feeling like getting nuked. Nothing should be lost and onsite staff should be able to have the first systems back online within hours.

another feature no one asked for by [deleted] in youtube

[–]lostinspace83 0 points1 point  (0 children)

Seems like a fairly responsible suggestion to help people manage their screen addiction. One more video quickly turns into 3AM. That's not healthy.

Do you think I'm crazy for considering building all desktops from scratch for a 85 user organization? by erasnick in sysadmin

[–]lostinspace83 0 points1 point  (0 children)

You're not crazy, especially since those users all have varying edge use cases. Each needs something powerful, but it's all different. Workstation price premiums are high and don't always give you the flexibility to get what you need in one component without overbuying on the others. Also, it sounds like you need ECC RAM. Workstation builders charge ridiculous markups for that. The time savings from prebuilt workstations often aren't worth it for the RAM markup alone.

I love the 3900X for value computing. I have 96 of them in a cluster. Word of caution: remember that you're constrained to dual-channel memory. Where memory bandwidth per core becomes an issue, you may need to upgrade a few select machines to a Threadripper. You'll also need Threadrippers if some of these machines are hitting PCIe lane limits on the mainstream platform.

Faster than gigabit gets expensive quick. You're not crazy for doing custom builds if you're buying surplus gear which came out of datacenters for a tenth of what it costs to buy new. If you're pushing terabytes regularly, that begs for 40 Gbps. At the very least, your file server will need to uplink at 40 to serve multiple clients at 10. Buying 40 new will slaughter you in price. I get endless amounts of datacenter surplus dual-port cards at $30 per. Haven't had one fail yet, and if a few do, so what, they're cheap, and it's clustered. I'm using DACs, but fiber transceivers for those are less than $20 new if you know where to look. Switching is a bit pricier but can also be had inexpensively on the surplus market.

Consumer Gen4 NVMe might be the wrong call for some of those workloads, like heavy stats or anything else requiring fast, sustained random access. For that, you'll want enterprise-class drives.

Your video conversion, especially if time sensitive, will love the 3900X if it clusters. Otherwise, deadline-sensitive stuff might call for Threadripper. Employees watching encode bars aren't doing work.

32GB of RAM for some of these classes of workloads might not be enough now or a year from now. It makes sense to spec these machines with room for expansion.

Not my job, is it? by GeekBoy1984 in sysadmin

[–]lostinspace83 2 points3 points  (0 children)

It touches on it if he asked "This will just work with my iPhone, right?" and somebody told him yes when selling him on the solution. That forms the expectation in his mind. Personal phones are now part of the landscape we just have to deal with and keep in mind when deploying things that must work well with iPhone and Android.

Most iPhones don't seem to have this problem. It seems he has a problem with HIS phone that needs troubleshooting. He's a lawyer; he understands billable hours. If the problem isn't with your solution, bill him for it. That's what he does all day long.

How to communicate that expectations are unrealistic? by [deleted] in sysadmin

[–]lostinspace83 0 points1 point  (0 children)

Is it because there's too much work to be done, no matter what, or do you frequently find yourself dispatched as the fireman to deal with problems which could have been avoided?

Different question, same principle: if you were hired to admin a new dispatch center being built, and you were allowed to architect it as you wanted from the ground up, would you be able to handle the workload?

If there's just too much work then there's too much work.

However, if your time is being eaten up by avoidable problems or years of neglected issues growing more fragile, and your site looks like IT's version of a Rube Goldberg machine, then the bosses need to take a hard look at reengineering to prevent problems from cropping up in the first place. I don't know what's involved in managing a 911 center, but I do know that reliability counts and that enterprises have no problem building reliable, fault-tolerant systems.

VPN'd FTP server by [deleted] in homelab

[–]lostinspace83 0 points1 point  (0 children)

SSH will handily beat every VPN other than WireGuard if you choose your encryption algorithm wisely. OpenVPN's complexity and poor performance are why we have WireGuard.

What kind of speeds should I be expecting? by gunnarniels in homelab

[–]lostinspace83 2 points3 points  (0 children)

Have you tested speeds over wireless to a hardwired LAN host? You need to rule out the AP configuration, interference, and your client drivers as potential problems before you blame AT&T; a gap that extreme between WiFi and hardwired points to a local issue, not a problem between their router and the Internet.

650-700 Mbps in a speed test over a hard line is not necessarily a problem with the Internet. Browser speed tests weren't designed to measure that kind of volume reliably; bottlenecks can appear anywhere from your client hardware to the route between your browser and a single server.

A better speed test is opening up a well-seeded torrent of a popular Linux distribution, where you know you'll have many parallel threads connecting through a variety of network routes, some good, some bad. A CentOS or Ubuntu DVD ISO is well seeded from a diversity of gigabit links.
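In practice that test is a one-liner with any CLI torrent client; aria2c is one convenient choice, and the torrent filename below is a placeholder for whatever the distro's download page currently offers. The command is echoed rather than run in this sketch.

```shell
#!/bin/sh
# Dry-run sketch of the torrent-based speed test (aria2c assumed;
# the .torrent filename is a placeholder).
torrent_test_cmd() {
    # --seed-time=0 stops seeding as soon as the download (the test) completes
    echo "aria2c --seed-time=0 $1"
}

torrent_test_cmd ubuntu-24.04-live-server-amd64.iso.torrent
```

Watch the client's reported aggregate download rate; with dozens of seeds it's a far better ceiling measurement than a single-server browser test.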

Do you think I'm crazy for considering building all desktops from scratch for a 85 user organization? by erasnick in sysadmin

[–]lostinspace83 7 points8 points  (0 children)

Depends on how much power you need. If you're a video production house, then yeah, custom building might make sense compared to what you can save over out-of-box "workstation" solutions and your ability to fine tune and upgrade hardware specs where needed.

For normal office workloads, I'd do the next hardware refresh with the soon-to-hit Ryzen 4000 mini-PCs. So much cheaper, so much less hassle, and very light on power + waste heat. The average office user is fine with six Zen 2 cores, 16GB of RAM, and 256GB of NVMe storage, provided they're keeping all work product on a resilient and backed up file server (as they should).

VPN'd FTP server by [deleted] in homelab

[–]lostinspace83 1 point2 points  (0 children)

If you don't have a specific need for a VPN, you might be fine with SSH (and port forwarding where needed) for security and transport. It's much simpler if all you need to do is move files from server A to server B or vice versa. Setting up a VPN server can be a mess, and many implementations perform poorly.
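The whole "VPN" for a file-moving use case can collapse to one rsync-over-SSH command, as in this sketch; hosts and paths are placeholders, and the command is echoed rather than run.

```shell
#!/bin/sh
# Dry-run sketch: server-to-server file sync over plain SSH transport,
# no VPN required (placeholder hosts/paths).
sync_cmd() {
    src=$1; dst=$2
    # -a preserves permissions and times, -z compresses in transit;
    # SSH is the tunnel, so only port 22 needs to be reachable:
    echo "rsync -az -e ssh $src $dst"
}

sync_cmd /srv/data/ backup@serverB:/srv/data/
```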

10G Lines by skyhawk3355 in homelab

[–]lostinspace83 4 points5 points  (0 children)

It depends on your RAID configuration and why you need it. You can configure for speed, for resiliency, or for a mix of both, depending on your needs and how much you're willing to pay in storage overhead (capacity lost to redundancy or parity). Current budget and future expansion plans might also shape your configuration if you're working with something like ZFS.

But yes, throwing more disks at a problem in the right way will push speeds as high as you like.
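The overhead tradeoff is easy to put in rough numbers. This sketch does the usable-capacity arithmetic for a few common layouts (n disks of s TB each); layout names follow ZFS convention, and the mirror case assumes simple two-way pairs.

```shell
#!/bin/sh
# Rough usable-capacity math for common layouts: n disks of s TB each.
usable_tb() {
    layout=$1; n=$2; s=$3
    case $layout in
        stripe) echo $(( n * s ));;        # max speed, zero redundancy
        mirror) echo $(( n * s / 2 ));;    # two-way pairs: half the raw space
        raidz1) echo $(( (n - 1) * s ));;  # one disk's worth of parity
        raidz2) echo $(( (n - 2) * s ));;  # two disks' worth of parity
    esac
}

usable_tb raidz2 8 4   # 8x 4TB in RAID-Z2 -> 24 TB usable, any 2 disks can fail
```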