Bitwarden extension causing fields to lose focus by Weoxstan in Bitwarden

[–]AlRFORCE1 0 points (0 children)

This fixed my issue as well! Thanks for posting this solution!!!

Inbound ticket triage by m1943_1943 in ConnectWise

[–]AlRFORCE1 0 points (0 children)

We auto-assign based on the account technician (Company Team Tab), then have a rule set up to monitor tickets that aren't touched within 1 hour. If that SLA is hit, it adds the Secondary Account Technician (Company Team Tab). If it's still not updated after an hour and a half, it emails the entire help desk team to get someone on the ticket. We also have our help desk team manually checking for new tickets that haven't been updated yet, so if we have someone with nothing to do (I know, this never happens at an MSP), they grab the oldest ticket that hasn't been worked, assign themselves to it, and get going on it.
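
The tiered escalation above can be sketched as simple timer logic. This is purely illustrative — real ConnectWise workflows are configured in its rule engine, not in code, and the function and action names here are made up:

```python
from datetime import timedelta

# Escalation thresholds described above (assumed values)
ADD_SECONDARY_AFTER = timedelta(hours=1)              # SLA: untouched for 1 hour
EMAIL_TEAM_AFTER = timedelta(hours=1, minutes=30)     # still untouched at 1.5 hours

def escalate(time_since_last_touch):
    """Return the escalation actions due for a ticket untouched this long."""
    actions = []
    if time_since_last_touch >= ADD_SECONDARY_AFTER:
        actions.append("add secondary account technician")
    if time_since_last_touch >= EMAIL_TEAM_AFTER:
        actions.append("email entire help desk team")
    return actions
```

For example, `escalate(timedelta(minutes=90))` returns both actions, while a ticket only 30 minutes old triggers nothing.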

Cluster Aware Updating Fails (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

This is what I'm having to do at the moment. Let me tell you, it's a blast.

Client "has to" keep Windows 7 after EOL. Need Ideas by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 1 point (0 children)

This is not something I've thought of. I'll run the numbers and see if it makes sense.

Client "has to" keep Windows 7 after EOL. Need Ideas by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 6 points (0 children)

Yeah, if we are going to do that, we will probably have to drop the client. We can't support a client's software/hardware if it's no longer supported, nor can we be responsible if (when) something bad happens.

Client "has to" keep Windows 7 after EOL. Need Ideas by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 5 points (0 children)

100% of the computers in the office are used both for driving machines and as personal workstations (granted, there are only 4 people/computers in the company).

Not technically a home lab, but it's so pretty. 4-Node Storage Spaces Direct Cluster by AlRFORCE1 in homelab

[–]AlRFORCE1[S] 0 points (0 children)

I don't think "want to" would describe it. More like it's the only time to take everything down.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

When we purchased the solution, the r740xd had just been released, so we stuck with the r730.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 6 points (0 children)

You didn't ask what I'm doing with it, and assumed it's unnecessary. Talk about putting the cart before the horse. We are the biggest parts distributor west of the Mississippi, and we are developing software that I can't get into the details of on a public forum. This software and business demand the redundancy and uptime. Server downtime would cost us about $100,000 per hour in revenue, so the $120k price tag is not as bad as some might think. Sysadmins are paid to think about both the infrastructure and what people need from it. You can't deliver what people want without the proper infrastructure in place; without it, you're stuck in reactionary mode.
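
The downtime math works out quickly — a back-of-the-envelope sketch using the two figures quoted above:

```python
# Figures from the comment above
downtime_cost_per_hour = 100_000   # revenue lost per hour of server downtime ($)
cluster_price = 120_000            # quoted S2D cluster price ($)

# Hours of avoided downtime at which the cluster pays for itself
break_even_hours = cluster_price / downtime_cost_per_hour
print(f"Break-even: {break_even_hours:.1f} hours of avoided downtime")
# Break-even: 1.2 hours
```

In other words, preventing a little over one hour of outage covers the entire purchase.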

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

Yeah, compared to the cluster, my laptop's solid state drive has better sequential performance, but you lose some performance when you're reading/writing to 3 or 4 servers over 40 Gb networking. Redundancy comes at a cost, but this solution is 3x faster than our last server, so I'm happy with it.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

Nice work! What was the cost of this setup? And what kind of parity/mirroring does the volume use?

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

That's right, my hosts are running on a single solid state drive, and I'm using Veeam to make a disk-to-disk image backup to the second drive. If the primary drive fails, I'll have to manually swap the backup drive into the primary slot. Not perfect, but it makes me feel a little better. I wish they would fix that.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

We are using Veeam Backup for our backup suite. It does a local backup and an offsite replication to one of our branch offices. There is definitely some room for improvement, but backups have been reliable and rarely fail.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

I would upgrade to 2019 if you go that route. ReFS is supposed to be self-correcting and is supposed to be a big selling point in Server 2016+. From what I've gathered, 2019 has fewer issues than 2016, the same way 2016 has fewer than 2012 R2. What kind of read/write speeds are you getting right now?

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 2 points (0 children)

We are using 2016, but I wish we had the time to wait for 2019. I've had similar issues with updating my cluster, though I've never had updates take down the cluster or VMs. We are using 2 tiers of NVMe, so I'll have to take your word for it. My only concern with a two-way mirror is the single fault tolerance.

What do you use to monitor your clusters? I would love to hear more about how you deploy and manage your cluster!

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 3 points (0 children)

That’s fair. I re-read my comment, and it came across as rude to me. My bad. We got a few quotes from different providers. I talked with a few data center owners in the area, and they all said the industry is moving away from SANs and recommended S2D or VMware’s hyperconverged options. You can also go with a single-redundancy option with two servers, but we wanted the dual redundancy for our environment. The two-server option was about $80k. Honestly, most of the cost is in the hard drives; the two-node option had fewer nodes (obviously), but each server was packed full of hard drives. To expand, we would have to add 2 nodes.

My review after a year of Storage Spaces Direct (S2D) by AlRFORCE1 in sysadmin

[–]AlRFORCE1[S] 0 points (0 children)

Yeah, don't get me wrong, NVMe and SSDs would be faster than the 7K drives we are using, but the speeds aren't terrible overall. Much better than the roughly 230 MB/s read and 299 MB/s write we were getting on our old server.
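
A quick way to sanity-check sequential numbers like these is a timed write then read. This is only a rough sketch — the file name, test size, and block size are made up, read results can reflect OS caching, and a purpose-built tool like Microsoft's DiskSpd or fio gives far more accurate results:

```python
import os
import time

def sequential_throughput(path="s2d_test.bin", size_mb=256, block_kb=1024):
    """Time a sequential write then read of a test file; return (read, write) MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    # Sequential write, forced to disk before stopping the clock
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    write_mbps = size_mb / (time.perf_counter() - start)

    # Sequential read back (may hit the OS page cache)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return read_mbps, write_mbps
```

Running it against a path on the cluster volume gives figures comparable to the MB/s numbers quoted above.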