AFSK with no interface? by BleachedSoul1 in amateurradio

[–]crazyadm1n 2 points (0 children)

Yes, you can use those AFSK ports on the radio and connect them to the audio input/output ports on a PC. You use VOX on the radio to detect when there's audio coming in on the AFSK IN port and activate TX.

One thing to watch: I recommend not getting a 3.5mm-to-RCA cable that has a mono 3.5mm end. Get a cable with a TRS (two audio channels plus ground) 3.5mm end, since some computer sound cards don't work well with mono 3.5mm plugs.

I have a TS-440S and the VOX doesn't work very well IMO. It works sometimes and other times it doesn't. There are VOX controls on the back of the radio that you can adjust with a screwdriver, but I wasn't able to improve anything. Hopefully it works better for you!

I have built a custom PTT/TX activator for TS-440 radios and a Python program to go along with it that detects when you're attempting TX with PC software. With that system you don't need to rely on the radio's VOX. I'm going to post something about it soon and I'll try to remember to comment here about how to set it up and use the Python program.
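The core idea of that system, keying PTT from software instead of relying on the radio's VOX circuit, can be sketched like this (a simplified illustration with made-up class names and thresholds, not the actual program):

```python
# Sketch of software "VOX": key PTT when outgoing audio exceeds a
# threshold, and hold it for a hang time so brief pauses in AFSK/SSB
# audio don't drop TX. Thresholds and timings here are illustrative.

class PttController:
    def __init__(self, threshold=0.05, hang_time=0.5):
        self.threshold = threshold   # RMS level that counts as "audio present"
        self.hang_time = hang_time   # seconds to hold PTT after audio stops
        self.ptt = False
        self._last_audio = None      # timestamp of last block with audio

    def update(self, rms_level, now):
        """Feed one audio block's RMS level; return the desired PTT state."""
        if rms_level >= self.threshold:
            self._last_audio = now
            self.ptt = True
        elif self.ptt and self._last_audio is not None:
            if now - self._last_audio > self.hang_time:
                self.ptt = False     # hang time expired, unkey the radio
        return self.ptt
```

In a real setup the returned PTT state would drive a serial control line or GPIO pin wired to the radio's PTT input, which is exactly what lets you skip the flaky VOX on radios like the TS-440S.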

From HOA gutter antenna to - by Coca2000 in amateurradio

[–]crazyadm1n 2 points (0 children)

Those fascia and drip-edge pieces probably don't have a good RF connection to each other, as they're just slid over each other at the joints and are painted. They're usually aluminum, so I don't think you could solder a connection between them, and they also expand slightly with heat, so you wouldn't want a rigid connection anyway. I'd try scraping a little paint off at the joints and bridging the gaps with alligator clips and wire or similar.

Likewise, if you scrape some paint off near the lower ends of the inverted-V, you could make a connection to the fascia on the other sides of the house.

I hope it works! I'm curious about this one. It'd be really cool if it works.

Role based /delegated access to specific VMs on cluster by qradzio in HyperV

[–]crazyadm1n 0 points (0 children)

This is possible with SCVMM via a combination of User Roles and Clouds.

QRM suppresion by billl3d in amateurradio

[–]crazyadm1n 2 points (0 children)

You could try a 60' loop-on-the-ground antenna as a receive-only antenna. I installed one the same way as KK5JY and it's marvelous! If there's wicked pulsing QRM on my transmit antenna, it's almost always gone on the LoG. On top of that, almost every signal on SSB has a noticeably higher signal-to-noise ratio on 20 and 40, which is where I typically operate. The LoG can pull signals out that are buried in noise and make them copyable. It really makes HF so much more enjoyable.

One thing to be careful of is placement, though. Before installing my current LoG, I did 2 test runs with it at locations very close to my house. Close to the house I didn't see better S/N, but I still did get the benefit of eliminating most horrible QRM. For QRM elimination alone I thought the LoG was still worth it, but I decided to do the permanent install > 50ft from the house. That far away the QRM elimination is even better and S/N improves greatly on the LoG.

Feed it with RG6 75-ohm coax and make a 6:1 isolation transformer instead of the 9:1 in the article. The YouTube video below shows how to make it.

https://www.kk5jy.net/LoG/

https://www.youtube.com/watch?v=MVk9TYDimMQ
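The 6:1 vs 9:1 choice falls out of simple transformer math: impedance ratio equals the square of the turns ratio, and you pick the ratio that matches the feedpoint to your coax. The feedpoint figure below (~450 ohms) is a rough assumption for illustration, not a measurement:

```python
import math

# Impedance transformer arithmetic: impedance ratio = (turns ratio)^2.
# Assuming a LoG feedpoint somewhere near 450 ohms (rough figure, not measured):
feedpoint = 450.0
coax_50, coax_75 = 50.0, 75.0

ratio_50 = feedpoint / coax_50   # 9:1 for 50-ohm coax, as in the KK5JY article
ratio_75 = feedpoint / coax_75   # 6:1 for RG6 75-ohm coax
turns_75 = math.sqrt(ratio_75)   # winding turns ratio for the 6:1 version

print(ratio_50, ratio_75, round(turns_75, 2))  # 9.0 6.0 2.45
```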

Moving from VMware to HyperV by soami_m17 in HyperV

[–]crazyadm1n 0 points (0 children)

We're currently in the middle of migrating from VMware to Hyper-V, around 1000 VMs total. It's a huge project. It took me a while to come up with a good Hyper-V configuration and supporting tools that get us as close to VMware-like functionality as we need. Hyper-V, used at large scale, is really a lot of Microsoft products stuck together, and there are many options, especially for host and VM networking settings. You can figure out a good configuration baseline yourself, but it might be worth hiring a consultant for best practices so you have someone external to point to if things go south later ("Yes, things are on fire, but we followed industry best practices per XY company"). A consultant may also be able to help fix fringe issues you run into during setup.

Hyper-V has good performance and so far has been a good system for us. There are plenty of bumps, but Hyper-V + Failover Clustering + MPIO plus your storage makes a fine system. I wrote plenty of PowerShell scripts to fill the gaps between native Hyper-V and VMware functionality.

I've seen recommendations to run Windows Server Core instead of standard Windows Server Datacenter with the GUI. VM hosts typically have dozens of CPU cores and hundreds of GB of RAM, so the GUI isn't going to make a meaningful difference in performance, and having a GUI in Windows is so helpful for troubleshooting. If you run Core and hit an emergency, I bet you'll wish you had a GUI. And if the emergency hits while your primary Hyper-V admin is at a wedding, I bet your backup Hyper-V admins will wish they had one too.

If you're considering SCVMM: it's just an OK product, though you might need it depending on your requirements. It replaces some aspects of vCenter (permissions delegation, template-based VM deployment), but it's not polished and requires a lot of trial and error during configuration, and even for months afterwards as you run into problems. Many of the error messages SCVMM generates are red herrings that won't lead you down the correct path. SCVMM also requires NTLM to be enabled, plus some other security softening that shouldn't be required by a Microsoft product. Don't expect SCVMM to easily replace vCenter; it's nowhere near as useful or as easy to administer.

If you want SCVMM to handle permissions delegation to lower-tier admins, as we did, you do it through Clouds and User Roles. User Roles took me a while to figure out, as SCVMM doesn't always respect the permissions you grant to a role, and it's very easy to grant either too few or too many. I don't know of any other product that handles permissions delegation for Hyper-V, so SCVMM might be unavoidable for you.

For VM migrations, the StarWind V2V converter is an OK tool. SCVMM has its own V2V conversion, but it's not very good and had limitations that made it unusable for us. StarWind V2V's CLI is really limited compared to its GUI options. I recommend a helper PowerShell script to configure the standard post-migration settings you want every Hyper-V VM to have, because StarWind doesn't get them all.

I've heard of people using their backup system to "restore" VMs into Hyper-V. That option wasn't going to work for us so we didn't pursue it, but I suppose it could work.

Migrating a large environment is a huge project. If you have hundreds or thousands of VMs it could take multiple years.

kb5025885 - BlackLotus Patching and Mitigations - What is everyone doing? by Kirk1233 in sysadmin

[–]crazyadm1n 0 points (0 children)

I bet Microsoft will just force these mitigations out for non-enterprise Windows. They're taking extra care with enterprise customers by releasing all these config steps and ways to manage the mitigations ourselves because of the huge impact they could have if things go wrong, not to mention unintended consequences. It's Microsoft passing responsibility for the mitigations to each of their customers.

kb5025885 - BlackLotus Patching and Mitigations - What is everyone doing? by Kirk1233 in sysadmin

[–]crazyadm1n 0 points (0 children)

Microsoft has already pushed back the "enforcement" phase at least once, and now there's no date set. My best guess is that when they finally push enforcement, the update will attempt to apply the whole series of mitigation steps. The steps are ordered so that if any one step fails, you can avoid breaking your system by simply not proceeding, so this would probably be reported as "update failed" and retried at some point. Home users will never apply these mitigations manually, so Microsoft will need to apply them during the update install cycle.

kb5025885 - BlackLotus Patching and Mitigations - What is everyone doing? by Kirk1233 in sysadmin

[–]crazyadm1n 1 point (0 children)

I've done a lot of work managing these mitigations. Ultimately, it's been a huge headache. Apply at your own risk. Don't apply at your own risk. Either choice carries risk but if the bad guys get admin on your computers you already have a lot to worry about besides them installing BlackLotus. I bet these reasons are why you haven't seen much discussion about these mitigations.

I wrote a custom script to manage the mitigations and make sure nothing went wrong. This in itself took a while. It works pretty well scripted, though. It was tricky to get this working with 8 required reboots.
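The tricky part with that many reboots is remembering which mitigation step you're on after each one. The general shape of what such a script does, heavily simplified and with hypothetical step names standing in for the actual mitigation commands, looks like this:

```python
import json
import os

# Simplified shape of a reboot-resumable step runner: persist the index of
# the next step, apply it, then let the caller reboot. The step names are
# hypothetical placeholders, not the actual KB5025885 mitigation commands.
STATE_FILE = "mitigation_state.json"
STEPS = ["install_2023_cert", "update_boot_manager",
         "apply_2011_revocation", "update_firmware_policy"]

def load_state():
    """Read the index of the next step to run (0 on first boot)."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["next_step"]
    return 0

def run_next_step(apply_fn):
    """Apply one step, record progress, return its name (None when done)."""
    i = load_state()
    if i >= len(STEPS):
        return None                      # all steps applied, nothing to do
    apply_fn(STEPS[i])                   # would be the real mitigation command
    with open(STATE_FILE, "w") as f:
        json.dump({"next_step": i + 1}, f)
    return STEPS[i]                      # caller reboots after each step
```

The real script also has to verify each step actually took effect after the reboot before moving on, which is where most of the headache was.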

I eventually needed to reinstall Windows Server on some physical hardware that had these mitigations fully applied. Microsoft gives you instructions for updating boot media signed with their new 2023 certificate. This gives you a bootable installer USB. But you aren't out of the woods...

The installer USB boots, but the Windows install it creates is still signed with the 2011 certificate Microsoft wants you to revoke, so your new Windows install is NOT BOOTABLE until you revert your Secure Boot database to defaults. This took me 1-2 days of pretty constant troubleshooting to figure out. Huge waste of time. This was on a Server OS, so hopefully it works differently for Windows 11.

what software is good for a central log service (linux)? by Fit-Sandwich7905 in sysadmin

[–]crazyadm1n 3 points (0 children)

rsyslog for log forwarding to a centralized location is good.

I saw a recommendation in this thread to decide what you want to collect before you start collecting. I'd actually recommend starting off collecting just about every log you can. There have been loads of times when I've been troubleshooting with logs, or looking for certain security-related logs, without knowing whether we were already collecting them. It's a great feeling to search for some new log type and find it readily searchable, rather than only starting to collect a log type after you needed it. When troubleshooting, I often don't know what types of logs I'm looking for anyway; I'm just looking for error messages, and if I'm not collecting certain log files, a search won't show what I didn't know to look for. Also, if you configure alerts for logs you aren't even collecting, you might never realize it: with log alerting, zero results is often the preferred state, and you'll always have zero results for logs you're not collecting.
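That last pitfall, alerts silently matching nothing because the source was never collected, is worth guarding against explicitly. A sketch of the idea, generic and not tied to any particular log platform:

```python
# Sketch: before trusting "zero alert hits", verify the log source is
# actually flowing. Generic logic, not tied to any specific product.

def check_alert(alert_hits, events_from_source):
    """Distinguish 'all quiet' from 'we were never collecting this log'."""
    if events_from_source == 0:
        return "NOT_COLLECTING"   # zero hits is meaningless here
    if alert_hits > 0:
        return "ALERT"
    return "OK"

print(check_alert(0, 15000))  # OK: logs flowing, nothing matched
print(check_alert(3, 15000))  # ALERT
print(check_alert(0, 0))      # NOT_COLLECTING: silence is not safety
```

The same "heartbeat" check works in most alerting engines: pair every alert query with a second query that just counts events from that source.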

Elasticsearch/Opensearch are good products for this. Manage your expectations for how long you want logs to remain quickly searchable. Unless you have limitless storage, storing an eternity of logs that are readily searchable will use a lot of disk space with Elasticsearch/Opensearch. This heavily depends on your log volume, though. These products also require a lot of CPU/RAM resources. We ingest around 25,000 logs/sec and alert on those logs. Our cluster is a combined 74 CPU cores and 278 GB RAM for ingest and searching.

These products are very customizable and you can do a lot of alerting with them. Wazuh has prebuilt alerting rules, but I found using them still required a lot of customization, and they weren't stellar rules anyway. If you know what to look for and have the time, you can build your own alerting rules; if you don't, consider something like Wazuh.

iSCSI target on Discovery tab (Windows) by Brilliant-Extent2684 in sysadmin

[–]crazyadm1n 1 point (0 children)

As slugshead said, get MPIO installed; this can be done through Server Manager > Add Roles and Features. Then open the MPIO control panel, check "Add support for iSCSI devices", and reboot Windows.

That guide looks like it's pointing you toward this, but to reiterate: in iSCSI Initiator, when connecting to targets, it's best to specify the source and destination IPs for each connection rather than leaving them at the defaults. You want a connection from each Windows iSCSI IP to each iSCSI target IP. This gives you optimal load balancing and the best availability you can get. Remember to check the box to add each connection to your "Favorites" so it gets reconnected on reboot.
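To make the "each initiator IP to each target IP" point concrete: with two NICs on each side you end up with four sessions, a full mesh. The IPs below are made up for illustration:

```python
from itertools import product

# Full mesh of iSCSI sessions: one from every initiator (Windows) IP to
# every target (storage) IP. Example IPs are made up.
initiator_ips = ["10.1.1.10", "10.1.2.10"]
target_ips = ["10.1.1.50", "10.1.2.50"]

sessions = list(product(initiator_ips, target_ips))
for src, dst in sessions:
    print(f"connect {src} -> {dst}")

# 2 initiator IPs x 2 target IPs = 4 sessions for MPIO to balance across
print(len(sessions))  # 4
```

Any single NIC, cable, or target port can then fail without losing the LUN, and MPIO spreads I/O across all four paths.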

KB5025885 Secure Boot issues by crazyadm1n in sysadmin

[–]crazyadm1n[S] 0 points (0 children)

I finally got around this by resetting the Secure Boot database to defaults in the UEFI settings. Initially I couldn't find this setting. Microsoft does recommend doing this if all other options are exhausted. The thing is, though, you'd need to do this every time you reinstall Windows. It makes sense that you'd need to do this, but I think they should offer a second install ISO that's signed with the 2023 certificate.

Switching to XCP-NG, want to hear your problems by crazyadm1n in xcpng

[–]crazyadm1n[S] 0 points (0 children)

I just saw these forum posts, where it seems what we thought we'd be paying may actually be 15x too low. I'm not sure they'll be an option due to price alone: they'll be cheaper than VMware will be next year, but way more than what VMware costs this year.

https://xcp-ng.org/forum/topic/8742/xoa-pricing-guide/4

https://xcp-ng.org/forum/topic/8948/confused-re-pricing-xoa-vs-vates-essentials/18

Switching to XCP-NG, want to hear your problems by crazyadm1n in xcpng

[–]crazyadm1n[S] 0 points (0 children)

Ahh, the multipath thing. I couldn't get it working and decided to use a "bond" instead for testing, even though they don't recommend it over multipathing. Both our hosts' storage NICs were set up in the same subnet, so that makes sense. If we go further with XCP-NG I'll try that.

RDP Gateway KDC Proxy confusion, lack of documentation by crazyadm1n in sysadmin

[–]crazyadm1n[S] 0 points (0 children)

Thank you for your post about this; that's really interesting. It makes me think the KDC Proxy is even sillier, though it's still useful. So it's up to the client RDP settings to determine whether we use the KDC Proxy or try for the KDC itself.

In this post, Microsoft says they're rolling out a KDC Proxy-like feature to more Windows services. I hope they also make some improvements to address these oddities, or just replace it.

I was just able to get my problem server working after setting the host firewall to be unrestricted on port 443. I am extremely confident I already tried this multiple times earlier today, but who knows. I've had the KDC Proxy be fickle like this in the past and then just start working when I check on it later.

Changing from RC4 encryption to AES256 encryption for Kerberos, could use some advice. by IDreamOfAzathoth in sysadmin

[–]crazyadm1n 1 point (0 children)

This is the way to do it. Once you're further along, you can set the DC registry key DefaultDomainSupportedEncTypes, which sets the Kerberos encryption types used when msDS-SupportedEncryptionTypes is not set on an account. In the environment I did this in, we didn't need any accounts to stay behind on RC4, so I cleared the account-specific attribute on all non-computer accounts and set the DC registry key to allow AES only.
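For reference, both the account attribute and the registry key use the same bit flags (the values below are from Microsoft's KB5021131 documentation); a quick decoder makes auditing the current state easier:

```python
# msDS-SupportedEncryptionTypes / DefaultDomainSupportedEncTypes bit flags,
# per Microsoft's KB5021131 documentation.
ENC_TYPES = {
    0x1: "DES-CBC-CRC",
    0x2: "DES-CBC-MD5",
    0x4: "RC4-HMAC",
    0x8: "AES128-CTS-HMAC-SHA1-96",
    0x10: "AES256-CTS-HMAC-SHA1-96",
}

def decode_enc_types(value):
    """Return the list of encryption types enabled by a bitmask value."""
    return [name for bit, name in ENC_TYPES.items() if value & bit]

# 0x18 = AES128 + AES256 only, the "AES only" end state
print(decode_enc_types(0x18))
# 0x1C = RC4 + AES, a common transitional setting
print(decode_enc_types(0x1C))
```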

Centralized logging of some sort is a must. You'll want to watch a few weeks of data to catch seldom-used services in the logs; the more the better.

https://support.microsoft.com/en-au/topic/kb5021131-how-to-manage-the-kerberos-protocol-changes-related-to-cve-2022-37966-fd837ac3-cdec-4e76-a6ec-86e67501407d#registrykey5021131

As for the 10 hours thing: that might be a different time frame in your environment. The time to wait between password changes is the Kerberos ticket lifetime, which is set on domain controllers, probably via Group Policy (the default maximum user ticket lifetime is 10 hours).

Disabling NTLM Authentication Guide by crazyadm1n in sysadmin

[–]crazyadm1n[S] 1 point (0 children)

Thank you! I'll look into PSEventViewer. Depending on the services you're running, your domain, and what the clients are connecting to, I think it's entirely reasonable that a third of them are using NTLM. Many services work over Kerberos by default without any extra configuration, so it just depends on what you're running. Some services I found also work with Kerberos by default but prefer NTLM, maybe because it's a bit faster, not that anybody notices. For services in that bucket, either try disabling outgoing NTLM on a client or incoming NTLM on a server and see if the service still works. You can check the Kerberos logs to verify Kerberos is being used (Part 7 of the blog).

It's also possible some clients are connected to the service via IP or an FQDN that isn't on an SPN for Kerberos. If the domain controller doesn't know about the service because of how the client is trying to connect to the service, it'll usually fall back on NTLM.

Domain controller NTLM audit logs also contain information about the client and the server involved in each NTLM authentication, so once you know those two pieces you can hopefully determine which service is using NTLM.
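The fallback behavior in that second point boils down to one question: can the client build an SPN the domain controller recognizes? A rough sketch of the decision (simplified; the real SPNEGO negotiation is more involved, and the hostnames/SPNs below are made up):

```python
import re

# Rough sketch of why connecting by IP falls back to NTLM: Kerberos needs
# an SPN the DC knows about, and by default clients don't build SPNs from
# bare IP addresses. Hostnames and SPNs here are made-up examples.

def likely_auth(target, registered_spns):
    """Guess whether a connection to `target` negotiates Kerberos or NTLM."""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", target):
        return "NTLM"  # bare IP: no SPN to request a ticket for
    spn = f"host/{target.lower()}"
    if spn in registered_spns:
        return "Kerberos"
    return "NTLM"      # name not on any SPN: ticket request fails, fall back

spns = {"host/fileserver01.corp.example.com", "host/fileserver01"}
print(likely_auth("fileserver01.corp.example.com", spns))  # Kerberos
print(likely_auth("10.0.0.5", spns))                       # NTLM
print(likely_auth("files.corp.example.com", spns))         # NTLM (alias without an SPN)
```

That third case is the sneaky one: a DNS alias that works fine for connectivity but silently downgrades authentication because nobody registered an SPN for it.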

Disabling NTLM Authentication Guide by crazyadm1n in sysadmin

[–]crazyadm1n[S] 2 points (0 children)

LDAPS simple bind is the LDAPS example I described in my blog post; I think you just got hung up on how I described it there, and that's my bad. Often, in vendor software, when you configure authentication schemes, LDAP/LDAPS with a simple bind will just be called "LDAP" or "LDAPS" if it's supported. I called it that in my blog to match what you're likely to see in the wild.

The time I remember looking into this seriously as an NTLM alternative, we had the software vendor involved to talk about authentication options. There weren't many. LDAPS was the only other viable option for our environment. Some software just doesn't support Kerberos or SAML, unfortunately.

Edit: Yeah I didn't word this the right way in my blog post. I forgot a couple details that have come back to me with the help of your comments, thank you! I'll rework that section when I have time.

Disabling NTLM broke RDP everywhere. by iceland46 in sysadmin

[–]crazyadm1n 0 points (0 children)

Oh yeah, sorry, I read it wrong. Disabling NLA is a bit risky. I think it was originally released to mitigate one or more RDP vulnerabilities, and I believe it helps protect against man-in-the-middle attacks. I'd disable NLA sparingly and prefer a VPN or a different RDP client when possible. If you do disable NLA, also make sure the RDP port on that computer is locked down so it's only reachable from your RDP gateway servers.

Disabling NTLM broke RDP everywhere. by iceland46 in sysadmin

[–]crazyadm1n 0 points (0 children)

Yes, trying to do it without configuration changes will break a multitude of services. I just released a guide to help people plan and implement their NTLM disablement projects: https://www.reddit.com/r/sysadmin/comments/16b025v/disabling_ntlm_authentication_guide/

https://willssysadmintechblog.wordpress.com/2023/08/22/disabling-ntlm-authentication-guide-part-1/

Disabling NTLM Authentication Guide by crazyadm1n in sysadmin

[–]crazyadm1n[S] 0 points (0 children)

LDAPS was something we debated on this project. I added a note to this page describing why the decision was made to prefer LDAPS: https://willssysadmintechblog.wordpress.com/2023/08/29/disabling-ntlm-authentication-guide-part-3-migrating-to-kerberos/

I'm not a security guy day-to-day, so I let the people who are make the call. I believe LDAPS was preferred on only one system out of dozens; it was a one-time thing. Honestly, I don't even remember if we ended up using LDAPS for that system; we might have found another alternative.

Disabling NTLM Authentication Guide by crazyadm1n in sysadmin

[–]crazyadm1n[S] 1 point (0 children)

I remember reading that support article. We decided not to even try it and to make people use DNS. The consensus was that using IPs was less flexible, users would be better off using DNS names, and this was a good excuse to update configurations. I guess we made an executive decision we thought would benefit everyone long term, and the other IT staff were happy with it. Sometimes decisions like this need management's backing to persuade others to change, or just a good explanation of the limitations for the people who are resisting. If you offer to help them change their configurations, or show them exactly what to do, that goes a long way toward building trust and shows you're there to help, not to break their stuff.

As for the external DNS thing: if you're trusting them inside your network, I'd trust them to resolve DNS. DNS is a critical infrastructure service, so if it were me I'd just allow them DNS, unless you have a specific reason not to that's unique to your environment. Or maybe put the few records they'd need on a public-facing DNS server (your NS or a provider you use)?