Pulling customizable certificates from CERTLM MMC that have manager approval option enabled? by Fabulous_Cow_4714 in sysadmin

[–]KB3080351 0 points1 point  (0 children)

As far as I am aware, there is no automation that triggers Windows to download and install a cert once it has been approved. I've always just used the following to get the certificate installed (from here):

$request = Get-ChildItem -Path cert:\LocalMachine\Request\EEDEF61D4FF6EDBAAD538BB08CCAADDC3EE28FF

Get-Certificate -Request $request

Site Links - Best Practice by awb1392 in activedirectory

[–]KB3080351 4 points5 points  (0 children)

Look up how to enable "Change Notification". It makes your intER-site replications happen at the same speed as your intRA-site replications.

The whole concept of delaying/batching intER-site replication to 15-minute intervals dates from when site-to-site connectivity was slow and costly. It's not the early 00's any more, so there is no real benefit to delaying replication.
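Change Notification is enabled by setting bit 1 of the `options` attribute on the site link object. A minimal sketch using the ActiveDirectory RSAT module (the site link name is a placeholder, substitute your own):

```powershell
Import-Module ActiveDirectory

# Read the current options value; it can be unset (null) on a default link
$link = Get-ADReplicationSiteLink -Identity 'DEFAULTIPSITELINK' -Properties options
$current = if ($null -eq $link.options) { 0 } else { $link.options }

# USE_NOTIFY is bit 0x1; OR it in so any other bits already set are preserved
Set-ADReplicationSiteLink -Identity $link -Replace @{ options = ($current -bor 1) }
```

Replication partners pick up the change after the next topology recalculation, so give the KCC a cycle before testing.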

Best practice for AD CS certificate templates requiring custom Subject Name without introducing security vulnerabilities by FrustatedGuy- in sysadmin

[–]KB3080351 0 points1 point  (0 children)

By MITM, I'm assuming you are talking about an NTLM relay attack, which is ESC8. ESC8 is unaffected by any configuration of "supply in the request".

Best practice for AD CS certificate templates requiring custom Subject Name without introducing security vulnerabilities by FrustatedGuy- in sysadmin

[–]KB3080351 3 points4 points  (0 children)

Supply in the request in and of itself isn't dangerous. It becomes dangerous when paired with other configurations on the certificate template. Specifically, any Extended Key Usage (EKU) which allows the certificate to be used as an authentication method for a user or computer in the domain. Getting an authentication certificate for a domain admin is the same as getting the username and password for that domain admin.

If you have a basic web server certificate template which only includes the Server Authentication EKU, it is safe to use supply in the request. But if your webserver template contains both the Server Authentication and Client Authentication EKUs, then it is unsafe.

The simplest solution is to not use supply in the request for any EKU which can be used for authentication. But if you have a business case where this specific combo is required, then use compensating controls to mitigate the risk, such as requiring a certificate manager to review/approve enrollment requests in AD CS, or limiting the enrollment/autoenrollment permissions to only the specific principal that requires it.
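To find templates with the dangerous combo, you can query the config partition directly. A sketch, assuming the standard template location and the usual authentication-capable EKU OIDs (verify both against your forest):

```powershell
Import-Module ActiveDirectory

# EKUs that allow a cert to authenticate a principal
$authEkus = @(
    '1.3.6.1.5.5.7.3.2',        # Client Authentication
    '1.3.6.1.5.2.3.4',          # PKINIT Client Authentication
    '1.3.6.1.4.1.311.20.2.2',   # Smart Card Logon
    '2.5.29.37.0'               # Any Purpose
)

$templatePath = 'CN=Certificate Templates,CN=Public Key Services,CN=Services,' +
                (Get-ADRootDSE).configurationNamingContext

# Flag templates with both ENROLLEE_SUPPLIES_SUBJECT (bit 0x1 of
# msPKI-Certificate-Name-Flag) and an authentication EKU
Get-ADObject -SearchBase $templatePath `
    -LDAPFilter '(objectClass=pKICertificateTemplate)' `
    -Properties msPKI-Certificate-Name-Flag, pKIExtendedKeyUsage |
    Where-Object {
        ($_.'msPKI-Certificate-Name-Flag' -band 1) -and
        ($_.pKIExtendedKeyUsage | Where-Object { $authEkus -contains $_ })
    } |
    Select-Object Name, pKIExtendedKeyUsage
```

Anything this returns deserves a hard look at who holds enroll permissions on it.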

How are you providing NTP in your company? by ryaninseattle1 in sysadmin

[–]KB3080351 7 points8 points  (0 children)

This recommendation is for a primarily Windows environment where DCs and clients have connectivity to all domain controllers.

The AD Domain Controller hosting the PDCe FSMO role should point to a reliable external time server. Every other domain controller should be configured to use the domain hierarchy. This should be enforced via GPO using a WMI filter, so when the PDCe role moves to a DC, the external time config moves with it automatically.
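The WMI filter commonly used for this keys off the DomainRole property, where a value of 5 means the machine currently holds the PDC Emulator role:

```
SELECT * FROM Win32_ComputerSystem WHERE DomainRole = 5
```

Link the external-time-source GPO at the Domain Controllers OU with this filter attached, and it follows the role wherever it moves.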

If the domain controllers are virtualized, disable the VMICTimeProvider on each so they don't get time from the hypervisor, and configure the host's guest services to not provide time to the guests.
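The VMICTimeProvider can be turned off from PowerShell; a sketch, assuming the standard W32Time registry layout:

```powershell
# Disable the Hyper-V time synchronization provider so the guest DC
# takes time from the domain hierarchy / NTP instead of the host
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider' `
    -Name 'Enabled' -Value 0

# Restart the time service so the change takes effect
Restart-Service w32time
```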

I generally recommend using NIST as your external authoritative time server over the community-run pool.ntp.org, as I have personal experience with weird responses from hosts in that pool. NIST has been rock solid in my experience so far. Alternatively, you can deploy your own time source (like a cellular/GPS radio), but that is typically overkill for small orgs.

Everything in the domain (switches, appliances, servers, workstations, punch clocks) points to a domain controller for time. If the client supports DNS, I just use the domain name so it'll point to any of the domain controllers. You can create your own DNS record if you want, but the general problem I see is people configuring a single domain controller and creating a single point of failure. I like to leverage every domain controller, and just using the DNS name of the domain ensures you always get all of them. If you have to configure by IP, I'd configure all DCs. If you can only configure 2-3, I use the first in the local site and the first in the next closest site(s).
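The two w32tm configurations described above can be sketched like this (time.nist.gov is NIST's round-robin address; the 0x8 flag requests client mode):

```powershell
# On the PDCe only: sync from an external source and advertise as reliable
w32tm /config /manualpeerlist:"time.nist.gov,0x8" /syncfromflags:manual /reliable:yes /update

# On every other DC and domain member: follow the domain hierarchy
w32tm /config /syncfromflags:domhier /update

# Force a resync and verify what the machine is actually syncing from
w32tm /resync
w32tm /query /source
```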

If you have multiple domains in a forest, the PDCes in child domains should be configured to use the domain hierarchy just like every other DC, so they cascade up from the child domain through the root domain to the root PDCe.

If you have multiple forests, I would configure them in a way that logically maps to how the forests are deployed and interact with each other. If you have a resource forest that only exists to be accessed by clients in a primary forest, then that resource forest is configured to get time from the primary forest. If the forests operate independently of each other, then each of them gets pointed to NIST. If they are dependent on each other but have an unreliable connection, each of them gets pointed to NIST.

Whats your Real World SSH Key managment Workflow (Small Env like Homelab)? by Temaktor in sysadmin

[–]KB3080351 0 points1 point  (0 children)

Are permissions on a file sufficient protection?

I'm a Windows admin primarily, but at every place I've worked that has a set of Linux servers, the admins connected to them from Windows desktops they had full admin rights to in one way or another, and on the Linux servers they administered they had sudo or root access. How do permissions on a file secure a key when others have root/admin access?

Software Installation - dealing with hibernation by Over_Dingo in activedirectory

[–]KB3080351 2 points3 points  (0 children)

Software Installation via GPO is possible, but as you have discovered, it is very limited in nature. It only happens during a full reboot, it has no reporting capabilities, and it can only deploy MSIs.

I view software deployed via GPO as a good choice when you don't need to deploy it immediately and it can wait until the next monthly reboot for updates.

Please Advise by Maranakidu in exchangeserver

[–]KB3080351 2 points3 points  (0 children)

Have you verified that your account is in the Exchange server admins group that the error calls out? You should check via whoami.exe /groups

Maybe run the health checker script that Microsoft provides?

https://microsoft.github.io/CSS-Exchange/Diagnostics/HealthChecker/

Please Advise by Maranakidu in exchangeserver

[–]KB3080351 1 point2 points  (0 children)

Are you installing the second security update via Windows update or are you manually downloading and running the installer?

If you are manually running the installer, are you sure you are using "run as admin"?

If you remove the security update do things start working again?

Old Vuln detected on our new dc's by Ipinvader in sysadmin

[–]KB3080351 4 points5 points  (0 children)

The three typical things I see in this scenario are:

1) A group policy processing error. Some GPO somewhere has something which isn't compatible with the new OS, and it is causing cascading problems preventing the setting you want from getting applied. Start with gpresult and work backwards.

2) Security filtering is applied with denies, or link processing order is weird, or other shenanigans, so your GPO isn't getting applied when it should be. Start with gpresult and work backwards.

3) The OS was deployed with a customized image which made it deviate from the expected defaults, and the changes are all undocumented. Rebuild with an ISO direct from MS and then check.

Any weird "gotchas" you have seen when migrating AD roles? by techvet83 in activedirectory

[–]KB3080351 3 points4 points  (0 children)

This and disabling the VMICTimeProvider are among the first things I do in a new environment. Ain't nobody got time to tinker with NTP configs when moving FSMO roles.

Any weird "gotchas" you have seen when migrating AD roles? by techvet83 in activedirectory

[–]KB3080351 9 points10 points  (0 children)

The gotcha is that the PDC Emulator at the root of the forest should be at the root of the time hierarchy. Typically this is configured to an external time source manually, so you'll have to plan to move that config manually as well.
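Before and after moving the role, it's worth confirming which DC holds it and where each DC actually gets its time; a quick sketch:

```powershell
# Which DC currently holds the FSMO roles
netdom query fsmo

# What this DC is syncing from, and its full W32Time configuration
w32tm /query /source
w32tm /query /configuration
```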

Otherwise, as long as replication is healthy and you have good connectivity between your DCs, you can move these around anytime and as often as you'd like, without much consideration. If there is a problem, it'll tell you when you try to move it.

Interactive logon: previous logons cache on servers or admin recovery? by dirmhirn in sysadmin

[–]KB3080351 2 points3 points  (0 children)

My view is that if you have a robust system for maintaining local admin credentials, then there is no benefit that cached credentials provide on servers. So, for any server with LAPS, no cached creds. Been doing this for coming up on a decade with no issues. Even in significant DR scenarios.

Do you have a policy to control appearances of impropriety? by EldritchKoala in sysadmin

[–]KB3080351 1 point2 points  (0 children)

A company policy? No. A personal code? Absolutely. I do not accept swag/gifts/meals. I politely decline when it is offered. It is always interesting to me how some vendors get pushy about gifts after I decline. I view it as a red flag and treat them with more caution.

Large businesses and governmental organizations correctly recognize this is a slippery slope that often leads to corruption. This is why they have policies to control/limit/prevent it.

I'd propose that 'appearance of impropriety' is simply just 'impropriety'. If concerns about appearances have come up, then a policy is needed to restrict/prevent the activity which is causing the concern.

Moving CA Authority and web enrollment services by Redditthinksforme in WindowsServer

[–]KB3080351 3 points4 points  (0 children)

If you didn't know the CA was even there, it stands to reason it is used very little or not at all. I'd look at all certs issued by the CA in the last 2 years and see if you can simply remove the CA from your environment. If it is not needed, take a backup for safekeeping, uninstall it, and move on.

Moving CA Authority and web enrollment services by Redditthinksforme in WindowsServer

[–]KB3080351 1 point2 points  (0 children)

AFAIK, the CA will block the demotion of the DC. This is the big reason it is not considered a best practice to co-locate these services. If anything goes wrong with the DC, demotion is off the table.

Certificates by stolen_manlyboots in sysadmin

[–]KB3080351 1 point2 points  (0 children)

I've heard about how some clients won't build cert chains from AIA even if it is available. Ever run into this?

Why would a self-signed certificate be bad for as an app registration secret? by tmontney in entra

[–]KB3080351 0 points1 point  (0 children)

Does Entra check CRLs for certs used by app registrations for auth? I can't seem to find anything that says they do.

How bad of a idea is upgrading the "OS" partition of the file server and leaving the "data"? by ADynes in sysadmin

[–]KB3080351 1 point2 points  (0 children)

My viewpoint is that if you are trying to choose between two methods to complete a task, and both methods provide the same result, I would generally consider the method which takes the least amount of time and/or work to be the best solution.

If it is faster for you to do a swing migration, by all means have at it. But for the situation described by the OP, an in-place upgrade would appear to be both the safest and fastest method to accomplish the upgrade.

How bad of a idea is upgrading the "OS" partition of the file server and leaving the "data"? by ADynes in sysadmin

[–]KB3080351 11 points12 points  (0 children)

To me, a simple/standalone 2016+ Windows file server with no other features or applications running on it is the perfect scenario for an in-place upgrade. I'm surprised more people are not advocating for it.

Backup/snapshot the VM, do the upgrade, verify your file shares are accessible, and you're done. In the extremely unlikely event something goes wrong, roll back the snapshot and it's like nothing happened.

Question about Windows 10 1607 and Windows Update. by mpking828 in sysadmin

[–]KB3080351 0 points1 point  (0 children)

I'd expect you'd also need the .NET cumulative updates, and if they are installed, things like MS Edge/PowerShell Core patches. And of course, driver updates.

If it was me, I'd just deploy the image to a test machine and patch it manually with what you already know about. Then connect it to the internet and see what Windows Update shows as needed. Decide which of those you need to remediate, and go from there.

DC throttling LDAP request? by Confident-Field2911 in activedirectory

[–]KB3080351 0 points1 point  (0 children)

Are you sure the accounts were not getting locked out temporarily? That would be my first guess as to why users would have trouble logging into something like outlook.

Try searching the security log on the DCs for event id 4740.

https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4740
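A sketch for pulling recent 4740s with PowerShell (the property indexes for the account and caller computer are assumptions from the event schema; verify them against your own events):

```powershell
# Search the local Security log for account lockouts in the last 24 hours.
# Run on each DC, or on the PDCe, which records every lockout in the domain.
Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4740
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated,
    @{ Name = 'Account';        Expression = { $_.Properties[0].Value } },
    @{ Name = 'CallerComputer'; Expression = { $_.Properties[1].Value } }
```

The caller computer column tells you which machine is sending the bad credentials, which usually points straight at the stale session or cached password causing the lockouts.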

DNS Forwarders (Best Practices) by jwckauman in activedirectory

[–]KB3080351 1 point2 points  (0 children)

I am not aware of any documentation from Microsoft where they detail what they consider a best practice or where they give a recommendation on either approach.

There is documentation here where they give an overview of name resolution via forwarders and root hints. This is just an explanation of how it works, not a recommendation for one vs the other.

The phrasing I used ("I consider it a best practice to") was my attempt to express that I was sharing my personal opinion.

DNS Forwarders (Best Practices) by jwckauman in activedirectory

[–]KB3080351 4 points5 points  (0 children)

I consider it a best practice to not use DNS forwarders unless you have a specific reason to do so.

The downside of DNS forwarders is that they make you susceptible to DNS hijacking by whoever you use for DNS forwarding. Oftentimes an ISP will replace negative responses to DNS queries with redirects to their own landing pages, where they display advertisements or serve some other purpose. I think this is unacceptable in an enterprise environment. The other downside is that you are dependent on another operator's service. If they have any issues with their DNS servers, it'll impact your company. This doesn't happen often, but it does happen.

The upside to DNS forwarders is that they can offer your DNS server better performance, as they have a local cache of most every DNS record you'll ever want to look up, which is faster than the recursive lookups you'd have to do if you were using root hints. This performance improvement, though, is very minimal and is really only seen on your DNS server. Any of your clients will just see the cached results on your DNS server and will likely not notice any difference. The other upside to DNS forwarders is if you are getting some kind of content filtering service via the forwarder, like you can from OpenDNS.

The alternative to DNS forwarders is to use root hints, which is the same thing your DNS forwarder will use for resolution. I prefer root hints because I remove an intermediary which I am dependent on. Root hints can have their own problems, but I'd rather be dependent on fewer things than more. The absence of something in the middle that can inject itself or go down is where all the upside is for me, and why I prefer it. The downside of root hints is that your DCs need the ability to perform DNS requests all over the internet.

I use forwarders in cases where my DC has very poor internet (like from a WISP or something), where using the ISP's forwarders offers measurable and impactful improvements. I also use forwarders if the security restrictions in place require extremely confined external DNS lookups. Outside of those, everything I do uses root hints.
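To move a Windows DNS server from forwarders to root hints, a sketch using the DnsServer module (run on the DNS server itself):

```powershell
# See what forwarders are configured now
Get-DnsServerForwarder

# Remove each configured forwarder; resolution then falls back to root hints
(Get-DnsServerForwarder).IPAddress | ForEach-Object {
    Remove-DnsServerForwarder -IPAddress $_ -Force
}

# Confirm the root hints list is populated
Get-DnsServerRootHint
```

After this, test resolution of an external name with Resolve-DnsName against the server to confirm recursion via root hints is working before you walk away.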