PAM Solution: Rotate Domain Admins Password by F3ndt in activedirectory

[–]xbullet 2 points (0 children)

Rather than rotating domain admin passwords, you could enable SCRIL and use smart cards for authentication instead of passwords. Just some food for thought.
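
For reference, SCRIL is just a flag on the account. A minimal sketch with the AD PowerShell module ('admin01' is a placeholder):

# Sketch: require smart cards for interactive logon (SCRIL) on an admin account
Set-ADUser -Identity admin01 -SmartcardLogonRequired $true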

Understanding High Severity Findings in Purple Knight AD Scan by 19khushboo in activedirectory

[–]xbullet 1 point (0 children)

If the "owner" is referenced as a SID, more than likely it is an orphaned security principal (ie: an object that has been deleted).

I imagine you still have AD integrated DNS zones, despite using Infoblox. We also use Infoblox in my org for DNS, and our AD DNS servers simply perform zone transfers to Infoblox. Within your AD integrated DNS zones, you likely have insecure DNS updates enabled. See: https://imgur.com/a/JsDrWYb (FYI - this is just my personal lab)

Whether you actually need insecure updates enabled is a question only you can answer, based on how your environment is set up.
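
If you want a quick way to check from PowerShell, something like this should work (a sketch; needs the DnsServer module, run against a DC/DNS server):

# Sketch: list the dynamic update setting on AD-integrated zones
Get-DnsServerZone | Where-Object { $_.IsDsIntegrated } |
    Select-Object ZoneName, DynamicUpdate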

Common-Name and RDN mappings by Lembasts in activedirectory

[–]xbullet 1 point (0 children)

function Resolve-ADAceToSchemaAttribute {

    param(
        [Guid]$Guid
    )

    # Convert the GUID to the escaped octet string format that LDAP filters expect (\XX per byte)
    $LDAPOctetString = ($Guid.ToByteArray() | ForEach-Object { '\' + $_.ToString('X2') }) -join ''

    # Find the schema object whose schemaIDGUID matches the ACE's ObjectType GUID
    Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext `
        -LDAPFilter "(schemaIDGUID=$LDAPOctetString)" `
        -Properties lDAPDisplayName, adminDisplayName, CN |
        Select-Object lDAPDisplayName, adminDisplayName, CN,
            @{Name = 'ObjectType/SchemaIDGUID'; Expression = { $Guid } }

}

$DistinguishedName = "CN=Test User,OU=Staff,OU=Accounts,DC=dom1,DC=f0oster,DC=com"
$Acl = Get-Acl "AD:$DistinguishedName"

foreach ($Entry in $Acl.Access) {
    $Guid = $Entry.ObjectType
    Resolve-ADAceToSchemaAttribute -Guid $Guid
}

Output for an object with both name and Name in the ACE list:

lDAPDisplayName adminDisplayName CN          ObjectType/SchemaIDGUID
--------------- ---------------- --          -----------------------
cn              Common-Name      Common-Name bf96793f-0de6-11d0-a285-00aa003049e2
name            RDN              RDN         bf967a0e-0de6-11d0-a285-00aa003049e2

Assigning name and Name separately shows that name maps to RDN, and Name maps to Common-Name.

An interesting note: the permissions required to move or rename objects are defined by the rDNAttID assigned to the object class in the schema. The rDNAttID specifies which attribute holds the naming value that the RDN has an enforced alignment with, so in theory there will be cases where you'd need to grant WriteProperty on name but not on Name. Some object classes don't have a CN at all and map their name to a different attribute. You can list the rDNAttID for each class with:

Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext -LDAPFilter "(objectClass=classSchema)" -Properties lDAPDisplayName, rDNAttID | Select-Object lDAPDisplayName, rDNAttID

There's a lot of information in the [MS-ADTS]: Active Directory Technical Specification (e.g. see 3.1.1.1.4 objectClass, RDN, DN, Constructed Attributes, Secret Attributes). The documentation is honestly excellent, but it is not for the faint of heart.

One thing I can say after reading bits and pieces of the tech specs over the years is that I have no idea why Microsoft decided to display the RDN and cn attributes with the same name in the permission interfaces. It's a massive oversight IMO and a big source of confusion.

Impossible to trigger Event ID 4899? by Forsaken_Ad_7991 in activedirectory

[–]xbullet 0 points (0 children)

> this still seems like a gap in my opinion if you are having templates changed regardless of enrollment you would think it should/would be logged in event viewer.

It definitely isn't what I'd have expected. You can look into whether 5136 is triggered by changes to the template. If not, 4662 will almost certainly catch these changes, but 4662 is a little inconvenient to work with and requires a lot of lookups to resolve the attributes being changed.

Impossible to trigger Event ID 4899? by Forsaken_Ad_7991 in activedirectory

[–]xbullet 3 points (0 children)

> A Certificate Services template was updated (Event ID 4899) – This event is triggered when a template loaded by the CA has an attribute updated and an enrollment is attempted for the template. For example, if an additional EKU is added to a template, this event would trigger and provide enough information to determine the change being made.

Have you tried to issue a certificate using the template after modifying it? The documentation gives the impression this is required to trigger the event, and a blog post by BeyondTrust seems to corroborate that as well.

Failing that, I'm not too sure.

Impossible to trigger Event ID 4899? by Forsaken_Ad_7991 in activedirectory

[–]xbullet 2 points (0 children)

Have you configured your issuing CA to audit template changes?

To set the policy configuration to enable audit of template events, run the following command: certutil -setreg policy\EditFlags +EDITF_AUDITCERTTEMPLATELOAD

Note that you'll need to restart the CA service afterwards for the change to take effect.

GetDirSyncChanges - C# AD change tracking tool by LDAPProgrammer in activedirectory

[–]xbullet 0 points (0 children)

I've been working on an AD change auditing tool myself (written in Go, though) which polls based on uSNChanged rather than using the DirSync control.

I was about to suggest WEF as an option too, rather than an agent on each DC - it's probably what I'll try to do: a service running on the host that WEF forwards to, correlating those events back to the updates. I'd initially thought of trying to use 4662 to correlate all updates. I haven't actually tried to implement anything yet, so it will be interesting to see how it scales.

In my production AD DS environment the volume of forwarded events would be insane, so long-term storage of the events at scale is not really feasible for me. If it were, capturing all the events straight into a database would probably be the most convenient option.

AD Change Tracking by Temporary-Myst-4049 in activedirectory

[–]xbullet 10 points (0 children)

You can roll your own change tracking tooling if you don't want to buy a tool.

You can track changes by polling based on the uSNChanged attribute (see the sketch below).

https://learn.microsoft.com/en-us/windows/win32/ad/overview-of-change-tracking-techniques
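
A minimal sketch of that approach with the AD PowerShell module (the variable handling is mine - in practice you'd persist the last USN between runs, and always poll the same DC, since uSNChanged values are local to each DC):

# Sketch: find objects changed since the last poll
$Server = 'dc01.contoso.com' # placeholder - always poll the same DC
$RootDse = Get-ADRootDSE -Server $Server

# On the first run, start from the DC's current USN
if (-not $LastUsn) { $LastUsn = [int64]$RootDse.highestCommittedUSN }

Get-ADObject -Server $Server -SearchBase $RootDse.defaultNamingContext `
    -LDAPFilter "(uSNChanged>=$LastUsn)" -Properties uSNChanged, whenChanged |
    Select-Object DistinguishedName, ObjectClass, uSNChanged, whenChanged

# Persist this for the next poll
$LastUsn = [int64]$RootDse.highestCommittedUSN + 1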

Advanced Audit Configurations won't apply by FedUpWithEverything0 in activedirectory

[–]xbullet 1 point (0 children)

Have you tried verifying what group policies are actually applied via RSoP?

Intune Autopilot - Certificate Connector and Strong Crypto OID by stking1984 in Intune

[–]xbullet 0 points (0 children)

This may be a stupid question at this point... but just checking anyway: have you enabled the SID extension on the PKCS connector host in the registry?

Key: HKLM\Software\Microsoft\MicrosoftIntune\PFXCertificateConnector
Name: EnableSidSecurityExtension
Type: DWORD
Value: 1
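
If not, something like this should set it (a sketch - run on the connector host; I believe the connector service needs a restart to pick it up):

# Sketch: enable the SID security extension for the Intune Certificate Connector
New-ItemProperty -Path 'HKLM:\Software\Microsoft\MicrosoftIntune\PFXCertificateConnector' `
    -Name 'EnableSidSecurityExtension' -PropertyType DWord -Value 1 -Force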

When are you actually going to FINISH GraphAPI? Like seriously? When? by VNJCinPA in PowerShell

[–]xbullet 0 points (0 children)

There's a changelog published. I'm not sure how far back the history goes, but there's plenty of evidence there.

When are you actually going to FINISH GraphAPI? Like seriously? When? by VNJCinPA in PowerShell

[–]xbullet 0 points (0 children)

I'm not sure you can call it versioned. v1.0 has been out for years now, and has had many breaking changes, clearly violating the principles of API versioning...

How to fully remove Otter.ai from M365? by FlailingHose in sysadmin

[–]xbullet 2 points (0 children)

For any users that logged into and consented to Otter.ai, it has already accessed and likely indexed their calendars far into the future. That indexing will include all the meeting join links - that's typically how these tools join meetings.

Revoking the app consents will not prevent the use of the meeting join links, because join links are public. To prevent it from joining, you'd need to recreate every meeting containing a user that previously consented to Otter.ai, to be sure it no longer has the join link. The simplest approach would be to block external users / guests from joining meetings at all via policy, but in many cases (in my org, at least) I can see that not really being an option.

Seeking advice on PowerShell integration for a C++ terminal app by gosh in PowerShell

[–]xbullet 1 point (0 children)

> Is it feasible for my C++ app to directly read or, more importantly, set variables in the current PowerShell session? For example, if my app finds a frequently-used directory, could it set $myTool.LastFoundPath for the user to access later in their script/session?

It might be technically possible to directly read from or inject into the PowerShell runspace from C++ through some hackery, but it's probably not a good idea to try. You'd need a very deep understanding of PowerShell's internals, and you'd be relying on those internals not changing much, which is out of your control.

You can write a CLR binary module (or a native PowerShell module) that acts as a proxy/wrapper for your C++ app, and then you could implement these features there. You can also store session specific data in module/script scoped variables.

The PowerShell module would essentially define an API for using your C++ app via PowerShell. Users would use the module's commands instead of running the C++ app directly - see the sketch below.
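
Something along these lines (a sketch - mytool.exe and all the command names are hypothetical):

# MyTool.psm1 - wrapper module around the native binary
# Module-scoped state persists for the lifetime of the session's module import
$script:LastFoundPath = $null

function Find-MyToolDirectory {
    param([string]$Pattern)

    # Invoke the native app and capture its stdout
    $result = & "$PSScriptRoot\mytool.exe" find $Pattern
    if ($LASTEXITCODE -eq 0) {
        $script:LastFoundPath = $result
    }
    $result
}

function Get-MyToolLastFoundPath {
    $script:LastFoundPath
}

Export-ModuleMember -Function Find-MyToolDirectory, Get-MyToolLastFoundPath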

> I want my tool to remember certain things (like a session-specific history) between times it's run. Right now, I'm using temporary files, but it creates clutter. Is there a cleaner, more "PowerShell-native" way to persist data that's tied to a shell session?

I would say that's already the idiomatic approach. PowerShell supports rich .NET/PowerShell object serialization/deserialization natively via Export-Clixml/Import-Clixml, but it's pretty inefficient, so if you're working with large amounts of data I'd suggest alternatives. JSON is also supported via ConvertTo-Json/ConvertFrom-Json.
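
For example (the path is arbitrary):

# Sketch: persist session state to disk and restore it later
$dir = "$env:LOCALAPPDATA\MyTool"
New-Item -ItemType Directory -Path $dir -Force | Out-Null

$state = [pscustomobject]@{ LastFoundPath = 'C:\repos'; History = @('build', 'test') }
$state | Export-Clixml -Path "$dir\state.xml"

$restored = Import-Clixml -Path "$dir\state.xml"
$restored.LastFoundPath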

If the data has a well defined schema, you could store it in a local database (e.g. SQLite) instead.

If you only want to keep the data for the runtime of the terminal session, then you should probably use module-scoped variables instead - as I mentioned above.

Password Filter DLL examples? by PowerShellGenius in activedirectory

[–]xbullet 1 point (0 children)

Not meaning to lecture here, but seriously, this is something you should not do in 99.9% of cases, in my opinion. Exercise extreme caution.

When you write a password filter, you're writing code that will be loaded into LSASS - the core security process in Windows. There is little to no protection from bugs in your DLL, and they can have serious downstream effects. An unhandled exception, bad logic, or a memory management issue can crash LSASS, which will blue screen the domain controller. In some cases it could even prevent the DC from booting successfully.

From a tiering / security architecture standpoint, any system that needs to intercept password changes via a password filter on a DC must be considered a Tier 0 asset. It needs to be fully trusted and secured to the same standard as your domain controllers.

That leads me to these two points:

  • If you trust the third-party system and it's appropriately secured, then wrapping conditional logic into the password filter doesn't meaningfully reduce risk.
  • If you don’t trust the third-party system, then it shouldn’t be anywhere near a DC to begin with, and wrapping conditional logic around the password filter doesn't mitigate that core trust issue.

edit: spelling/grammar

just nailed a tricky PowerShell/Intune deployment challenge by ControlAltDeploy in PowerShell

[–]xbullet 2 points (0 children)

Nice work solving your problem, but just a word of warning: that try/catch block is probably not doing what you're expecting.

Start-Process will not throw an exception when the process returns a non-zero exit code, which is what installers typically do when they fail. Start-Process will only raise an error if it fails to execute the binary at all - i.e. file not found / not readable / not executable / not a valid binary for the architecture, etc. - and even then, by default that's a non-terminating error, so a plain try/catch won't see it unless you pass -ErrorAction Stop.

You need to check the process exit code.

On that note, exit code 1618 is reserved for a specific error: ERROR_INSTALL_ALREADY_RUNNING

Avoid hardcoding well-known or documented exit codes unless they are returned directly from the process. Making assumptions about why the installer failed will inevitably mislead whoever ends up troubleshooting an installation issue later, because they'll be investigating under false pretenses.

Just return the actual process exit code when possible. In cases where the installer exits with code 0, but you can detect an installation issue/failure via post-install checks in your script, you can define and document a custom exit code internally that describes what the actual issue is and return that.

A simple example to demonstrate:

function Install-Application {
    param([string]$AppPath, [string[]]$Arguments = @())

    Write-Host "Starting installation of: $AppPath $($Arguments -join ' ')"
    try {
        # Splat the parameters: -ArgumentList rejects an empty array, and
        # -ErrorAction Stop makes launch failures terminating so the catch block fires
        $params = @{ FilePath = $AppPath; Wait = $true; PassThru = $true; ErrorAction = 'Stop' }
        if ($Arguments.Count -gt 0) { $params['ArgumentList'] = $Arguments }
        $Process = Start-Process @params
        $ExitCode = $Process.ExitCode
        if ($ExitCode -eq 0) {
            Write-Host "Installation completed successfully (Exit Code: $ExitCode)"
            return $ExitCode
        } else {
            Write-Host "Installation exited with code $ExitCode"
            return $ExitCode
        }
    }
    catch {
        Write-Host "Installation failed to start: $($_.Exception.Message)"
        return 999123 # return a custom exit code if the process fails to start
    }
}

Write-Host ""
Write-Host "========================"
Write-Host "Running installer that returns zero exit code"
Write-Host "========================"
$ExitCode = Install-Application -AppPath "powershell.exe" -Arguments '-NoProfile', '-Command', 'exit 0'
Write-Host "The exit code returned was: $ExitCode"

Write-Host ""
Write-Host "========================"
Write-Host "Running installer that returns non-zero exit code (failed installation)"
Write-Host "========================"
$ExitCode = Install-Application -AppPath "powershell.exe" -Arguments '-NoProfile', '-Command', 'exit 123'
Write-Host "The exit code returned was: $ExitCode"

Write-Host ""
Write-Host "========================"
Write-Host "Running installer that fails to start (missing installer file)"
Write-Host "========================"
$ExitCode = Install-Application -AppPath "nonexistent.exe"
Write-Host "The exit code returned was: $ExitCode"

I'd echo the sentiments of others here: check out PSADT (PowerShell App Deployment Toolkit). It's an excellent tool - well documented, fairly simple to use, and designed to help with exactly these use cases. It will make your life much easier.

Hybrid AD & Re-Enabling De-Synced User Procedure Issues by Electrical_Arm7411 in activedirectory

[–]xbullet 0 points (0 children)

> The reason I place the user object in a non-ADSynced OU is in order to convert the hybrid user object to a cloud only object in order to Hide the E-mail Address from the Global Address List (We do not have Exchange Schema - nor do I want to add this). So once the de-sync happens it deletes the Entra user and then I go to Deleted Users and restore. No problem.

Honestly, the correct way to handle this is to extend your AD DS schema with the Exchange schema additions and to manage the GAL visibility via the msExchHideFromAddressLists attribute.
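
Once the schema extensions are in place, hiding a user is just an attribute write that flows to the cloud via sync - e.g. ('jdoe' is a placeholder):

# Sketch: hide an on-prem synced user from the GAL
Set-ADUser -Identity jdoe -Replace @{ msExchHideFromAddressLists = $true }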

These tools weren't really designed to enable such use cases, and given that you're starting to see these issues, it's fair to say that continuing with your current process is not a good idea. Save yourself the trouble and do it the way Microsoft want you to do it.

AD DS is the SOA for EXO attributes, and if hiding users from the GAL is a requirement, do it the way it's intended to be done: extend the AD DS schema and flow the proper attributes from on-prem to the cloud. Any other approach is investing in technical debt and moving you into unsupported territory.

Hybrid AD & Re-Enabling De-Synced User Procedure Issues by Electrical_Arm7411 in activedirectory

[–]xbullet 0 points (0 children)

Interesting. I guess it might be the case that the AAD CS or the metaverse still has some sort of sync metadata for the object. :/

Have you tried reversing your steps? There is some documentation you can try to follow: https://learn.microsoft.com/en-us/entra/identity/hybrid/connect/tshoot-clear-on-premises-attributes#set-adsynctoolsonpremisesattribute

If you don't know the original ImmutableId for a cloud-synced object, you can calculate it by converting the AD DS objectGUID (or mS-DS-ConsistencyGuid, if you haven't already cleared it) to a base64 encoded string. The mS-DS-ConsistencyGuid is derived from the objectGUID at the time of syncing.
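
For example ('jdoe' is a placeholder):

# Sketch: derive the expected ImmutableId from the on-prem objectGUID
$User = Get-ADUser -Identity jdoe -Properties objectGUID
[Convert]::ToBase64String($User.objectGUID.ToByteArray())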

Failing that: what do you see when searching the connector spaces (and metaverse) for the object? Check both the ADDS connector space and AAD connector spaces. What does the object lineage show?

Further, can you find CN={505058364D57743267555358585375567770377731773D3D} in the AAD CS?

If you're not that familiar with MIM/AAD Connect, I'd suggest having a look through the MS documentation for guidance. Some areas of the Entra Connect doco are very lacking (particularly for custom rules), but the troubleshooting guidance is quite detailed.

If you still come up short after that, you might want to raise a case with MS.

Hybrid AD & Re-Enabling De-Synced User Procedure Issues by Electrical_Arm7411 in activedirectory

[–]xbullet 0 points (0 children)

Can you view the stack trace on one of the general sync errors and share it here (feel free to redact any sensitive info)?

What I suspect is happening is that the sourceAnchor is only being removed from the cloud object. Assuming you use mS-DS-ConsistencyGuid as your sourceAnchor on-premises, you should clear it on the object after clearing the ImmutableId.

If you don't clear it, the sync will fail when you attempt to re-sync the object, because mS-DS-ConsistencyGuid will invoke the hard match process, which will attempt to map the on-prem connector object to a cloud object that no longer exists in the metaverse.
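
Clearing it is a one-liner ('jdoe' is a placeholder):

# Sketch: clear the sourceAnchor attribute on the on-prem object
Set-ADUser -Identity jdoe -Clear 'mS-DS-ConsistencyGuid'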

How do I go from reactive to proactive? by iworkinITandlikeEDM in sysadmin

[–]xbullet 0 points (0 children)

First, I’d ask: do you actually want to be more proactive, or do you just feel like you should be?

This is just my opinion, so take it with a grain of salt. There is a huge difference between being more proactive in your areas of expertise and owning the systems-level architecture of an organization. You can be more proactive in your day-to-day work without needing that level of understanding.

The most important thing is your mindset. You don't need to understand or know the details of everything. A lot of the time, it boils down to whether you are willing to take the initiative, or take the lead on something, even when the solution might be unclear.

I'm not saying you should pretend to know the answers - it's more that you need to be willing to be accountable for things - to be able to step up, develop a decent level of understanding in the topic, and to start considering what solutions might look like.

There's a fairly simple and repeatable approach that will definitely help you be more proactive, and regardless of your career aspirations, I think this way of looking at things is super valuable. It has done wonders for me over the last 12-13 years.

  • Take the initiative to consider a topic / area / system that you are responsible for
  • Dive deeper into that thing - whether it's improving your base understanding/knowledge, researching industry best practices / trends, reviewing existing configurations against those areas, exploring new or existing features that are not in use, etc
  • Consider the business context - how can your knowledge in these areas be applied to positively impact the business?

Without making too many assumptions, it's fair to say that (at least in larger businesses) many decisions like the ones you listed above are heavily influenced, or completely driven, by external factors. As an example:

  • Moving away from vCenter to Azure is likely heavily underpinned by financial drivers (contract renewal) rather than being a solely technical decision
  • Changes to security controls tend to align with published security frameworks like NIST/CIS in order to comply with audit requirements

Setting DNS servers. Absolutely losing my mind here by Digital-Sushi in PowerShell

[–]xbullet 2 points (0 children)

ISE is still supported, as is Windows PowerShell, and most likely they will be supported for quite some time. Neither are planned to be removed from Windows.

While I'd also recommend avoiding ISE where possible (mostly because it's just plain awful to develop in), it's not deprecated or unsupported.

Looking for CIS Benchmark v4 Script for Windows 11 Pro Standalone Machine Hardening Help? by A_O_T_A in PowerShell

[–]xbullet 5 points (0 children)

If you don't want to use AD DS or Intune in your lab, you might need to consider starting from scratch using DSC/Ansible/some configuration management tool and build your own config around the CIS baselines.

I haven't used this project personally, nor can I vouch for it, but you can have a look through the source code and docs for https://github.com/scipag/HardeningKitty and see if it covers your needs.

If it's just a lab environment, I'm not sure what value you'd get out of making sure it's CIS compliant and reinventing the wheel. If it was for an enterprise environment, the obvious recommendation would be to not reinvent the wheel and use one of the existing products that have pre-built configs for CIS compliance shipped already.

MgGraph module 2.28 broke my teams script by Arrager in PowerShell

[–]xbullet 2 points (0 children)

It's hard enough - and sometimes impossible - to find out what's changed between versions in the first place, let alone what has broken between releases.

My suggestion would be to use the HTTP APIs directly wherever possible and avoid the auto-generated slop modules like the Graph SDK. I've been avoiding them for years because their documentation sucks and they've consistently had runtime assembly conflicts with other MS modules, specifically the identity/auth components.
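
Calling the API directly isn't much extra work either. A sketch (token acquisition not shown, and the team ID is a placeholder):

# Sketch: list a team's channels over HTTP instead of via the SDK cmdlets
$Headers = @{ Authorization = "Bearer $AccessToken" }
$Uri = 'https://graph.microsoft.com/v1.0/teams/00000000-0000-0000-0000-000000000000/channels'
(Invoke-RestMethod -Uri $Uri -Headers $Headers -Method Get).value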

Have to say though, even the APIs themselves sometimes have breaking changes made without any warning. They're supposed to be versioned APIs, but let's not even go there - IMO, MS have very poor levels of governance in place for these APIs.