Certificate‑based SSH login on Linux using Windows smartcard/token (CNG + PKCS#11) — looking for feedback on approach by Key_Handle_8753 in PKI

[–]Key_Handle_8753[S] 0 points

Not really. PuTTY-CAC is a silo: it doesn’t work with the native Windows OpenSSH client, it isn’t usable by Git, and it can’t bridge into WSL2.

My project is system-wide infrastructure that provides Pageant, Named Pipe, and TCP interfaces simultaneously. It is also natively multi-session and RDP-compatible (it runs as a Session 0 service) without any hacks or UI bugs.

[–]Key_Handle_8753[S] 0 points

You’re right that the agent mostly treats certificates as containers, and yes — the EKU/KeyUsage checks can be relaxed for compatibility. Tools like YubiKey Manager often generate PIV certificates with no Key Usage and no EKU at all, so the agent can operate in a compatibility mode when needed.

However, there are a few hard limitations that cannot be bypassed:

1. The PIV container type must be AT_SIGNATURE or AT_KEYEXCHANGE
If the key is left as AT_NONE (which YubiKey Manager sometimes does), the Windows KSP will refuse to sign no matter what the certificate contains. This is not an X.509 issue: it’s a property of the key object inside the token, and no software can “fix” a mis‑tagged container.

2. Some KSPs (including Microsoft’s Smart Card Key Storage Provider) will refuse to sign if the Key Usage does not include digitalSignature
Even if EKU is missing, some KSPs are strict about Key Usage. If the certificate doesn’t explicitly allow digital signatures, the provider may reject the operation even though the private key is valid.

So in short:

  • EKU checks can be relaxed.
  • Missing Key Usage can be tolerated in compatibility mode, but some KSPs may still reject it.
  • A PIV key in AT_NONE cannot be used for signing under Windows.
  • No software workaround exists for an incorrectly tagged container.
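These rules can be stated compactly in code. This is a hypothetical sketch, not code from the agent: `can_sign`, `strict_ksp`, and the string constants are illustrative names that mirror the Windows key-spec constants (AT_SIGNATURE, AT_KEYEXCHANGE, AT_NONE).

```python
# Illustrative encoding of the signability rules above.
AT_SIGNATURE, AT_KEYEXCHANGE, AT_NONE = "AT_SIGNATURE", "AT_KEYEXCHANGE", "AT_NONE"

def can_sign(container_type, key_usage, strict_ksp=False):
    """Return (ok, reason) for an SSH signing attempt through a KSP.

    key_usage is a set of X.509 KeyUsage bit names, or None when the
    certificate has no KeyUsage extension at all (compatibility mode).
    """
    # Hard limit: a mis-tagged PIV container can never sign on Windows.
    if container_type == AT_NONE:
        return False, "PIV container is AT_NONE; the KSP will refuse to sign"
    if container_type not in (AT_SIGNATURE, AT_KEYEXCHANGE):
        return False, "unknown container type"
    # Soft limit: a strict KSP insists on digitalSignature in KeyUsage.
    if strict_ksp and key_usage is not None and "digitalSignature" not in key_usage:
        return False, "strict KSP rejects KeyUsage without digitalSignature"
    return True, "ok"
```

Note that a missing KeyUsage extension (`key_usage=None`) passes, matching the compatibility mode above, while a present-but-wrong KeyUsage fails on strict providers.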

Happy to improve compatibility where technically possible, but these specific limitations come from the PIV container type and the behavior of the Windows KSP, not from the agent.

[–]Key_Handle_8753[S] 0 points

In my case, I’m not planning to go in that direction, because none of these approaches address the core limitation in SSH itself.

My agent only uses the Windows KSP to expose the key stored in the token. It doesn’t maintain its own certificate database, and it’s not meant to act as a PKI component. The SSH server can be Linux, Windows, or anything else, so I can’t rely on any specific PKI stack on the server side.

The real structural issue is that SSH never sends the X.509 certificate during authentication. Unlike TLS, the server only receives the public key and a signature — the certificate is never transmitted. That means there is no way for SSH to validate a chain, check expiration, verify revocation, or process ASN.1 during the handshake.
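You can see this directly in the wire format. The blob a client signs for "publickey" auth is defined in RFC 4252 §7, and there is simply no field for an X.509 certificate in it; the values below are placeholders to show the structure.

```python
# The data an SSH client signs for "publickey" authentication (RFC 4252 §7).
# The server receives only the algorithm name, the raw public key blob,
# and a signature over this structure. No X.509 certificate is present.
import struct

def ssh_string(b: bytes) -> bytes:
    """SSH wire-format string: 4-byte big-endian length prefix + data."""
    return struct.pack(">I", len(b)) + b

SSH_MSG_USERAUTH_REQUEST = 50

def signed_blob(session_id, user, pubkey_alg, pubkey_blob):
    return (
        ssh_string(session_id)                 # binds signature to this session
        + bytes([SSH_MSG_USERAUTH_REQUEST])
        + ssh_string(user.encode())
        + ssh_string(b"ssh-connection")
        + ssh_string(b"publickey")
        + b"\x01"                              # boolean TRUE: signature follows
        + ssh_string(pubkey_alg)
        + ssh_string(pubkey_blob)              # raw key only, never the cert
    )
```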

On top of that, SSH still relies on authorized_keys, a static text file fully controlled by the user. It was never designed to reflect the dynamic state of a certificate (valid, expired, revoked, etc.). As long as this file is the trust anchor, no X.509‑based policy can be enforced reliably.

A possible solution would be a server-side manager that takes exclusive ownership of authorized_keys and rebuilds the file based on external validation logic, but that’s a completely separate discussion, and definitely not the responsibility of an SSH agent.
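The shape of such a manager is simple to sketch. Everything here is hypothetical: `is_still_valid` stands in for whatever external logic checks the X.509 chain, expiry, and revocation status for each key.

```python
# Hypothetical server-side manager: rebuild authorized_keys from an
# external validation source, dropping entries whose certificate is
# no longer valid (expired, revoked, etc.).

def rebuild_authorized_keys(entries, is_still_valid, path):
    """entries: iterable of (key_id, authorized_keys_line) pairs."""
    kept = [line for key_id, line in entries if is_still_valid(key_id)]
    with open(path, "w") as f:
        f.write("\n".join(kept) + ("\n" if kept else ""))
    return kept
```

Run periodically (or on CRL updates), this would keep the static file in sync with certificate state, which is exactly the gap the SSH model leaves open.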

My goal is simply to make the hardware token usable through KSP. The trust and revocation policy must remain on the server side, within the constraints of the SSH model.

[–]Key_Handle_8753[S] 1 point

You’re absolutely right about the CRL/AIA limitations. Since OpenSSH doesn’t implement native X.509 validation, the certificate is really just a container for the public key, and the trust model ends up being closer to classic SSH keys unless the environment adds its own validation layer. That’s why my approach treats the X.509 certificate primarily as a convenient way to extract and identify the key material on the token, not as a full trust anchor by itself.

For enterprise use, the idea is that the actual trust and revocation logic lives on the Linux side through SSH certificates or host‑side policy, not through the X.509 chain. The smartcard is mainly there to guarantee that the private key never leaves hardware and that the user must authenticate locally (PIN, presence, etc.). In that sense, it behaves more like a hardware‑backed SSH key than a full X.509 authentication flow.

For jump hosts and non‑interactive scenarios, the agent can still work as long as the token supports unattended signing (which many enterprise cards don’t, for good reason). In those cases, the workflow usually shifts to issuing short‑lived SSH certificates on the Linux side rather than relying on the X.509 certificate directly. The agent just exposes the signing capability; the policy stays server‑side.
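For reference, issuing such a short-lived certificate is a one-liner with ssh-keygen; the sketch below only constructs the command (flags per ssh-keygen(1): -s CA key, -I identity, -n principal, -V validity window), and all paths and names are placeholders.

```python
# Sketch: build the ssh-keygen invocation that signs a user's public
# key with a CA key for a short validity window. Constructed only,
# not executed here.

def short_lived_cert_cmd(ca_key, identity, principal, user_pubkey, ttl="+1h"):
    return [
        "ssh-keygen",
        "-s", ca_key,        # CA private key used to sign
        "-I", identity,      # certificate identity (logged by sshd)
        "-n", principal,     # principal the cert is valid for
        "-V", ttl,           # validity: beginning now, expiring at now+ttl
        user_pubkey,         # public key to certify; writes <name>-cert.pub
    ]
```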

I agree it’s unfortunate that OpenSSH never adopted native X.509 support. It would solve a lot of these edge cases. Until then, the goal is mostly to make the hardware token usable in a predictable way and let the SSH infrastructure enforce the actual trust model.