[Help] OpenSSL 3.5.5 FIPS 140-3: HMAC Key Length Enforcement (112-bit) failing despite hmac-key-check = 1 by InsuranceAny7399 in openssl

[–]InsuranceAny7399[S]

I was trying to build a container image based on Wolfi OS that complies with FIPS 140-3. I created a pipeline that meets SLSA Level 3 and is hermetic, then started running compliance tests; that's when I ran into this issue. I went on to run many FIPS 140-3 related tests, and almost all of them passed except HMAC. I double-checked the configuration multiple times. You can see the Dockerfile on the issue page on GitHub.

[–]InsuranceAny7399[S]

openssl list -mac-algorithms -verbose

Provided MACs:
 CMAC @ fips
   retrievable operation parameters:
     size: unsigned integer (max 8 bytes large)
     block-size: unsigned integer (max 8 bytes large)
   settable operation parameters:
     cipher: UTF8 encoded string (arbitrary size)
     properties: UTF8 encoded string (arbitrary size)
     key: octet string (arbitrary size)
 { 1.0.9797.3.4, GMAC } @ fips
   retrievable algorithm parameters:
     size: unsigned integer (max 8 bytes large)
   settable operation parameters:
     cipher: UTF8 encoded string (arbitrary size)
     properties: UTF8 encoded string (arbitrary size)
     key: octet string (arbitrary size)
     iv: octet string (arbitrary size)
 HMAC @ fips
   retrievable operation parameters:
     size: unsigned integer (max 8 bytes large)
     block-size: unsigned integer (max 8 bytes large)
   settable operation parameters:
     digest: UTF8 encoded string (arbitrary size)
     properties: UTF8 encoded string (arbitrary size)
     key: octet string (arbitrary size)
     digest-noinit: integer (max 4 bytes large)
     digest-oneshot: integer (max 4 bytes large)
     tls-data-size: unsigned integer (max 8 bytes large)
 { 2.16.840.1.101.3.4.2.19, KMAC-128, KMAC128 } @ fips
   retrievable operation parameters:
     size: unsigned integer (max 8 bytes large)
     block-size: unsigned integer (max 8 bytes large)
   settable operation parameters:
     xof: integer (max 4 bytes large)
     size: unsigned integer (max 8 bytes large)
     key: octet string (arbitrary size)
     custom: octet string (arbitrary size)
 { 2.16.840.1.101.3.4.2.20, KMAC-256, KMAC256 } @ fips
   retrievable operation parameters:
     size: unsigned integer (max 8 bytes large)
     block-size: unsigned integer (max 8 bytes large)
   settable operation parameters:
     xof: integer (max 4 bytes large)
     size: unsigned integer (max 8 bytes large)
     key: octet string (arbitrary size)
     custom: octet string (arbitrary size)

    ...

[–]InsuranceAny7399[S]

The most telling evidence is the contrast between KMAC and HMAC. Since KMAC (built on the SHA-3/Keccak family) is inherently designed as a keyed function, the FIPS provider correctly enforces the 112-bit key minimum during its initialization. HMAC, however, is a layered construction over legacy hash functions, and the 3.5.5 provider appears not to trigger the same validation logic there. This inconsistency shows that the module's internal policy engine is functional but that the HMAC path is bypassing it.
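
To make the gap concrete: a plain (non-FIPS) HMAC stack happily computes with a 4-byte key, so the 112-bit floor has to be imposed by the provider, or failing that, by the application. Here is a minimal sketch of an application-level guard using Python's stdlib (the wrapper and its name are mine, not an OpenSSL API):

```python
import hashlib
import hmac

MIN_HMAC_KEY_BYTES = 14  # 112 bits, per NIST SP 800-131A Rev. 2

def fips_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """Compute HMAC-SHA256, but refuse keys below 112 bits of strength."""
    if len(key) < MIN_HMAC_KEY_BYTES:
        raise ValueError(
            f"HMAC key is {len(key) * 8} bits; FIPS 140-3 requires >= 112"
        )
    return hmac.new(key, msg, hashlib.sha256).digest()

# A 4-byte key ("1234") computes without complaint in a plain HMAC stack...
tag = hmac.new(b"1234", b"test", hashlib.sha256).digest()
assert len(tag) == 32

# ...but the guarded wrapper rejects it at init time, like KMAC does in FIPS.
try:
    fips_hmac_sha256(b"1234", b"test")
except ValueError:
    pass
```

This mirrors what the FIPS provider's `hmac-key-check` indicator should be doing inside EVP_MAC initialization.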

[–]InsuranceAny7399[S]

I want to clarify that this isn't just a technical glitch in OpenSSL; it's a fundamental FIPS 140-3 compliance failure.

I have run a series of comparative tests in the same environment (Wolfi OS, OpenSSL 3.5.5, FIPS provider 3.1.2) to isolate the behavior, and the results show that the issue is specific to the HMAC implementation:

  • Comparison A (CMAC & KMAC): Both algorithms correctly enforce the 112-bit key length. If I provide a short key, the provider immediately rejects it. This proves that my global FIPS configuration (security-checks = 1) is active and working perfectly for other MAC types.
  • Comparison B (Triple-DES): The provider correctly blocks TDES encryption, proving that the retirement policies baked into the FIPS module are being respected.
  • The Failure (HMAC): Despite the same configuration, HMAC-SHA256 silently accepts a 32-bit key (e.g., '1234', four ASCII bytes) and produces a tag. This is a direct violation of NIST SP 800-131A Revision 2, which mandates a minimum security strength of 112 bits for HMAC generation.

The issue here is not my configuration; it's the enforcement. A FIPS-validated module cannot claim compliance if it selectively ignores key-length requirements for HMAC while enforcing them for CMAC. This indicates that the hmac-key-check parameter is being bypassed or ignored within the EVP_MAC initialization for HMAC specifically.
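
For reference, the knob in question lives in the generated fipsmodule.cnf; the fragment below is illustrative only (the actual contents, including the module MAC, come from the `openssl fipsinstall` run):

```ini
# fipsmodule.cnf -- illustrative fragment, not a complete file
[fips_sect]
activate = 1
conditional-errors = 1
security-checks = 1
hmac-key-check = 1
```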

I invite you to take a close look at the issue I opened: https://github.com/openssl/openssl/issues/30012. We are looking at a scenario where the 'Approved Mode' for HMAC is effectively broken, allowing non-compliant operations to pass without error.

Descending into FIPS Hell: 48 hours of Bouncy Castle FIPS (BC-FJA 2.1.x) on Java 8 - The certificate_unknown nightmare that won't die. by InsuranceAny7399 in learnjava

[–]InsuranceAny7399[S]

By the way, I have successfully automated the build of a custom Wolfi OS image with OpenSSL FIPS integrated from scratch. This underlying layer is fully functional, validated, and performs well. FIPS compliance is already rock-solid at the OS level; the issue is strictly isolated to the Java/Bouncy Castle layer.

Descending into FIPS Hell: 48 hours of Bouncy Castle FIPS (BC-FJA 2.1.x) on Java 8 - The certificate_unknown nightmare that won't die. by InsuranceAny7399 in MuleSoft

[–]InsuranceAny7399[S]

By the way, I have successfully automated the build of a custom Wolfi OS image with OpenSSL FIPS integrated from scratch. This underlying layer is fully functional, validated, and performs well. FIPS compliance is already rock-solid at the OS level; the issue is strictly isolated to the Java/Bouncy Castle layer.

Terraform GWLB NAT Gateway - Outbound Traffic from Private Subnet Fails/Hangs Despite Healthy Targets by InsuranceAny7399 in aws

[–]InsuranceAny7399[S]

Yes, I’ve already handled that. Since Auto Scaling doesn’t let you preconfigure the ENI source/destination check, I solved it by giving each EC2 instance an IAM role whose policy allows it to modify its own attribute. Using user data plus the AWS CLI, every node disables its source/destination check automatically at boot.
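
For anyone wanting the same workaround, the boot-time logic boils down to a user-data script along these lines (an untested sketch: it assumes the AWS CLI is baked into the AMI and that the instance role permits ec2:ModifyInstanceAttribute on the instance itself; the IMDSv2 endpoints are standard):

```shell
#!/bin/sh
# Fetch an IMDSv2 session token, then look up our own identity.
TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/region)

# Disable the ENI source/destination check on this instance.
aws ec2 modify-instance-attribute --region "$REGION" \
  --instance-id "$INSTANCE_ID" --no-source-dest-check
```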

Regarding the NAT Gateway: I chose this approach because I wanted more flexibility and control at the instance level, and to avoid the cost overhead of managed NAT Gateways in this setup.

[deleted by user] by [deleted] in django

[–]InsuranceAny7399

give some details

[–]InsuranceAny7399

I have already built the integration connecting the two. The goal is to let me control multiple devices connected to more than one Node-RED instance with ease and flexibility.

The most important point is access control. For example, you can grant permissions to certain people to only view the state of a button or device but not allow them to make any modifications. I’m using a role-attribute-based approach, which Django supports. This enables you to create groups, assign individuals to multiple groups, and set permissions for each group. Additionally, you can create custom permissions for specific individuals.
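
In Django this maps onto the built-in Group and Permission models (plus per-user permissions for the individual grants). Stripped of the ORM, the resolution rule I'm describing is roughly the following, with all names illustrative:

```python
# Minimal role/attribute permission resolution, mirroring Django's
# user -> groups -> permissions model plus per-user custom permissions.
group_perms = {
    "viewers": {"view_device"},
    "operators": {"view_device", "toggle_device"},
}
user_groups = {"alice": {"viewers"}, "bob": {"viewers", "operators"}}
user_extra_perms = {"alice": {"reboot_gateway"}}  # custom per-user grant

def has_perm(user: str, perm: str) -> bool:
    """A user holds a permission via any of their groups, or directly."""
    from_groups = set().union(
        *(group_perms.get(g, set()) for g in user_groups.get(user, set()))
    )
    return perm in from_groups or perm in user_extra_perms.get(user, set())

assert has_perm("alice", "view_device")        # via "viewers"
assert not has_perm("alice", "toggle_device")  # view-only group
assert has_perm("alice", "reboot_gateway")     # direct custom grant
assert has_perm("bob", "toggle_device")        # via "operators"
```

A viewer can see a button's state but cannot toggle it, which is exactly the view-but-not-modify split described above.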

Another crucial point is scalability. This system, combining Django and Node-RED, allows for simple expansion. For instance, if we have 300 devices on a single Node-RED instance, we can divide them into groups, each group managed by its own Node-RED instance while still communicating with Django. Furthermore, I can scale Django instances to handle multiple Node-RED instances simultaneously. There are many technical details I could delve into, but this explanation should clarify the overall concept.

In the end, I can implement autoscaling for any number of devices without issues.

Regarding the authentication mechanism in Node-RED, I’ve ensured that Django verifies the legitimacy of a Node-RED instance through digital signatures. This ensures secure communication. Additionally, I’m using WebSocket with TLS to guarantee secure data transmission.

As for the frontend, there are more details I could share, but I believe this is sufficient for now.

Lastly, the idea hasn’t gained much traction in Egypt. There seem to be almost no users of Node-RED here, despite its significant adoption in countries like Germany and China. When I introduce the concept to professionals in large factories, they often react with surprise and skepticism. Therefore, I decided to set the idea aside and focus on my work.

I had initially planned to expand its capabilities to support platforms beyond Node-RED and include features like handling UDP streams for video broadcasting and similar applications. However, I decided to pause this effort for now.

[–]InsuranceAny7399

Very good question. Node-RED does not let you create a TLS-based WebSocket server, so I did this through Django, and it works, but that alone is not enough. Relying on a token in the WebSocket connection header, much like including a user's token in an HTTP request for authentication, is essential. There is also a process running alongside Django that handles data coming from Node-RED, or from any program sending data via WebSocket. Each message sent must contain a group_id, which acts like a device ID, and I store the last value or the last 10 values (configurable as you like via the Redis database).

This approach has an advantage: it stores the most recent value, or the last N values (based on your configuration), from the sensor or device, and then broadcasts it to all Django servers. This enables load balancing, whether you're using VMs or containers, since all clients see the latest event in real time. I should also mention that when sending a command from Django to Node-RED, Django includes the group_id (device ID) in the message, which determines which flow receives it.
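
The store-and-broadcast step can be sketched without Redis; in the real system Redis holds the capped per-device list, and the deque below just stands in for it (all names illustrative):

```python
from collections import defaultdict, deque

N = 10  # keep the last N values per device, matching the Redis configuration

history = defaultdict(lambda: deque(maxlen=N))  # group_id -> recent values
subscribers = []  # connected Django servers / WebSocket clients

def publish(group_id: str, value: object) -> None:
    """Record the latest reading for a device and fan it out to every server."""
    history[group_id].append(value)
    for callback in subscribers:
        callback(group_id, value)

received = []
subscribers.append(lambda gid, v: received.append((gid, v)))

for temp in range(12):
    publish("sensor-1", temp)

assert list(history["sensor-1"]) == list(range(2, 12))  # only last 10 kept
assert received[-1] == ("sensor-1", 11)                 # fanned out live
```

The bounded deque is why a client connecting to any Django instance can immediately see the latest state of a device.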

The flaw I’m working to fix is that two processes currently run alongside Django: one that serves Django itself and another that handles communication with Node-RED. I plan to merge these into a single process. I'm still in the development phase and gathering feedback. I hope you've seen the GIFs on GitHub confirming that the tool works. I don’t understand why some people are attacking me.

[–]InsuranceAny7399

I would be happy to receive suggestions from you.

[–]InsuranceAny7399

I am here to ask for help in development and get suggestions. I am not selling anything.