Cryptography Asked by Guille on October 24, 2021
As far as I know, FIPS requires a set of self-tests (power-on self-tests, POST) to verify the permitted cryptographic algorithms and the integrity of the module.
These tests are performed at run time, so OpenSSL computes an HMAC-SHA1 of the code loaded in memory and compares the output with the HMAC-SHA1 computed at build time.
I think that an attacker could modify the source code, compile it, and then both MACs would match again. Also, the HMAC secret key is disclosed, since OpenSSL is open-source software.
Where is the security here? Keeping the HMAC key secret? What is the most common approach to fulfilling this FIPS requirement?
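The check, and the rebuild attack described above, can be sketched with Python's standard library. The key and payloads here are placeholders; the real module computes the MAC over specific segments of the loaded object code:

```python
import hmac
import hashlib

# Placeholder key: the real key is hardcoded in the open-source module,
# so it provides no secrecy either way.
HMAC_KEY = b"publicly-known-key"

def fingerprint(code: bytes) -> bytes:
    """HMAC-SHA1 of the module's code, computed at build time and at POST."""
    return hmac.new(HMAC_KEY, code, hashlib.sha1).digest()

module_code = b"...object code of the crypto module..."
reference_mac = fingerprint(module_code)      # embedded at build time

# Power-on self-test: recompute and compare.
assert hmac.compare_digest(fingerprint(module_code), reference_mac)

# The attack from the question: patch the code, recompute the MAC, and the
# fresh MAC matches the patched code just as well.
patched = module_code.replace(b"crypto", b"patched")
patched_mac = fingerprint(patched)
assert hmac.compare_digest(fingerprint(patched), patched_mac)
```

Since both the verification code and the key ship inside the module, nothing stops an attacker who can modify the code from recomputing the reference value too.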
Indeed, FIPS 140-2 requires a module to validate itself, and this has no security benefit.
In order for a code integrity check to have a security benefit, both the code that performs the verification and the data that it uses (hash value for a hash, MAC value and secret key for a MAC, public key for a signature) need to be integrity-protected (and with a MAC, the key also needs to be confidentiality-protected). Since the verification code is part of the module, the code integrity verification does not provide any security benefit. It guarantees the module's integrity under the assumption that the module's integrity is not breached.
As far as I know, this started as a requirement for hardware modules, which had to perform some kind of self-test that would typically fail if the physical integrity of the module was compromised. Basically, a physical tamper detection mechanism: make sure that the keys are unusable if the box has been opened. This doesn't translate well to software, but the requirement has been kept and has evolved to require cryptography, partly due to inertia and partly due to the illusory wish that something should be done.
An integrity (or authenticity) check is useful when the module that performs the check is better-protected than the module that is checked. For example, some devices boot from ROM (as in, physical ROM, which can only be modified by completely taking the device apart), which then transfers control to updatable, physically exposed storage (such as a hard disk or flash memory). If the ROM contains signature verification code and a root public key, and the ROM code verifies the integrity or authenticity of the external storage before transferring control to it, there is a real security benefit: it can ensure that the device will only run trusted code. Depending on who you ask, this is called secure boot or trusted boot. There is often a chain of secure boot, from ROM to firmware bootloader to operating system bootloader to operating system kernel to user land utilities. This chain relies on physical integrity protection for the first element, and cryptography to extend trust to subsequent elements.
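The chain-of-trust structure can be sketched as follows. To stay dependency-free, this toy model pins a digest of each next stage instead of verifying a signature against a root public key (which is what real secure boot does, so that later stages can be updated), but the shape is the same: the trusted first stage checks the next stage before handing over control.

```python
import hashlib

# Stages of the boot chain, stored on updatable, physically exposed media.
stages = [b"firmware bootloader", b"os bootloader", b"kernel"]

# Reference digests, fixed in ROM at manufacturing time (the analogue of
# a root public key plus signatures in a real secure-boot design).
expected = [hashlib.sha256(s).digest() for s in stages]

def boot(stages, expected):
    """Verify each stage against its pinned digest before 'running' it."""
    for stage, digest in zip(stages, expected):
        if hashlib.sha256(stage).digest() != digest:
            raise RuntimeError("integrity failure: refusing to boot")
        # ...transfer control to `stage` here...
    return "booted"

assert boot(stages, expected) == "booted"

# Tampering with any element of the chain is detected by the stage before it:
try:
    boot([b"evil bootloader"] + stages[1:], expected)
except RuntimeError:
    pass
```

The security benefit comes entirely from the first element being physically protected: verification data held in ROM cannot be rewritten to match tampered stages, unlike a reference value stored alongside the code it checks.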
Some security standards that are applicable to a specific kind of device (for example a PC, a smartphone, a smartcard, etc.) have specific requirements on secure boot. But FIPS 140 is too generic for that.
So treat the FIPS 140 integrity check as a compliance requirement, not as a security requirement. It's one of the many hoops through which you need to jump. Fiddle with your operating system's low-level interfaces to figure out how to read code in memory before any dynamic linking or randomization takes place. Calculate the expected MAC value and store it somewhere that isn't part of the verified region (because it's infeasible to find $v$ such that $\mathsf{MAC}(m) = v$ and $v$ is a substring of $m$). Arrange for your code's initialization sequence to verify the MAC of the code against the reference value stored next to it. It's part of the cost of getting a FIPS stamp.
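A sketch of that layout, with a hypothetical key: the 20-byte reference MAC is appended right after the code, and the verification covers everything except the reference itself, since the reference cannot be inside the region it authenticates.

```python
import hmac
import hashlib

KEY = b"public-key-bytes"  # hypothetical; its secrecy is not what matters

def build_image(code: bytes) -> bytes:
    """Build step: append the reference MAC after the code it covers."""
    return code + hmac.new(KEY, code, hashlib.sha1).digest()

def self_check(image: bytes) -> bool:
    """Init step: recompute the MAC over everything except the trailing
    reference, then compare against that reference."""
    code, ref = image[:-20], image[-20:]  # HMAC-SHA1 output is 20 bytes
    return hmac.compare_digest(hmac.new(KEY, code, hashlib.sha1).digest(), ref)

image = build_image(b"module code, after relocation and linking")
assert self_check(image)

# Patching a code byte without updating the reference fails the check.
tampered = b"X" + image[1:]
assert not self_check(tampered)
```

This detects accidental corruption and naive patching, which is all the compliance requirement really asks for; an attacker who can rewrite the image can simply rebuild the trailing MAC as well.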
Answered by Gilles 'SO- stop being evil' on October 24, 2021