Over the past 50 years, cryptographic strength has traditionally been measured in bits, because bit length determines the number of possible keys that can be generated. The convention grew out of the hard mathematical problems underpinning algorithms such as RSA (integer factorisation). Each additional bit doubles the number of potential keys, exponentially increasing the difficulty of a brute-force attack, i.e., decrypting the data without the correct key. The higher the bit count, the more secure the encryption was thought to be, since it implies a greater number of key combinations a brute-force attacker would have to try to break the encryption.

As encryption algorithms have evolved over time and new ones have been introduced, e.g., elliptic curves and the NIST PQC algorithms, relative strength has continued to be compared from a bit perspective, e.g., RSA-2048 (2048 bits, or 256 bytes) is considered equivalent to ECC-224 (224 bits, or 28 bytes). As we established in previous blog posts, both classical and NIST PQC algorithms are based on mathematical complexity and a level of assumed security; that is, we know of no algorithm that can solve the underlying mathematical problem quickly, therefore we assume it is secure.

Unfortunately, cryptanalysis (or code breaking) does not typically brute-force algorithms; it looks for novel ways of breaching cryptographic systems. Measuring cryptographic strength by number of bits therefore yields an arbitrary number, not a key determinant of actual cryptographic strength. With quantum and AI assailants, cryptographic algorithms based on mathematical problems will be attacked in even more novel ways, given an AI makes no assumptions. Therefore, the only thing we can say with absolute certainty is that any cryptographic primitive based on mathematical complexity will eventually become obsolete, and the privacy of the data it protects will be lost forever.

The only REAL way to mitigate this risk (read our blog here – https://incrypteon.com/cryptographic-risks-issues-mitigation-consequence/) is to target a higher cryptographic bar and align with Information Theoretic Security. This is the only way of protecting ourselves from the quantum and AI threats already in play, which is why, as an industry, we should be more transparent about these risks and issues, ensuring a healthy and open debate grounded in an objective measurement.

## Introducing The “Shannon Security” Score

Introducing the Shannon Security Score™, an objective security and performance score that compares information-theoretic security (conditional entropy), performance (encryption time) and ciphertext size (against original message size) for any cryptographic algorithm, providing a balanced and objective view of cryptographic strength.

The INDEX Shannon Score is 100.

- Any Shannon Score > 100 has Information Theoretic Secrecy and is secure against quantum and AI adversaries, even with unlimited time and compute power.
- Any Shannon Score < 100 is not secure against quantum and AI compute and **will** be broken.

As an industry, we have an opportunity to reflect more accurately and objectively the cryptographic strength of existing cryptographic primitives, the new NIST PQC primitives, the OTP and Incrypteon (amongst other quantum cryptographic primitives), providing a transparent view of our current position in relation to quantum and AI threats. Check out our previous blog on how we currently mitigate risks and why we need to do more – https://incrypteon.com/cryptographic-risks-issues-mitigation-consequence/.

It has been named the Shannon Score™ because the principal security metric used for comparing the cryptographic strength of encryption systems for information theoretic security is “equivocation”. This was specified by Claude Shannon as a security index N indicating “the length of ciphertext that is secure, up to a point (the “Unicity Point”), where conditional entropy equals zero and the encipherment is breakable with great probability by an assailant with infinite time and computing resources”.

The Shannon Score™ is computed with respect to various lengths of ciphertext using the following formula:

**Shannon Score™ = k * (N / Ni) * (log M / log Mi)**

Where:

- k is a scaling constant
- N = the Unicity Distance (equivocation) of the system under evaluation, using a natural-language message, and is calculated as:

**N = H(K) / D**

- **H(K)** = the entropy of the key for the cipher, i.e., the logarithm of the number of possible keys.
- **D** = the redundancy of the language, which measures the amount of “statistical constraint” imposed by the language. For normal language messages we have used D = 0.8375 (indicating 83.75% redundancy and 16.25% information).
- **Ni** = the Unicity Distance of the index system (One-Time Pad).
- **M** = maximum throughput rate in bytes/sec for the cipher being evaluated.
- **Mi** = maximum throughput rate in bytes/sec for the index system (One-Time Pad).
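As a minimal sketch, the unicity distance N = H(K) / D from above can be computed directly. The function name and the AES-256 example below are ours, not from the post, and the units follow the post's convention of treating D as a dimensionless redundancy fraction:

```python
def unicity_distance(key_bits: float, redundancy: float = 0.8375) -> float:
    """Shannon unicity distance N = H(K) / D.

    key_bits   -- entropy of the key, H(K): log2 of the number of possible keys
    redundancy -- language redundancy D (the post uses 0.8375 for normal text)
    """
    return key_bits / redundancy

# AES-256 has a fixed 256-bit key, so H(K) = 256 irrespective of message length,
# giving a fixed, small unicity distance.
print(round(unicity_distance(256), 2))  # ≈ 305.67
```

This also illustrates the note below about fixed key sizes: since H(K) is constant for a normal cipher, N cannot grow with the ciphertext.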

Note:

- Normal ciphers have fixed Unicity Distances irrespective of Ciphertext length because they have fixed key sizes.
- One Time Pads have keys as long as the message.
- Augmented entropy ciphers can increase their Unicity Distance because they use keys that are as long as the message.

The following table shows some initial Shannon Scores based on a message size of 1,500,000 bytes, illustrating the differences between existing cryptographic primitives (RSA and AES), alongside the OTP and Archaios from Incrypteon. We can see that the AES-256 score is 0.0020499, whereas Archaios from Incrypteon scores 203.98.

In a future blog post we will expand our comparisons to include NIST PQC approved and proposed algorithms and provide their Shannon Scores, as well as a more detailed analysis across different message sizes. We will also explain why Incrypteon scores highly, given we have solved the entropy depletion problem using our Entropy Augmentation patent, and how this applies to different message sizes.

We are looking to work with cryptographers who share our vision for a No Compromise cryptographic future, openly discussing raising the cryptographic bar and objectively measuring cryptographic strength to build trust and ensure transparency.

For more information, read our blog post here – https://incrypteon.com/equivocaton-graphs-unicity-point/ and our white papers here – https://incrypteon.com/white-papers/

Please join us on our journey!

*Steve, Helder & Ian – Incrypteon Co-founders*