For people wondering whether to migrate now: the practical question isn't "is a CRQC imminent" (it isn't), it's whether your encrypted messages have a useful lifetime longer than the optimistic deployment timeline.
If you encrypt a one-off email with a 5-year confidentiality requirement, harvest-now-decrypt-later actually matters. If you're encrypting backups that get rotated every 90 days, it doesn't.
The hybrid construction (Kyber/ML-KEM + X25519) is nice precisely because it's a no-regret move — you don't lose anything by adopting early. If Kyber turns out to have a structural flaw, X25519 still protects you. If a CRQC arrives, ML-KEM still protects you. The only real cost is key/ciphertext size, which for OpenPGP isn't a hot path anyway.
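To put rough numbers on that size cost, here's a quick sketch assuming Go 1.24's crypto/mlkem package (the X25519 figures are the standard 32-byte values):

    package main

    import (
        "crypto/mlkem" // Go 1.24+
        "fmt"
    )

    func main() {
        // X25519: 32-byte public key; the "ciphertext" is just a
        // 32-byte ephemeral public key.
        fmt.Println("X25519 public key:     32 bytes")
        fmt.Println("X25519 ciphertext:     32 bytes")
        // ML-KEM-768 is over 30x larger on the wire.
        fmt.Printf("ML-KEM-768 public key: %d bytes\n", mlkem.EncapsulationKeySize768) // 1184
        fmt.Printf("ML-KEM-768 ciphertext: %d bytes\n", mlkem.CiphertextSize768)       // 1088
    }

A kilobyte or two per message matters for a TLS handshake; for an OpenPGP message it's noise.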
The interesting question is what happens to long-lived smartcard/HSM-backed keys. Those typically have a 5–10 year lifecycle and most hardware won't grow ML-KEM support without a hardware refresh. That's where I'd expect the first real compatibility headaches.
Some Hardware Security Module manufacturers were smart enough to include FPGAs in their products, which they can now use to accelerate PQC algorithms without a hardware refresh.
The trouble is that PQC already has inherent size/performance downsides, and it won't benefit from the decades of optimizations that classical algorithms had. Expect a hefty performance tax for some time.
> introduction of Kyber (aka ML-KEM or FIPS-203) as PQC encryption algorithm
Funny to read a one-line changelog entry versus the plethora of articles just a few years ago along the lines of "Quantum computers might just change our entire lives and make privacy impossible!".
A simple addition (of a not-so-simple algorithm) to the software (and a few others, e.g. OpenSSL) and voilà, we can move on with our daily lives. Cryptography and computational complexity are truly amazing.
It reminds me a lot of Y2K. The fix is simple, but finding the places where it's needed and doing it in a compatible way are absolutely non-trivial problems. The best we can hope for is the same as with Y2K: that the plethora of articles convinces businesses to invest large amounts of money in migrating algorithms, so that when a quantum computer arrives it won't be a big deal.
Does it implement the hybrid version ML-KEM-768 + X25519, or ML-KEM-768 only?
The X25519 key could remain in hardware keys for a while until manufacturers catch up.
If I understood the code correctly, it always uses the hybrid version.
> Kyber is always used in a composite scheme along with a classic ECC algorithm.
I don't know enough about either the technical nuance or the political drama, but some observers have noted that GnuPG's implementation is (deliberately?) incompatible with the IETF's standards. It's not clear why.
https://floss.social/@hko/116459621169318785
From the GnuPG perspective, RFC-9580 is a deliberate fork away from what agreement could be achieved. Basically, the faction that is now called RFC-9580 (mostly Sequoia and Proton) wanted to make a lot of changes to the existing standard, but the faction that is now called LibrePGP (mostly GnuPG and RNP) was not convinced that those changes were necessary.
Traditionally the OpenPGP standards process has been very conservative and minimalistic. GnuPG comes from that tradition. So the RFC-9580 faction created their own maximalist version of the standard and are actively promoting it as the standard.
So from a user perspective, there are two incompatible proposals out there. It's a mess, so it is better to aggressively ignore them both and maintain interoperability by sticking with RFC-4880 (OpenPGP). That might be a problem if you are for some reason still concerned about a quantum attack on cryptography, as the post-quantum work has gotten caught in this schism. It is certainly something that users need to keep in mind.
> […] and are actively promoting it as the standard.
Well:
> Category: Standards Track
* https://datatracker.ietf.org/doc/html/rfc9580
It is very hard to prevent a proposal from becoming an RFC. You have to keep up opposition for longer than the supporters keep pushing. FWIW, here is the LibrePGP proposal:
* https://datatracker.ietf.org/doc/draft-koch-librepgp/
Observing the OpenPGP schism mess, I think I have gained some insight into why some RFCs become so bloated. For example, it has recently been pointed out that there are 60 RFCs for TLS (with 31 drafts in progress) [1]. The RFC process seems to work best during the design phase. Once there is an established standard, there should be some way to force those who propose changes/extensions to provide appropriately strong justifications for them. Right now it is a popularity contest, and there will always be more people in favour of changes/extensions than people willing to endlessly fight against them. Because cryptography is so specialized and obscure, the users tend to get left out of the discussion.
[1] https://www.cs.auckland.ac.nz/~pgut001/pubs/bollocks.pdf
It is a standards proposal, which is why it's on the standards track. The point was that it is not the only standard (not *the* standard), and not a universally accepted one.
As far as I understood it: GnuPG started to implement parts of the standard before it was finished, the standard continued to evolve, and GnuPG refused to change code it had already written.
Combined with some personal drama.
it's not that simple. the new standard is a complete rewrite of the old one. they are not even compatible anymore. things the old standard used to support are not supported in the new standard. that makes any implementation of the new standard incompatible with implementations of the old one. GnuPG simply refused to stop supporting the old standard and decided to fork the standard itself. on the personal drama my interpretation is that it resulted from people backing the new standard being unhappy that GnuPG didn't go along.
my opinion is that rewriting standards like that is the result of design by committee. everyone wants to put their mark on it. designing a new standard is fine, but the new standard should have also received a new name, or it should at least have been acknowledged that the old standard still needs to be supported until enough time has passed that the old standard is no longer in use. (which could take decades if not more if we want to be realistic and consider that encrypted data at rest could linger around pretty much forever unless actively re-encoded.)
(source: i talked to a GnuPG developer)
GnuPG Version 2.5.19
The 2.5 series are improvements for 64 bit Windows and the introduction of Kyber (aka ML-KEM or FIPS-203) as PQC encryption algorithm.
The old 2.4 series reaches end-of-life in just two months.
been thinking about this a bit. someone just tell me what algo to use and i'll start using it now. are the quantum-resistant cryptos significantly slower?
Basically the idea is to use hybrid: AES-256-GCM or ChaCha20-Poly1305 for symmetric encryption (which is already PQ-safe), and ML-KEM looks set to become the standard for key encapsulation.
ML-KEM-768 is fast as an algorithm, faster than X25519 in terms of pure computation, but it uses large keys, so it has higher overheads on small payloads. Most of the time they're about equal, or the absolute time is so low it doesn't matter.
Most folks now are doing hybrid ML-KEM and X25519 to guard against undiscovered flaws in ML-KEM.
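For a concrete picture of what "hybrid" means, here is a minimal sketch in Go. Assumptions flagged up front: it uses Go 1.24's crypto/mlkem plus golang.org/x/crypto/hkdf, and the concatenation order and the "hybrid-demo-v1" label are made up for this demo; this is the general pattern, not the OpenPGP composite scheme.

    package main

    import (
        "bytes"
        "crypto/ecdh"
        "crypto/mlkem" // Go 1.24+
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/hkdf"
    )

    func main() {
        info := []byte("hybrid-demo-v1") // made-up domain-separation label

        // Recipient key material: one classical key, one post-quantum key.
        // (Error handling elided for brevity.)
        ecPriv, _ := ecdh.X25519().GenerateKey(rand.Reader)
        kemPriv, _ := mlkem.GenerateKey768()

        // Sender: ephemeral X25519 exchange plus ML-KEM encapsulation.
        ecEph, _ := ecdh.X25519().GenerateKey(rand.Reader)
        ecShared, _ := ecEph.ECDH(ecPriv.PublicKey())
        kemShared, kemCT := kemPriv.EncapsulationKey().Encapsulate()

        // Combine both secrets with a KDF: recovering the session key
        // requires breaking BOTH X25519 and ML-KEM.
        kdf := hkdf.New(sha256.New, append(ecShared, kemShared...), nil, info)
        sessionKey := make([]byte, 32)
        io.ReadFull(kdf, sessionKey)

        // Recipient: recompute both secrets and derive the same key.
        ecShared2, _ := ecPriv.ECDH(ecEph.PublicKey())
        kemShared2, _ := kemPriv.Decapsulate(kemCT)
        kdf2 := hkdf.New(sha256.New, append(ecShared2, kemShared2...), nil, info)
        sessionKey2 := make([]byte, 32)
        io.ReadFull(kdf2, sessionKey2)

        fmt.Println("derived keys match:", bytes.Equal(sessionKey, sessionKey2))
    }

The 32-byte session key then drives AES-256-GCM or ChaCha20-Poly1305 for the actual payload, which is why the KEM cost is a one-time, per-message constant.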
For people reading this, you may want to know that the NSA is allegedly trying to weaken hybrid ML-KEM and X25519 down to just ML-KEM. This is a good thing to pay attention to!
Here is a 6-part article about the topic: https://blog.cr.yp.to/20251004-weakened.html
> Here is a 6-part article about the topic: https://blog.cr.yp.to/20251004-weakened.html
* https://news.ycombinator.com/item?id=45477206
* https://news.ycombinator.com/item?id=45477206#unv_45477799
See various "NSA and IETF":
* https://news.ycombinator.com/from?site=cr.yp.to
It’s worth noting that e.g. the Go stdlib has this hybrid construction built-in via crypto/hpke.
For something like PGP, any performance difference wouldn't matter. There is one message and the key agreement is done once. As long as things are fast enough to be imperceptible to the user we are fine.
I believe ML-KEM is the standard algorithm for post-quantum asymmetric encryption. I think it's slower mainly because there's not good hardware support, but it shouldn't be a big deal because most encryption is hybrid where you only use the asymmetric crypto briefly to share a secret you can use for symmetric cryptography.
ML-KEM is based on a lattice problem called "Learning with Errors", and there are similar lattice-based algorithms which have no known quantum speedup. Most traditional asymmetric encryption algorithms are based on number-theoretic assumptions like the discrete logarithm problem or the RSA assumption, which are broken by Shor's algorithm.
Symmetric cryptography (AES and the SHA hash functions) is post-quantum resistant for now. Grover's algorithm technically cuts the security level in half, but it doesn't parallelize, so practically there is no known good quantum attack, and cryptographers and standards agencies tend not to worry about it. You can keep using those.
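To make the halving concrete (back-of-the-envelope, using Grover's textbook square-root query count): brute-forcing a 256-bit key classically takes on the order of 2^256 trials, while Grover needs about sqrt(2^256) = 2^128 sequential quantum iterations; a 128-bit key would drop to ~2^64, which is why AES-256 is the usual post-quantum recommendation.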
[edit: according to the sister comment posted simultaneously, ML-KEM is faster than X25519. good to know!]
cool, now my emails that nobody's reading anyway are safe from quantum computers that don't exist yet
it's mostly to make clowns repeating "It's not PQ secure therefore bad" happy I think