Digital signature that is only verifiable by one specific person
I would like to digitally sign a message in such a way that the signature can only be verified by one specific person. Simply encrypting the signature will not work, because that person could then decrypt and publish the signature and everyone else would be able to verify it.
Does anyone know of a scheme to accomplish this? Or any other advice?
In my situation, the person verifying the signature has an incentive not to publish their private keys.
Tags: public-key, signature, reference-request, dsa
Comments:
– mat (Jun 13 at 9:06): Does it have to be an asymmetric signature or would a MAC do?
– jww (Jun 13 at 20:13): I've never needed this type of signature. Could you provide some details of your use case? It is merely curiosity on my part.
– user207421 (Jun 14 at 1:34): Your 'doesn't work' case applies to all possible solutions. For example you could sign with the public key and verify with the person's private key, APIs permitting, but then the person could leak the private key and thus enable anybody to verify.
– Tezra (Jun 14 at 19:44): "the person verifying the signature has an incentive not to publish their keys" So does this mean the person signing has access to the verifier's keys?
– Max Barraclough (Jun 16 at 12:30): I suggest editing the question to further emphasise that the verifier is not trusted.
3 Answers
What you seem to be looking for is deniable authentication.
This is actually a somewhat stronger property than what you're asking for: it guarantees that the recipient (let's call him Bob) cannot cryptographically convince anyone else that the sender (let's call her Alice) signed the message, even if he discloses all his private keys, simply because the protocol guarantees that knowing Bob's (and/or Alice's) private key is both necessary to verify the signature and sufficient to forge it. So Bob, seeing a message with a valid signature and knowing that he didn't create it himself, can be confident that Alice must have sent it — but he cannot use the signature to convince anyone else of that, since he could've just as well created the signature himself.
The simplest way to achieve this kind of authenticated but repudiable communication between two parties is to use a symmetric-key authenticated encryption scheme (or, if message privacy is for some reason not required or desired, just a plain MAC). With these schemes, Alice and Bob know the same secret key that is used both to authenticate the messages and to verify their authenticity. Thus, trivially, anything Alice can do (such as to create a valid authenticated message claiming to be from Alice to Bob) Bob — or anyone else who knows the secret key — can do as well.
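The symmetric-key approach can be sketched with Python's standard hmac module (a minimal illustration only; the shared key is assumed to have been exchanged securely out of band):

```python
import hashlib
import hmac
import secrets

# Shared secret key known only to Alice and Bob; how it was exchanged
# is assumed to have happened out of band.
key = secrets.token_bytes(32)

def make_tag(key: bytes, message: bytes) -> bytes:
    """Authenticate a message with HMAC-SHA256."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison of the expected and received tags."""
    return hmac.compare_digest(make_tag(key, message), tag)

# Alice authenticates a message to Bob.
msg = b"meet me at noon"
tag = make_tag(key, msg)
assert verify_tag(key, msg, tag)

# Crucially, verification uses the same key and the same computation as
# creation, so Bob could have produced an identical tag himself -- which
# is exactly what makes the authentication deniable to third parties.
bobs_forgery = make_tag(key, b"a message Alice never sent")
assert verify_tag(key, b"a message Alice never sent", bobs_forgery)
```

Note that the forgery at the end is the point, not a flaw: anything Alice can authenticate, Bob can too, so the tag convinces only Bob.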
The main drawbacks of such symmetric-key schemes are that they require a separate secret key for each pair of communicating parties (which could become cumbersome if there are potentially many such parties) and that the secret keys must somehow be securely shared between each pair of parties. This would be easy if we had an encrypted and authenticated secure channel between each pair of parties, but since that's exactly what we're trying to set up here, that creates a kind of a chicken-and-egg problem.
One way around these issues is to use public-key encryption to share the secret keys. In particular, we can use the Diffie–Hellman key exchange to establish a shared secret between any two parties, as long as they know each other's public keys (and, of course, their own corresponding private keys).
The Diffie–Hellman key exchange is often illustrated as an interactive protocol, but actually the only interaction it needs is for each party to send their public key to the other (which they may do in advance, e.g. by publishing them on some semi-trusted central key server). After that, any time one party (say, again, Alice) wants to send a message to another party (say, Bob), she can just combine her private key with Bob's public key to obtain a secret value known only to her and Bob, and then use this secret (possibly after feeding it through a suitable KDF) as the symmetric secret key for an authenticated encryption scheme as described above.
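The noninteractive exchange can be sketched in a few lines of Python (toy parameters chosen for readability; any real deployment would use a standardized group or X25519 via an established library):

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. The Mersenne prime is chosen only to
# keep the arithmetic easy to follow; these parameters are NOT secure.
p = 2**127 - 1
g = 3

# Each party picks a private exponent and publishes g^x mod p once,
# e.g. on a key server. No further interaction is needed.
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Later, each side combines its own private key with the other's public
# key and arrives at the same shared secret, noninteractively.
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)
assert alice_shared == bob_shared

# Run the raw secret through a KDF (SHA-256 here as a stand-in) to get a
# symmetric key for the authenticated-encryption scheme described above.
sym_key = hashlib.sha256(alice_shared.to_bytes(16, "big")).digest()
```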
Anyway, for practical use, you don't actually need to implement any of this yourself, since there are plenty of existing implementations of such schemes. For example, the NaCl library (and its various derivatives, such as libsodium) provides the crypto_secretbox function for symmetric-key authenticated encryption and the crypto_box function for repudiable authenticated public-key encryption. If you don't particularly need to roll your own encryption scheme, I would encourage you to use those, or some other similar established and well-studied implementation.
(One possible reason why you might nonetheless want to roll your own is nonce misuse resistance. The NaCl functions described above require you to assign each message a unique nonce, and their security can be badly compromised if you ever reuse the same nonce for two distinct messages encrypted with the same secret key. There are authenticated encryption schemes based on the SIV construction that are much more resistant to such nonce misuse, such as AES-SIV, AES-GCM-SIV or even HS1-SIV, but NaCl crypto_box does not currently support them. If you wanted, you could reimplement the "hashed Diffie–Hellman" part of crypto_box using crypto_scalarmult and use the resulting key with some SIV-style symmetric encryption scheme, but that requires a lot more effort and care than just using crypto_box as it is.)
Ps. On a slight tangent, note that Diffie–Hellman alone doesn't entirely solve the key distribution problem, since it still relies on the parties being able to share their public keys without anyone tampering with them. In particular, if Alice and Bob are trying to exchange public keys over a channel controlled by a middle-man Mallory, he can just replace Alice's and Bob's public keys with his own, and thereafter intercept any messages encrypted with those keys, decrypting and re-encrypting each message before passing it on.
(Of course, if Mallory ever stops doing that, Alice and Bob will find themselves unable to communicate until and unless they re-exchange public keys. But to Alice and Bob, that just looks as if someone just started attacking their communications by intercepting their messages and replacing them with invalid forgeries. Without some alternative communications channel, there's no way for Alice or Bob to know whether an attack just started or whether one just stopped. And even if they do somehow figure it out, it may be too late.)
One way to try to solve this problem is to set up some kind of a public key infrastructure where third parties can sign Alice and Bob's public keys in order to vouch for their correctness. But setting up a reliable PKI is far from a trivial task, since at some point you still need to trust someone.
Comments:
– Squeamish Ossifrage (Jun 13 at 14:01): It may be amusing to note that what Whit Diffie and Martin Hellman originally proposed in their seminal 1976 "New Directions" paper was not, in fact, an interactive key agreement system, but rather a system for putting your public key in the telephone book so that you can compute a shared secret for symmetric cryptography with anyone in town noninteractively.
– grawity (Jun 14 at 7:34): @SqueamishOssifrage: huh, I'd never known that, but didn't "ElGamal" encryption in PGP end up achieving that exact thing with DH?
– Cort Ammon (Jun 14 at 15:08): Interesting. So many crypto proofs are built around just knowing. Such a deniable authentication is a proof around doing.
– jww (Jun 16 at 10:58): Deniability schemes don't work in practice. Just ask Chelsea Manning, who was using OTR for deniability. I would not risk my freedom or life on a deniability scheme. I would do something like anonymous upload or a Tor service.
– Squeamish Ossifrage (Jun 16 at 17:44): @jww 'Deniability' doesn't mean that the mere use of a MAC on a message instead of a signature is a get-out-of-jail-free card for the sender. It only means that the mere use of a MAC by the receiver is not enough, on its own, to convince a third party that the sender sent it. The third party might reasonably accept the receiver's claim on the basis of other evidence. And, of course, that's not a reason to give them stronger evidence by using signatures that are designed for third-party verifiability, unless you want to guarantee third-party verifiability in the first place.
Let's say Alice wants to send Bob a sensitive message. She wants to prove to Bob that it came from her, but she doesn't want Bob to be able to prove that to anyone else.
A MAC is a good way of doing this. If Alice and Bob share a MAC key (and only they have it) then Bob will know any message authenticated with that MAC key came from Alice, since he knows he didn't make it, and she is the only other person who could have.
However, there would be no way for a third party to tell the difference between a message from Alice and a forgery from Bob, since Bob is just as capable of creating the MACs as Alice.
A ring signature would also work, and it wouldn't require them to share a secret. Here, Alice would make a signature which proves that the message came from Alice OR Bob. Bob knows he didn't sign it, but he'd have a hard time trying to convince a third party about that.
"the person verifying the signature has an incentive not to publish their keys"
I don't see any way around publishing a public key. Other than possession of a secret, what could differentiate the designated 'verifier' from others?
With that said I believe we can use a public/private key-pair, such that only the 'verifier' has the private key, to achieve the properties you want.
We can't simply have senders take a hash of their message and encrypt it with the well-known public key to generate the signature. This would allow anyone to verify the signature, as anyone can hash a message and encrypt the result using the public key. This 'generate-and-compare' attack is a show-stopper even though the attacker doesn't have the private key. (Indeed, the private key is of no real value at all.)
Fortunately we should be able to fix this 'reversibility' problem by introducing non-determinism, in the form of a random 'nonce' number.
When someone wants to sign a message, they first pair the input message with a random nonce, then they encrypt this pair using the public key. The resulting encrypted blob can be used as the signature. (Unfortunately the length of the signature will roughly equal the length of the original message.)
The 'verifier' can easily verify integrity: they decrypt the encrypted blob using the private key, discard the nonce component, and compare the other (message) component against the unencrypted message.
No-one else can make use of the encrypted blob, however: without the private key they cannot decrypt it, and thanks to the nonce they cannot use the generate-and-compare approach, since their randomly chosen nonce will be different and will produce a completely different encrypted blob.
(Someone more knowledgeable than me might know whether this approach has a name, or perhaps a fatal flaw that I've missed.)
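This scheme can be sketched with textbook RSA and tiny, hypothetical parameters (completely insecure, purely to illustrate the message-plus-nonce idea; a real implementation would use a padded, vetted encryption scheme):

```python
import secrets

# Textbook RSA with toy parameters -- wildly insecure, shown only to
# illustrate encrypting (message, nonce) under the verifier's public key.
p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # verifier's public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # verifier's private exponent (Py 3.8+)

def make_blob(message: int) -> int:
    """Signer: pair the message with a random nonce, then encrypt the
    pair under the verifier's PUBLIC key; the blob is the 'signature'."""
    nonce = secrets.randbelow(10)  # toy-sized nonce
    return pow(message * 10 + nonce, e, n)

def verify(blob: int, message: int) -> bool:
    """Verifier: decrypt with the PRIVATE key, discard the nonce, and
    compare the remainder with the claimed message."""
    return pow(blob, d, n) // 10 == message

m = 42                             # must fit in the modulus for this toy
blob = make_blob(m)
assert verify(blob, m)
assert not verify(blob, 43)
```

Because the nonce randomizes the plaintext, two blobs for the same message usually differ, so an outsider cannot confirm a guessed message by re-encrypting it and comparing.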
Comments:
– Jesse Busman (Jun 15 at 19:59): Sorry, I meant that they have an incentive not to publish their private keys.
– Squeamish Ossifrage (Jun 15 at 21:11): If only the verifier's key pair is involved and nobody else's, how does the verifier distinguish whether it was the prover who signed a message and not some schmuck off the street who signed it?
– Max Barraclough (Jun 16 at 12:34): @SqueamishOssifrage You're right of course - I misread the question as pertaining to integrity rather than proving provenance. I agree with others here pointing at the well-documented 'Off-The-Record' (OTR) protocol, which implements deniable authentication and forward secrecy. It leverages time in an interesting way, publishing old secrets as it goes, deliberately enabling forgery 'after the fact'. In short then, just adopt OTR. Edit: I believe this would introduce a requirement that it's always possible to send another message, though, or you'd presumably lose deniability.
add a comment
|
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "281"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcrypto.stackexchange.com%2fquestions%2f71271%2fdigital-signature-that-is-only-verifiable-by-one-specific-person%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
3 Answers
3
active
oldest
votes
3 Answers
3
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
What you seem to be looking for is deniable authentication.
This is actually a somewhat stronger property than what you're asking for: it guarantees that the recipient (let's call him Bob) cannot cryptographically convince anyone else that the sender (let's call her Alice) signed the message, even if he discloses all his private keys, simply because the protocol guarantees that knowing Bob's (and/or Alice's) private key is both necessary to verify the signature and sufficient to forge it. So Bob, seeing a message with a valid signature and knowing that he didn't create it himself, can be confident that Alice must have sent it — but he cannot use the signature to convince anyone else of that, since he could've just as well created the signature himself.
The simplest way to achieve this kind of authenticated but repudiable communication between two parties is to use a symmetric-key authenticated encryption scheme (or, if message privacy is for some reason not required or desired, just a plain MAC). With these schemes, Alice and Bob know the same secret key that is used both to authenticate the messages and to verify their authenticity. Thus, trivially, anything Alice can do (such as to create a valid authenticated message claiming to be from Alice to Bob) Bob — or anyone else who knows the secret key — can do as well.
The main drawbacks of such symmetric-key schemes are that they require a separate secret key for each pair of communicating parties (which could become cumbersome if there are potentially many such parties) and that the secret keys must somehow be securely shared between each pair of parties. This would be easy if we had an encrypted and authenticated secure channel between each pair of parties, but since that's exactly what we're trying to set up here, that creates a kind of a chicken-and-egg problem.
One way around these issues is to use public-key encryption to share the secret keys. In particular, we can use the Diffie–Hellman key exchange to establish a shared secret between any two parties, as long as they know each other's public keys (and, of course, their own corresponding private keys).
The Diffie–Hellman key exchange is often illustrated as an interactive protocol, but actually the only interaction it needs is for each party to send their public key to the other (which they may do in advance, e.g. by publishing them on some semi-trusted central key server). After that, any time one party (say, again, Alice) wants to send a message to another party (say, Bob), she can just combine her private key with Bob's public key to obtain a secret value known only to her and Bob, and then use this secret (possibly after feeding it through a suitable KDF) as the symmetric secret key for an authenticated encryption scheme as described above.
Anyway, for practical use, you don't actually need to implement any of this yourself, since there are plenty of existing implementations of such schemes. For example, the NaCl library (and its various derivatives, such as libsodium) provides the crypto_secretbox
function for symmetric-key authenticated encryption and the crypto_box
function for repudiable authenticated public-key encryption. If you don't particularly need to roll your own encryption scheme, I would encourage you to use those, or some other similar established and well studied implementation.
(One possible reason why you might want to do that is for nonce misuse resistance. The NaCl functions described above require you to assign each message a unique nonce, and its security can be badly compromised if you ever reuse the same nonce for two distinct messages encrypted with the same secret key. There are authenticated encryption schemes based on the SIV construction that are much more resistant to such nonce misuse, such as AES-SIV, AES-GCM-SIV or even HS1-SIV, but NaCl crypto_box
does not currently support them. If you wanted, you could reimplement the "hashed Diffie–Hellman" part of crypto_box
using crypto_scalarmult
and use the resulting key with some SIV-style symmetric encryption scheme, but that requires a lot more effort and care than just using crypto_box
as it is.)
Ps. On a slight tangent, note that Diffie–Hellman alone doesn't entirely solve the key distribution problem, since it still relies on the parties being able to share their public keys without anyone tampering with them. In particular, if Alice and Bob are trying to exchange public keys over a channel controlled by a middle-man Mallory, he can just replace Alice's and Bob's public keys with his own, and thereafter intercept any messages encrypted with those keys, decrypting and re-encrypting each message before passing it on.
(Of course, if Mallory ever stops doing that, Alice and Bob will find themselves unable to communicate until and unless they re-exchange public keys. But to Alice and Bob, that just looks as if someone just started attacking their communications by intercepting their messages and replacing them with invalid forgeries. Without some alternative communications channel, there's no way for Alice or Bob to know whether an attack just started or whether one just stopped. And even if they do somehow figure it out, it may be too late.)
One way to try to solve this problem is to set up some kind of a public key infrastructure where third parties can sign Alice and Bob's public keys in order to vouch for their correctness. But setting up a reliable PKI is far from a trivial task, since at some point you still need to trust someone.
$endgroup$
13
$begingroup$
It may be amusing to note that what Whit Diffie and Martin Hellman originally proposed in their seminal 1976 new directions paper was not, in fact, an interactive key agreement system, but rather a system for putting your public key in the telephone book so that you can compute a shared secret for symmetric cryptography with anyone in town noninteractively.
$endgroup$
– Squeamish Ossifrage
Jun 13 at 14:01
$begingroup$
@SqueamishOssifrage: huh I've never known that, but didn't "ElGamal" encryption in PGP end up achieving that exact thing with DH?
$endgroup$
– grawity
Jun 14 at 7:34
$begingroup$
Interesting. So many crypto proofs are built around just knowing. Such a deniable authentication is a proof around doing.
$endgroup$
– Cort Ammon
Jun 14 at 15:08
$begingroup$
Deniability schemes don't work in practice. Just ask Chelsea Manning, who was using OTR for deniability. I would not risk my freedom or life on a deniability scheme. I would do something like anonymous upload or Tor service.
$endgroup$
– jww
Jun 16 at 10:58
2
$begingroup$
@jww ‘Deniability’ doesn't mean that the mere use of a MAC on a message instead of a signature is a get-out-of-jail-free card for the sender. It only means that the mere use of a MAC by the receiver is not enough, on its own, to convince a third party that the sender sent it. The third party might reasonably accept the receiver's claim on the basis of other evidence. And, of course, that's not a reason to give them stronger evidence by using signatures that are designed for third-party verifiability, unless you want to guarantee third-party verifiability in the first place.
$endgroup$
– Squeamish Ossifrage
Jun 16 at 17:44
add a comment
|
$begingroup$
What you seem to be looking for is deniable authentication.
This is actually a somewhat stronger property than what you're asking for: it guarantees that the recipient (let's call him Bob) cannot cryptographically convince anyone else that the sender (let's call her Alice) signed the message, even if he discloses all his private keys, simply because the protocol guarantees that knowing Bob's (and/or Alice's) private key is both necessary to verify the signature and sufficient to forge it. So Bob, seeing a message with a valid signature and knowing that he didn't create it himself, can be confident that Alice must have sent it — but he cannot use the signature to convince anyone else of that, since he could've just as well created the signature himself.
The simplest way to achieve this kind of authenticated but repudiable communication between two parties is to use a symmetric-key authenticated encryption scheme (or, if message privacy is for some reason not required or desired, just a plain MAC). With these schemes, Alice and Bob know the same secret key that is used both to authenticate the messages and to verify their authenticity. Thus, trivially, anything Alice can do (such as to create a valid authenticated message claiming to be from Alice to Bob) Bob — or anyone else who knows the secret key — can do as well.
The main drawbacks of such symmetric-key schemes are that they require a separate secret key for each pair of communicating parties (which could become cumbersome if there are potentially many such parties) and that the secret keys must somehow be securely shared between each pair of parties. This would be easy if we had an encrypted and authenticated secure channel between each pair of parties, but since that's exactly what we're trying to set up here, that creates a kind of a chicken-and-egg problem.
One way around these issues is to use public-key encryption to share the secret keys. In particular, we can use the Diffie–Hellman key exchange to establish a shared secret between any two parties, as long as they know each other's public keys (and, of course, their own corresponding private keys).
The Diffie–Hellman key exchange is often illustrated as an interactive protocol, but actually the only interaction it needs is for each party to send their public key to the other (which they may do in advance, e.g. by publishing them on some semi-trusted central key server). After that, any time one party (say, again, Alice) wants to send a message to another party (say, Bob), she can just combine her private key with Bob's public key to obtain a secret value known only to her and Bob, and then use this secret (possibly after feeding it through a suitable KDF) as the symmetric secret key for an authenticated encryption scheme as described above.
Anyway, for practical use, you don't actually need to implement any of this yourself, since there are plenty of existing implementations of such schemes. For example, the NaCl library (and its various derivatives, such as libsodium) provides the crypto_secretbox function for symmetric-key authenticated encryption and the crypto_box function for repudiable authenticated public-key encryption. If you don't particularly need to roll your own encryption scheme, I would encourage you to use those, or some other similar established and well-studied implementation.
(One possible reason why you might nonetheless want to roll your own is nonce misuse resistance. The NaCl functions described above require you to assign each message a unique nonce, and their security can be badly compromised if you ever reuse the same nonce for two distinct messages encrypted with the same secret key. There are authenticated encryption schemes based on the SIV construction that are much more resistant to such nonce misuse, such as AES-SIV, AES-GCM-SIV or even HS1-SIV, but NaCl crypto_box does not currently support them. If you wanted, you could reimplement the "hashed Diffie–Hellman" part of crypto_box using crypto_scalarmult and use the resulting key with some SIV-style symmetric encryption scheme, but that requires a lot more effort and care than just using crypto_box as it is.)
Ps. On a slight tangent, note that Diffie–Hellman alone doesn't entirely solve the key distribution problem, since it still relies on the parties being able to share their public keys without anyone tampering with them. In particular, if Alice and Bob are trying to exchange public keys over a channel controlled by a middle-man Mallory, he can just replace Alice's and Bob's public keys with his own, and thereafter intercept any messages encrypted with those keys, decrypting and re-encrypting each message before passing it on.
(Of course, if Mallory ever stops doing that, Alice and Bob will find themselves unable to communicate until and unless they re-exchange public keys. But to Alice and Bob, that just looks as if someone just started attacking their communications by intercepting their messages and replacing them with invalid forgeries. Without some alternative communications channel, there's no way for Alice or Bob to know whether an attack just started or whether one just stopped. And even if they do somehow figure it out, it may be too late.)
One way to try to solve this problem is to set up some kind of a public key infrastructure where third parties can sign Alice and Bob's public keys in order to vouch for their correctness. But setting up a reliable PKI is far from a trivial task, since at some point you still need to trust someone.
It may be amusing to note that what Whit Diffie and Martin Hellman originally proposed in their seminal 1976 "New Directions" paper was not, in fact, an interactive key agreement system, but rather a system for putting your public key in the telephone book so that you can compute a shared secret for symmetric cryptography with anyone in town noninteractively.
– Squeamish Ossifrage, Jun 13 at 14:01

@SqueamishOssifrage: huh, I've never known that, but didn't "ElGamal" encryption in PGP end up achieving that exact thing with DH?
– grawity, Jun 14 at 7:34

Interesting. So many crypto proofs are built around just knowing. Such a deniable authentication is a proof around doing.
– Cort Ammon, Jun 14 at 15:08

Deniability schemes don't work in practice. Just ask Chelsea Manning, who was using OTR for deniability. I would not risk my freedom or life on a deniability scheme. I would do something like anonymous upload or a Tor service.
– jww, Jun 16 at 10:58

@jww 'Deniability' doesn't mean that the mere use of a MAC on a message instead of a signature is a get-out-of-jail-free card for the sender. It only means that the mere use of a MAC by the receiver is not enough, on its own, to convince a third party that the sender sent it. The third party might reasonably accept the receiver's claim on the basis of other evidence. And, of course, that's not a reason to give them stronger evidence by using signatures that are designed for third-party verifiability, unless you want to guarantee third-party verifiability in the first place.
– Squeamish Ossifrage, Jun 16 at 17:44
– answered Jun 13 at 13:36 by Ilmari Karonen
Let's say Alice wants to send Bob a sensitive message: she wants to prove to Bob that it came from her, but she doesn't want Bob to be able to prove that to anyone else.
A MAC is a good way of doing this. If Alice and Bob share a MAC key (and only they have it) then Bob will know any message authenticated with that MAC key came from Alice, since he knows he didn't make it, and she is the only other person who could have.
However, there would be no way for a third party to tell the difference between a message from Alice and a forgery from Bob, since Bob is just as capable of creating the MACs as Alice.
A ring signature would also work, and it wouldn't require them to share a secret. Here, Alice would make a signature which proves that the message came from Alice OR Bob. Bob knows he didn't sign it, but he'd have a hard time trying to convince a third party about that.
– answered Jun 13 at 10:02 by JvH; edited Jun 15 at 21:16 by Max Barraclough
the person verifying the signature has an incentive not to publish their keys
I don't see any way around publishing a public key. Other than possession of a secret, what could differentiate the designated 'verifier' from others?
With that said I believe we can use a public/private key-pair, such that only the 'verifier' has the private key, to achieve the properties you want.
We can't simply have senders take a hash of their message and encrypt it with the well-known public key to generate the signature. This would allow anyone to verify the signature, as anyone can hash a message and encrypt the result using the public key. This 'generate-and-compare' attack is a show-stopper even though the attacker doesn't have the private key. (Indeed, the private key is of no real value at all.)
Fortunately we should be able to fix this 'reversibility' problem by introducing non-determinism, in the form of a random 'nonce' number.
When someone wants to sign a message, they first pair the input message with a random nonce, then they encrypt this pair using the public key. The resulting encrypted blob can be used as the signature. (Unfortunately the length of the signature will roughly equal the length of the original message.)
The 'verifier' can easily verify integrity: they decrypt the encrypted blob using the private key, discard the nonce component, and compare the other (message) component against the unencrypted message.
No-one else can make use of the encrypted blob, however: without the private key they cannot decrypt it, and due to the use of the nonce they cannot use the generate-and-compare approach; their random nonce will be different, so they will generate a completely different encrypted blob.
(Someone more knowledgeable than me might know whether this approach has a name, or perhaps a fatal flaw that I've missed.)
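A toy sketch of the scheme described above, with tiny ElGamal encryption standing in for "encrypt with the verifier's public key" (the answer names no particular cipher, and these parameters are utterly insecure; everything here is for illustration only, not a vetted construction):

```python
import hashlib
import secrets

# Toy ElGamal over a small group; a stand-in for real public-key encryption.
p = 2**127 - 1   # Mersenne prime, far too small for real use
g = 3

def keygen():
    x = secrets.randbelow(p - 2) + 1        # verifier's private key
    return x, pow(g, x, p)                  # (private, public)

def encrypt(pub: int, m_int: int):
    r = secrets.randbelow(p - 2) + 1
    return pow(g, r, p), (m_int * pow(pub, r, p)) % p

def decrypt(priv: int, ct) -> int:
    c1, c2 = ct
    return (c2 * pow(pow(c1, priv, p), -1, p)) % p

def sign(pub: int, message: bytes):
    """Pair a (truncated) hash of the message with a random nonce and
    encrypt the pair under the verifier's public key, as sketched above."""
    digest = hashlib.sha256(message).digest()[:8]
    nonce = secrets.token_bytes(7)
    return encrypt(pub, int.from_bytes(digest + nonce, "big"))

def verify(priv: int, message: bytes, sig) -> bool:
    """Decrypt the blob, discard the nonce component, compare the hash."""
    recovered = decrypt(priv, sig).to_bytes(15, "big")
    return recovered[:8] == hashlib.sha256(message).digest()[:8]

priv, pub = keygen()
sig = sign(pub, b"hello")
assert verify(priv, b"hello", sig)
assert not verify(priv, b"tampered", sig)
# Two "signatures" over the same message differ, so an outsider cannot
# generate-and-compare without the private key:
assert sign(pub, b"hello") != sig
```

As the comments below point out, this only provides integrity to the key holder, not proof of who sent the message: anyone can produce such a blob.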
Sorry, I meant that they have an incentive not to publish their private keys.
– Jesse Busman, Jun 15 at 19:59

If only the verifier's key pair is involved and nobody else's, how does the verifier distinguish whether it was the prover who signed a message and not some schmuck off the street who signed it?
– Squeamish Ossifrage, Jun 15 at 21:11

@SqueamishOssifrage You're right, of course - I misread the question as pertaining to integrity rather than proving provenance. I agree with others here pointing at the well-documented 'Off-The-Record' (OTR) protocol, which implements deniable authentication and forward secrecy. It leverages time in an interesting way, publishing old secrets as it goes, deliberately enabling forgery 'after the fact'. In short then, just adopt OTR. Edit: I believe this would introduce a requirement that it's always possible to send another message, though, or you'd presumably lose deniability.
– Max Barraclough, Jun 16 at 12:34
$begingroup$
the person verifying the signature has an incentive not to publish their keys
I don't see any way around publishing a public key. Other than possession of a secret, what could differentiate the designated 'verifier' from others?
With that said I believe we can use a public/private key-pair, such that only the 'verifier' has the private key, to achieve the properties you want.
We can't simply have senders take a hash of their message and encrypt it with the well-known public key, to generate the signature. This would allow anyone to verify the signature, as anyone can hash a message and encrypt the result using the public key. This 'generate-and-compare' is a show-stopper despite that the attacker doesn't have the private key. (Indeed, the private key is of no real value at all.)
$begingroup$
the person verifying the signature has an incentive not to publish their keys
I don't see any way around publishing a public key. Other than possession of a secret, what could differentiate the designated 'verifier' from others?
With that said I believe we can use a public/private key-pair, such that only the 'verifier' has the private key, to achieve the properties you want.
We can't simply have senders take a hash of their message and encrypt it with the well-known public key to generate the signature. This would allow anyone to verify the signature, as anyone can hash a message and encrypt the result using the public key. This 'generate-and-compare' attack is a show-stopper even though the attacker doesn't have the private key. (Indeed, the private key is of no real value at all.)
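To make the attack concrete, here is a toy sketch (textbook RSA with tiny, insecure parameters; the name `naive_sign` is mine, purely for illustration). Because the "signature" is a deterministic function of the message and the public key alone, anyone can recompute it and compare:

```python
import hashlib

# Toy RSA public key -- far too small to be secure; illustration only.
n, e = 10007 * 10009, 17

def naive_sign(message: bytes) -> int:
    """Deterministic: hash the message, then encrypt with the PUBLIC key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, e, n)   # textbook RSA encryption of the hash

# The attacker needs no secret at all: they just recompute and compare.
sig = naive_sign(b"pay Alice 100")
print(naive_sign(b"pay Alice 100") == sig)   # True -- anyone can verify
```

Since every holder of the public key can reproduce the signature exactly, the designated verifier has no special status, which is what the nonce below is meant to fix.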
Fortunately we should be able to fix this 'reversibility' problem by introducing non-determinism, in the form of a random 'nonce' number.
When someone wants to sign a message, they first pair the input message with a random nonce, then they encrypt this pair using the public key. The resulting encrypted blob can be used as the signature. (Unfortunately the length of the signature will roughly equal the length of the original message.)
The 'verifier' can easily verify integrity: they decrypt the encrypted blob using the private key, discard the nonce component, and compare the other (message) component against the unencrypted message.
No one else can make use of the encrypted blob, however: without the private key they cannot decrypt it, and thanks to the nonce they cannot use the generate-and-compare approach, since their randomly chosen nonce will differ and so produce a completely different encrypted blob.
(Someone more knowledgeable than me might know whether this approach has a name, or perhaps a fatal flaw that I've missed.)
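The whole scheme can be sketched as follows. This is a toy model only (textbook RSA with tiny, insecure parameters, and a naive pack-message-and-nonce-into-one-integer encoding); the names `sign` and `verify` are mine:

```python
import os

# Toy RSA key pair -- far too small to be secure; illustration only.
p, q = 10007, 10009
n = p * q                            # public modulus
e = 17                               # public exponent (public key: e, n)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (verifier's secret; Python 3.8+)

NONCE_BITS = 16

def sign(message: int, nonce: int) -> int:
    """Encrypt the (message, nonce) pair under the verifier's PUBLIC key."""
    assert 0 <= nonce < 2 ** NONCE_BITS
    block = (message << NONCE_BITS) | nonce   # pack the pair into one integer
    assert block < n                          # must fit in one RSA block
    return pow(block, e, n)                   # textbook RSA encryption

def verify(message: int, blob: int) -> bool:
    """Decrypt with the PRIVATE key, discard the nonce, compare messages."""
    block = pow(blob, d, n)
    return (block >> NONCE_BITS) == message

msg = 42
nonce = int.from_bytes(os.urandom(2), "big")
blob = sign(msg, nonce)
print(verify(msg, blob))        # True
print(verify(msg + 1, blob))    # False
# Different nonces give different blobs for the same message,
# which is what defeats the generate-and-compare attack:
print(sign(msg, 1) != sign(msg, 2))   # True
```

Note that only the holder of `d` can run `verify`, matching the goal of a single designated verifier.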
$endgroup$
answered Jun 15 at 19:07
Max Barraclough
$begingroup$
Sorry, I meant that they have an incentive not to publish their private keys.
$endgroup$
– Jesse Busman
Jun 15 at 19:59
$begingroup$
If only the verifier's key pair is involved and nobody else's, how does the verifier distinguish whether it was the prover who signed a message and not some schmuck off the street who signed it?
$endgroup$
– Squeamish Ossifrage
Jun 15 at 21:11
$begingroup$
@SqueamishOssifrage You're right of course - I misread the question as pertaining to integrity rather than proving provenance. I agree with others here pointing at the well-documented 'Off-The-Record' (OTR) protocol, which implements deniable authentication and forward secrecy. It leverages time in an interesting way, publishing old secrets as it goes, deliberately enabling forgery 'after the fact'. In short then, just adopt OTR. Edit: I believe this would introduce a requirement that it's always possible to send another message, though, or you'd presumably lose deniability.
$endgroup$
– Max Barraclough
Jun 16 at 12:34
$begingroup$
Does it have to be an asymmetric signature or would a MAC do?
$endgroup$
– mat
Jun 13 at 9:06
$begingroup$
I've never needed this type of signature. Could you provide some details of your use case? It is merely curiosity on my part.
$endgroup$
– jww
Jun 13 at 20:13
$begingroup$
Your 'doesn't work' case applies to all possible solutions. For example you could sign with the public key and verify with the person's private key, APIs permitting, but then the person could leak the private key and thus enable anybody to verify.
$endgroup$
– user207421
Jun 14 at 1:34
$begingroup$
"the person verifying the signature has an incentive not to publish their keys" So does this mean the person signing has access to the verifiers keys?
$endgroup$
– Tezra
Jun 14 at 19:44
$begingroup$
I suggest editing the question to further emphasise that the verifier is not trusted.
$endgroup$
– Max Barraclough
Jun 16 at 12:30