So far I've seen two different approaches to RSA signing with OpenSSL:
With EVP_PKEY_sign:
/* NB: assumes signing_key is an RSA private key and md already points to
 * the SHA-256 digest (mdlen bytes) to be signed */
ctx = EVP_PKEY_CTX_new(signing_key, NULL /* no engine */);
if (!ctx)
    /* Error occurred */
if (EVP_PKEY_sign_init(ctx) <= 0)
    /* Error */
if (EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_PADDING) <= 0)
    /* Error */
if (EVP_PKEY_CTX_set_signature_md(ctx, EVP_sha256()) <= 0)
    /* Error */

/* Determine buffer length */
if (EVP_PKEY_sign(ctx, NULL, &siglen, md, mdlen) <= 0)
    /* Error */

sig = OPENSSL_malloc(siglen);
if (!sig)
    /* malloc failure */
if (EVP_PKEY_sign(ctx, sig, &siglen, md, mdlen) <= 0)
    /* Error */
With EVP_DigestSignInit:
/* NB: assumes mdctx is a fresh EVP_MD_CTX, key is the signing EVP_PKEY,
 * msg is the NUL-terminated message, and sig/slen are output parameters */
if (1 != EVP_DigestSignInit(mdctx, NULL, EVP_sha256(), NULL, key))
    goto err;
if (1 != EVP_DigestSignUpdate(mdctx, msg, strlen(msg)))
    goto err;
/* First call with a NULL buffer obtains the required signature length */
if (1 != EVP_DigestSignFinal(mdctx, NULL, slen))
    goto err;
if (!(*sig = OPENSSL_malloc(sizeof(unsigned char) * (*slen))))
    goto err;
if (1 != EVP_DigestSignFinal(mdctx, *sig, slen))
    goto err;
Are these just two different ways to do the same thing?
Oh. There's a fairly major difference.
EVP_PKEY_sign() does not hash the data to be signed, and therefore is normally used to sign digests. For signing arbitrary messages, see the EVP_DigestSignInit(3) and EVP_DigestUpdate(3) functions.

Per https://wiki.openssl.org/index.php/Manual:EVP_PKEY_sign(3)
So EVP_PKEY_sign is very likely used under the hood by EVP_DigestSignInit/EVP_DigestSignFinal, and it is intended for applications where the caller supplies an already-computed digest (or otherwise prepares the exact block to be signed) rather than the raw message.
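To make the relationship concrete, here is a minimal sketch (not from the question; error handling is abbreviated, and the helper name sign_digest_example and its pkey/msg parameters are assumptions for illustration) that hashes the message itself and then signs the digest with EVP_PKEY_sign. With an RSA key, PKCS#1 v1.5 padding and SHA-256, this should yield the same signature bytes as the EVP_DigestSign sequence in the question, which does the hashing internally:

#include <openssl/evp.h>
#include <openssl/rsa.h>
#include <openssl/crypto.h>

/* Sketch: compute the SHA-256 digest ourselves, then sign it with
 * EVP_PKEY_sign. Returns 1 on success, 0 on failure. */
int sign_digest_example(EVP_PKEY *pkey,
                        const unsigned char *msg, size_t msglen,
                        unsigned char **sig, size_t *siglen)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    EVP_PKEY_CTX *ctx = NULL;
    int ok = 0;

    /* 1. Hash the message (this is the step EVP_DigestSign* does for you). */
    if (!EVP_Digest(msg, msglen, md, &mdlen, EVP_sha256(), NULL))
        goto err;

    /* 2. Sign the digest, exactly as in the first snippet above. */
    ctx = EVP_PKEY_CTX_new(pkey, NULL);
    if (!ctx
            || EVP_PKEY_sign_init(ctx) <= 0
            || EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_PADDING) <= 0
            || EVP_PKEY_CTX_set_signature_md(ctx, EVP_sha256()) <= 0)
        goto err;

    /* First call with a NULL output buffer reports the required length. */
    if (EVP_PKEY_sign(ctx, NULL, siglen, md, mdlen) <= 0)
        goto err;
    if (!(*sig = OPENSSL_malloc(*siglen)))
        goto err;
    if (EVP_PKEY_sign(ctx, *sig, siglen, md, mdlen) <= 0)
        goto err;

    ok = 1;
err:
    EVP_PKEY_CTX_free(ctx);
    return ok;
}

The caller is expected to OPENSSL_free(*sig) when finished. If you have the whole message in memory, EVP_DigestSign* is the simpler choice; EVP_PKEY_sign is the right tool when you only have the digest, for example when the hash was computed elsewhere.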