I'm trying to encrypt and decrypt Strings in Java and PHP, so I use AES/CFB8/NoPadding, which should work on both sides.
My Cryptor works quite well with Latin characters as long as I set the Charset to Latin. But when I set it to UTF-8 (which is what I use in my DB), the encryption is not done properly.
Here is the output:
/*
* Latin (ISO-8859-1) output:
* Original: MiiiMüäöMeeʞ
* Encoded: rQ¶[ÉÐRíD
* Decoded: MiiiMüäöMee?
*
* UTF-8 output:
* Original: MiiiMüäöMeeʞ
* Encoded: rQ�[�
* Decoded: Mii0SS1])_�ELJI�S�;�W��W?*
*/
Since "ʞ" is not a latin character, I understand it can not be encrypted properly. But why does UTF-8 not work?
import java.nio.charset.Charset;
import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class Cryptor {

    private Cipher cipher;
    private String secretKey = "1234567890qwertz";
    private String iv = "1234567890qwertz";
    private SecretKey keySpec;
    private IvParameterSpec ivSpec;
    private Charset CHARSET = Charset.forName("ISO-8859-1"); // ISO-8859-1 vs. UTF-8

    public Cryptor() throws CryptingException {
        // key and IV are derived directly from the string bytes
        keySpec = new SecretKeySpec(secretKey.getBytes(CHARSET), "AES");
        ivSpec = new IvParameterSpec(iv.getBytes(CHARSET));
        try {
            cipher = Cipher.getInstance("AES/CFB8/NoPadding");
        } catch (NoSuchAlgorithmException e) {
            throw new SecurityException(e);
        } catch (NoSuchPaddingException e) {
            throw new SecurityException(e);
        }
    }

    public String decrypt(String input) throws CryptingException {
        try {
            cipher.init(Cipher.DECRYPT_MODE, keySpec, ivSpec);
            // turn the ciphertext String back into bytes, decrypt, and decode as CHARSET
            return new String(cipher.doFinal(input.getBytes(CHARSET)), CHARSET).trim();
        } catch (IllegalBlockSizeException e) {
            throw new SecurityException(e);
        } catch (BadPaddingException e) {
            throw new SecurityException(e);
        } catch (InvalidKeyException e) {
            throw new SecurityException(e);
        } catch (InvalidAlgorithmParameterException e) {
            throw new SecurityException(e);
        }
    }

    public String encrypt(String input) throws CryptingException {
        try {
            cipher.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
            // encrypt, then re-interpret the raw cipher bytes as a String in CHARSET
            return new String(cipher.doFinal(input.getBytes(CHARSET)), CHARSET).trim();
        } catch (InvalidKeyException e) {
            throw new SecurityException(e);
        } catch (InvalidAlgorithmParameterException e) {
            throw new SecurityException(e);
        } catch (IllegalBlockSizeException e) {
            throw new SecurityException(e);
        } catch (BadPaddingException e) {
            throw new SecurityException(e);
        }
    }

    public static void main(String[] args) {
        try {
            Cryptor c = new Cryptor();
            String original = "MiiiMüäöMeeʞ";
            System.out.println("Original: " + original);
            String encrypted = c.encrypt("MiiiMüäöMeeʞ");
            System.out.println("Encoded: " + encrypted);
            System.out.println("Decoded: " + c.decrypt(encrypted));
        } catch (CryptingException e) {
            e.printStackTrace();
        }
    }

    class CryptingException extends RuntimeException {

        private static final long serialVersionUID = 7123322995084333687L;

        public CryptingException() {
            super();
        }

        public CryptingException(String message) {
            super(message);
        }
    }
}
I think turning the encrypted bytes into a String is a bad idea. Those bytes are not valid in any character encoding; they are effectively random. You need to encode the resulting byte[] as Base64 to get a consistent outcome. See sun.misc.BASE64Encoder / sun.misc.BASE64Decoder.
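To illustrate why the round trip through a String is lossy, here is a minimal, standalone sketch (the byte values are arbitrary stand-ins for cipher output, not taken from the question):

import java.nio.charset.Charset;
import java.util.Arrays;

public class RoundTripDemo {
    public static void main(String[] args) {
        Charset utf8 = Charset.forName("UTF-8");

        // Arbitrary cipher-like bytes: 0x92 and 0xB6 are not valid UTF-8 sequences on their own.
        byte[] cipherBytes = { (byte) 0x72, (byte) 0x51, (byte) 0x92, (byte) 0xB6 };

        // Decoding replaces each invalid byte with U+FFFD, so information is lost...
        String asText = new String(cipherBytes, utf8);

        // ...and encoding back does not reproduce the original bytes.
        byte[] roundTripped = asText.getBytes(utf8);

        System.out.println(Arrays.equals(cipherBytes, roundTripped)); // prints false
    }
}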
Here is an example of decoding a base64 String to a byte[]; the reverse process is very similar. You can declare the decoder/encoder at the top of the class:
private final BASE64Decoder base64Decoder = new BASE64Decoder();
private final BASE64Encoder base64Encoder = new BASE64Encoder();
Then in your decrypt method you need to call
return new String(cipher.doFinal(base64Decoder.decodeBuffer(input)), CHARSET);
And in your encrypt method
return base64Encoder.encode(cipher.doFinal(input.getBytes(CHARSET)));
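Putting those two changes together, the revised methods might look like the sketch below. It assumes the fields and CHARSET from the class above, plus imports for java.io.IOException, java.security.GeneralSecurityException, sun.misc.BASE64Decoder and sun.misc.BASE64Encoder; note that decodeBuffer can throw an IOException, which also needs handling:

private final BASE64Decoder base64Decoder = new BASE64Decoder();
private final BASE64Encoder base64Encoder = new BASE64Encoder();

public String encrypt(String input) throws CryptingException {
    try {
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, ivSpec);
        // Encode the raw cipher bytes as Base64 text instead of forcing them through a charset.
        return base64Encoder.encode(cipher.doFinal(input.getBytes(CHARSET)));
    } catch (GeneralSecurityException e) {
        throw new SecurityException(e);
    }
}

public String decrypt(String input) throws CryptingException {
    try {
        cipher.init(Cipher.DECRYPT_MODE, keySpec, ivSpec);
        // Base64-decode back to the exact cipher bytes, then decrypt and decode as CHARSET.
        return new String(cipher.doFinal(base64Decoder.decodeBuffer(input)), CHARSET);
    } catch (GeneralSecurityException e) {
        throw new SecurityException(e);
    } catch (IOException e) {
        throw new SecurityException(e);
    }
}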
Output using UTF-8:
Original: MiiiMüäöMeeʞ
Encoded: clEUtlv2ALXsKYw4ivOfwQ==
Decoded: MiiiMüäöMeeʞ
One side note is that it's not, strictly speaking, good practice to use packages from sun.*, as they are not part of the Java spec and hence may change or vanish from version to version. Here is a little article on moving to the Apache Commons Codec implementation. There is also a similar class in the Guava API.
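For example, the Base64 step with Apache Commons Codec might look like this small, self-contained sketch (assuming commons-codec is on the classpath; the byte values are again just placeholders for cipher output):

import java.util.Arrays;

import org.apache.commons.codec.binary.Base64;

public class CommonsCodecBase64Demo {
    public static void main(String[] args) {
        byte[] cipherBytes = { (byte) 0x72, (byte) 0x51, (byte) 0x92, (byte) 0xB6 };

        // Encoding: what encrypt would return instead of a raw String of cipher bytes.
        String encoded = Base64.encodeBase64String(cipherBytes);
        System.out.println(encoded); // clGStg==

        // Decoding: recovers the exact cipher bytes to hand to cipher.doFinal in decrypt.
        byte[] decoded = Base64.decodeBase64(encoded);
        System.out.println(Arrays.equals(cipherBytes, decoded)); // true
    }
}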